Dataset schema (column · dtype · range):

forum_id          stringlengths     9–20
forum_title       stringlengths     3–179
forum_authors     sequencelengths   0–82
forum_abstract    stringlengths     1–3.52k
forum_keywords    sequencelengths   1–29
forum_decision    stringclasses     22 values
forum_pdf_url     stringlengths     39–50
forum_url         stringlengths     41–52
venue             stringclasses     46 values
year              stringdate        2013-01-01 00:00:00 – 2025-01-01 00:00:00
reviews           sequence
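The length and sequence bounds in the schema can be checked mechanically for any record. A minimal validator sketch (bounds copied from the schema above; the sample record reuses values from the record below, with the author list and abstract abridged for illustration):

```python
# Minimal per-record validator for the column ranges listed in the schema.
def parse_k(v: str) -> int:
    """Handle the dump's '3.52k' style length notation."""
    return int(float(v[:-1]) * 1000) if v.endswith("k") else int(v)

LENGTH_BOUNDS = {                 # column -> (min, max) length
    "forum_id": (9, 20),
    "forum_title": (3, 179),
    "forum_authors": (0, 82),     # sequence length
    "forum_abstract": (1, parse_k("3.52k")),
    "forum_keywords": (1, 29),
    "forum_pdf_url": (39, 50),
    "forum_url": (41, 52),
}

def violations(record: dict) -> list:
    """Return (column, length) pairs that fall outside the schema bounds."""
    bad = []
    for col, (lo, hi) in LENGTH_BOUNDS.items():
        n = len(record[col])
        if not lo <= n <= hi:
            bad.append((col, n))
    return bad

# Spot-check with values from the record below (abridged where long).
record = {
    "forum_id": "9nUBh4V6SA",
    "forum_title": "Hierarchically Encapsulated Representation for Protocol Design in Self-Driving Labs",
    "forum_authors": ["Yu-Zhe Shi", "Mingchen Liu"],  # abridged: 8 authors in full
    "forum_abstract": "Self-driving laboratories have begun to replace human experimenters...",
    "forum_keywords": ["Self-driving laboratories", "protocol design"],
    "forum_pdf_url": "https://openreview.net/pdf?id=9nUBh4V6SA",
    "forum_url": "https://openreview.net/forum?id=9nUBh4V6SA",
}
print(violations(record))  # → []
```

An empty list means the record is consistent with the published column statistics; a non-empty list pinpoints the offending column.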
9nUBh4V6SA
Hierarchically Encapsulated Representation for Protocol Design in Self-Driving Labs
[ "Yu-Zhe Shi", "Mingchen Liu", "Fanxu Meng", "Qiao Xu", "Zhangqian Bi", "Kun He", "Lecheng Ruan", "Qining Wang" ]
Self-driving laboratories have begun to replace human experimenters in performing single experimental skills or predetermined experimental protocols. However, as the pace of idea iteration in scientific research has been intensified by Artificial Intelligence, the demand for rapid design of new protocols for new discoveries becomes evident. Efforts to automate protocol design have been initiated, but the capabilities of knowledge-based machine designers, such as Large Language Models, have not been fully elicited, likely due to the absence of a systematic representation of experimental knowledge, as opposed to isolated, flattened pieces of information. To tackle this issue, we propose a multi-faceted, multi-scale representation, where instance actions, generalized operations, and product flow models are hierarchically encapsulated using Domain-Specific Languages. We further develop a data-driven algorithm based on non-parametric modeling that autonomously customizes these representations for specific domains. The proposed representation is equipped with various machine designers to manage protocol design tasks, including planning, modification, and adjustment. The results demonstrate that the proposed method can effectively complement Large Language Models in the protocol design process, serving as an auxiliary module in the realm of machine-assisted scientific exploration.
[ "Self-driving laboratories", "protocol design", "automated design", "domain-specific language" ]
Accept (Poster)
https://openreview.net/pdf?id=9nUBh4V6SA
https://openreview.net/forum?id=9nUBh4V6SA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xQMLvBDJzQ", "xHQSVhTbWa", "vVFf0eKzfI", "uGZQZ4D8IO", "u1mzn7vYsb", "tt5QZlCERr", "sLwzcL2ANE", "qul7IHntX9", "qqgLuqUGc4", "qYOGcdUtyl", "qBtqEJgzjE", "nabfefww1c", "m6DM906P4v", "kuNDHVZDLm", "k7XG5aZ24H", "j50GbZQ0NJ", "io0i9q1jNc", "hynnX7SINq", "hi98p8W6Pb", "hKx402jora", "foADscgmgd", "fgxIpM2skd", "ebo34yjN9Q", "eQmJOSo4Nr", "bJBVJaeDR1", "ZfnNK2aIZ1", "UmY0xu0dz9", "UZOSF6R079", "ToeAjoZG4R", "R4qjmgLrAH", "QZSxEGgQo9", "PMNvYE7Unl", "PAk5MoYGwJ", "OquyErBkJi", "NiYMSDatV0", "Mk3LlaWmSq", "IqXnUVr8RV", "I3HF8Q3ksX", "GKhvXYwJcV", "GAEBW9Wz61", "FfHpb5IHox", "AzqQFODlPc", "AMvWB5Oes2", "84z5zurpX3", "4sFsp93UaG", "17zF6sX5hp", "0YJhTevzJg" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731034166564, 1732554725391, 1732762389989, 1732772212373, 1732555324709, 1730681038108, 1732554658224, 1732934119180, 1732934826159, 1732762312669, 1732554935180, 1732555680037, 1732958243946, 1732554984913, 1732555783028, 1732958188862, 1732555954091, 1732555253823, 1732556112103, 1732554764283, 1732554857377, 1732554697947, 1732556057155, 1737523395747, 
1732556005592, 1732555727344, 1732555529728, 1732555492166, 1732555586751, 1734637563266, 1732772289421, 1732555382040, 1730675687633, 1732958133721, 1732555443067, 1732958079126, 1732555890364, 1732934435574, 1732554502712, 1730514386294, 1732555160466, 1732554893560, 1732933727731, 1732555633750, 1732555218049, 1732555040675, 1732555824120 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission433/Reviewer_BhtL" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_Xeav" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_WJzq" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_WJzq" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_WJzq" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_Xeav" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Area_Chair_Rxkf" ], [ 
"ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_LMMv" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_WJzq" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_Xeav" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Reviewer_WJzq" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ], [ "ICLR.cc/2025/Conference/Submission433/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This article studies for automatic experiment protocol design, representations in different levels of semantics : the protocol element instantialization with elementary operation representation, function abstraction as a sequential representation of the operation, and a model abstraction which specifies reagent and intermediate products.\\nThe authors describe the 3 representations, propose an algorithm to automatically generate new protocols and demonstrate the creativity of the protocol generators based on the representation used.\\n\\nThe work can be framed as the hierarchical representation of policies for an MDP, trained from a natural language corpus, and used to generate new MDPs. The strength of this work lies in the fact that the methodology is tested across several experimental domains. 
However, the formalisation is not sound.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"The article ambitiously represents a synthetic representation of experimental protocols across different experimental sciences : Genetics, Medical, Bioengineering and Ecology.\\nThe article methodically studies the automatic computation of the representation, and then assesses the utility of the protocols generated.\", \"weaknesses\": \"* The article lacks a sound formalisation of the problem.\\n\\n** While the issue is to represent actions (experimental operations) at different levels of hierarchy, to model the change in the environmental reagents, there is no hint of using the formalisations of Markov Decision Processes or hierarchical actions/reinforcement learning.\\nWhile the three levels of representations can be easily formalised using the MDP formalisation (for instance in terms of actions/policies, options and waypoint states/subgoals), the proposed formalisation seems imprecise.\\n\\n** while the word \\\"planning\\\" is used several times in the text, it is only at l.444 that the authors make planning tasks explicit as \\\"the exploration\nof novel experimental goals\\\". This definition is confusing. While \\\"planning\\\" generally refers to finding the succession of actions to obtain a predefined goal, exploring novel goals is different, and can be referred to as \\\"goal babbling\\\" for instance.\\n\\n** In 2.2. the vocabulary used is confusing: a precondition is generally a property of the state that allows you to carry out your operations. It is a distinct notion from an input.\\n\\n** Some terms are not defined. For instance : execution context (in l.190 which is used differently from execution condition) and key-value pairs (l193)\\n\\n\\n* While the authors present comparative results of their representations and algorithms, there is no comparison with other approaches. 
How do the results compare quantitatively or qualitatively to the state of the art ?\\n\\n*While the authors mention as objectives the \\\"exploration of novel goals\\\", \\\"generating novel experimental objectives\\\" and aim to measure \\\"a protocol's novelty\\\", their metrics is only based on similarity measures. Diversity is not mentioned in the criteria. Could you add diversity measures or argue how the proposed metrics take into account diversity ?\", \"minor_comments\": [\"Figures need a description to better understand what is shown\", \"l.234: \\\"Any status transition of the product flow is caused, and is only caused, by the effects of operations\\\". This stance excludes general evolving systems. What about dynamic systems, including with slow transformations, especially in biology or ecology?\", \"l.440 : \\\"the three scenarios of protocol design introduced in sec 1\\\". It seems sec 1 introduces 3 representations/levels of encapsulation.\", \"The results should report computing load\"], \"typos\": [\"l317 : \\\"is consist of \\\"\", \"l399: \\\"to cover as rich context as possible\\\"\", \"l420 : \\\"we report and analysis\\\"\"], \"questions\": \"In section 2.1, why is an experimental objective specified both by the final product and the final operation ? I believe the specification of the final product is the objective, and the final operation is in most cases, only the means. In the example of a test of the significance of a hypothesis, I would re-formulate the objective as the observation of a property that confirms or contradicts the hypothesis. A definition of a hypothesis based on an operation is restrictive on the generalisation of hypotheses. The authors write in l/225 \\\"operations are the methods to realize rather than the objectives to achieve\\\".\\n\\nWhy were other experimental domains, such as chemistry or physics left out ?\\n\\nIn section 4, I do not quite understand how the testing set is evaluated. 
Under which criteria/input/prompt is each protocol generated ? What is then the corresponding ground truth ?\", \"flag_for_ethics_review\": \"['No ethics review needed.', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"This work relies on the analysis of scientific corpus describing experimental protocols. However they declare that the corpora complies with open access policies.\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer #Xeav - 3\", \"comment\": \"> It is unclear why LLMs are chosen for this purpose. This is essentially a big design-space exploration problem with the added opportunity of using a physical actor in purposeful experimentation. A similarly apt approach could be using digital twins with traditional learning methods, e.g., reinforcement learning and obtain a formally modeled protocol.\\n\\nThis is a very good question. The same consideration was evaluated during the decision-making process of designing our representation. It seems that we can formulate the protocol design problem in the fashion of Markov Decision Process (MDP) and solve it by heuristic-based planning methods or Hierarchical Reinforcement Learning (HRL) approaches. However, although the formulation itself is feasible, solving the problem may not be practical. Consider solving the problem through an HRL approach designed for heterogeneous action space with parameters (as the protocol is required to decide both the key properties of an operation and the corresponding values). This hierarchical agent may be trained to converge on a fine-grained environment with a clearly designed reward function, or on a large dataset with trajectories for offline learning. Unfortunately, we have access to neither an interactive environment simulating the experiments nor sufficient data to support offline training [1]. 
\\n\\nTreating the experimental procedures as a white box and creating digital twins for experiments can be an elegant solution and thereby facilitate various applications other than protocol design. This effort requires elaborated design of simulation granularity, exhaustive collection of primitive principles of the system, efficient implementation of rule production, and the definition of precise metrics for evaluating the distance between current and objective states (serving as a reward function), which can be labor-intensive and is far out of the scope of this work. On the other hand, viewing those published protocols as trajectories for offline training, the scale of the offline dataset and the density of the reward function are far too limited to support training to convergence. Augmenting the data, synthesizing realistic trajectories, or enhancing the accessibility of protocols is out of the scope of this work. Given the current obstacles, we choose not to formulate the problem in an MDP fashion. Though an MDP-style formulation can be more precise and elegant, it may misguide the readers to some extent. Instead, we decide to leverage the rich domain-specific knowledge provided by knowledge-based agents such as Large Language Models (LLMs), where knowledge may complement the lack of data and dense reward function. This design choice is also in line with the initial attempts on automatic experiment design [2, 3]. \\n\\nIn summary, our design choice of formulation is a compromise based on currently limited resources and restricted scope. Nonetheless, the exploration of more precise and elegant formulations represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard. We have made the revisions to convey these insights.\", \"references\": \"[1] Pateria, S., Subagdja, B., Tan, A. H., & Quek, C. (2021). Hierarchical reinforcement learning: A comprehensive survey. 
ACM Computing Surveys (CSUR), 54(5), 1-35.\\n\\n[2] Boiko, D. A., MacKnight, R., Kline, B., & Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624(7992), 570-578.\\n\\n[3] M. Bran, A., Cox, S., Schilter, O., Baldassari, C., White, A. D., & Schwaller, P. (2024). Augmenting large language models with chemistry tools. Nature Machine Intelligence, 1-11.\"}", "{\"comment\": \"Thank you for the clarification. I recommend elaborating on this aspect in the discussion of the work in the camera-ready version.\"}", "{\"comment\": \"Thank you for your suggestion. We have incorporated the discussion on design choice into the 'Additional Remarks' section. We are continuously working to further elaborate on this part.\"}", "{\"title\": \"Response to reviewer #LMMv - 6.4\", \"comment\": \"**BioEng**\\n| Method | IoU(Op) mean (std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | ------------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- |\\n| FB | 0.176 (0.085, 0.007, 0.016) | 0.048 (0.089, 0.008, 0.016) | 0.05 (0.081, 0.007, 0.015) | 0.3 (0.084, 0.007, 0.015) | 0.79 (0.09, 0.008, 0.016) | 0.826 (0.042, 0.002, 0.008) |\\n| IB | 0.149 (0.077, 0.006, 0.014) | 0.05 (0.087, 0.008, 0.016) | 0.038 (0.091, 0.008, 0.017) | 0.286 (0.078, 0.006, 0.014) | 0.767 (0.083, 0.007, 0.015) | 0.797 (0.046, 0.002, 0.008) |\\n| II | 0.352 (0.151, 0.023, 0.028) | 0.062 (0.09, 0.008, 0.016) | 0.066 (0.187, 0.035, 0.034) | 0.443 (0.125, 0.016, 0.023) | 0.81 (0.073, 0.005, 0.013) | 0.86 (0.045, 0.002, 0.008) |\\n| EI | 0.565 (0.164, 0.027, 0.03) | 0.307 (0.186, 0.034, 0.034) | 0.31 (0.249, 0.062, 0.045) | 0.603 (0.169, 0.029, 0.031) | 0.851 (0.072, 0.005, 0.013) | 0.93 (0.033, 0.001, 0.006) |\\n| EI+ | 0.657 (0.209, 0.044, 0.038) | 0.577 (0.177, 0.031, 0.032) | 0.394 (0.241, 0.058, 0.044) | 0.743 (0.179, 0.032, 
0.033) | 0.888 (0.056, 0.003, 0.01) | 0.944 (0.041, 0.002, 0.007) |\\n| EE | 0.558 (0.162, 0.026, 0.03) | 0.392 (0.214, 0.046, 0.039) | 0.303 (0.246, 0.061, 0.045) | 0.598 (0.165, 0.027, 0.03) | 0.855 (0.076, 0.006, 0.014) | 0.933 (0.03, 0.001, 0.005) |\\n| EE+ | 0.653 (0.206, 0.042, 0.038) | 0.614 (0.172, 0.03, 0.031) | 0.401 (0.246, 0.061, 0.045) | 0.742 (0.176, 0.031, 0.032) | 0.9 (0.046, 0.002, 0.008) | 0.945 (0.041, 0.002, 0.007) |\\n\\n> However, it could be improved even more by relating it to existing work that creates structured workflows using LLMs, and relating it to the more general problem of getting LLMs to structure their outputs.\\n\\nThanks for this insightful suggestion. In this work, our proposed representation structures the information into multiple granularities from coarse- to fine-grained, including operations, reagents, devices, and their corresponding parameters. By using machines' externalized language in parallel with humans' internalized language [1], the hierarchical structure of information can be precisely captured, resulting in at least rational output in the local context, namely, a value for the key at least lies in its permissible range of value. By contrast, end-to-end natural language representation flattens the hierarchical structure of information. Although humans are able to recognize them thanks to internalized language [2], machines may generate \\\"not even wrong\\\" irrelevant content with misinterpreted information structures [3]. We have revised the discussion sections to connect these ideas.\", \"references\": \"[1] Chomsky, N. (2007). Approaching UG from below. Interfaces+ recursion= language, 89, 1-30. \\n\\n[2] Chomsky, N. (1956). Three models for the description of language. IRE Transactions on information theory, 2(3), 113-124.\\n\\n[3] Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., ... & Shi, S. (2023). Siren's song in the AI ocean: a survey on hallucination in large language models. 
arXiv preprint arXiv:2309.01219.\\n\\n> Very limited discussion of the results.\\n\\nThanks for the comment. Our discussion covers three typical aspects of method evaluation: (i) discussion on the contributions of different building blocks to the performance of the proposed method; (ii) discussion on the scalability of the proposed method from relatively simple tasks to relatively complicated ones; and (iii) discussion on the generalizability of the proposed method towards different application domains. We acknowledge that due to space limitations, these aspects are not sufficiently extended. Furthermore, some other critical aspects, such as the influence of automated protocol design on human experts and the limitations of the current solution, are not covered. We have made the revisions extending the insightful discussions on these topics, to improve the comprehensibility of the paper.\"}", "{\"summary\": \"This paper addresses the challenge of automated protocol design for conducting scientific experiments. While it is currently possible to automatically execute predefined protocols, it remains a challenge how to design new protocols or modify existing ones to achieve novel experimental objectives. The authors propose a hierarchical representation framework that enables automated protocol design by capturing both procedural and domain knowledge at multiple levels of abstraction. Specifically, the framework consists of three hierarchical levels: (1) a basic level that breaks down protocols into individual actions with their specific attributes (like timing, temperature, etc.), (2) an operation-centric view that groups and generalizes these actions based on their purpose, and (3) a product-flow-centric view that tracks how materials change and interact throughout the experiment. These three levels are implemented using Domain-Specific Languages (DSLs) that help verify and ensure the correctness of protocols. 
The framework includes an automated method to generate these representations for different scientific domains using (mostly) non-parametric modeling, allowing it to learn and adapt to different types of experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The premise of the paper is extremely compelling - enabling AI agents (specifically LLMs) with a structured representation of expert knowledge that allows them to harness their generative capabilities to create novel experimental protocols.\\n\\n2. Representing the task of modeling protocols as conditional probabilities is particularly elegant, as it enables better control and parameter learning compared to black-box approaches like neural networks, primarily because this representation naturally captures the logical dependencies between experimental steps. \\n\\n3. The dual verification system combining product and operation views is especially clever, as it allows simultaneous optimization of protocol operations while ensuring each step both builds on previous products and generates viable outputs for subsequent steps.\\n\\n4. The comprehensive evaluation across multiple scientific domains with real human experts demonstrates the practical utility and broad applicability of the method.\", \"weaknesses\": \"1. The language is often very abstract and abstruse. This is sometimes expected because the topic itself is very abstract. But making it simpler would enable the reader to appreciate the contributions more if there was more clarity in explanations.\\n\\n2. The work suggests it uses LLMs for the protocol design and it is indeed mentioned where LLMs are used. There are no details, however\\non how exactly LLMs are employed in the suggested framework (i.e. do they receive the input from the DSL? or is the DSL a step along the way). It is somewhat intuitive, but an explicit description would be helpful. 
I appreciated the details in the Appendix but this should be mentioned in the main text.\\n\\n3. It is not clear how scalable this approach is for more difficult protocols. It is not also clear for someone outside of the tested areas if there even are more difficult protocols. This could be addressed.\", \"other_points\": [\"It would be very helpful to start with a motivational example of some particular case of research design like the one in Figure 1. Currently, it reads very abstract. But the Figure is helpful.\", \"Lines 92-107 are supposed to briefly summarize the mechanism introduced by authors but it is very hard to comprehend. It would be good to correspond these to Figure 1B.\"], \"questions\": \"1. Can you provide more intuition on what the interface \\\\phi is? Is that simply a set of possible experiments for a given operation (such as the given \\u201chomogenization\\u201d)? Or a set of functions? Why is it operationalized the way it was operationalized?\\n\\n2. The motivating examples and descriptions seem to be very heavily concerned with natural sciences. Is it possible to extend your framework to, say psychology or experimentation in computer science itself?\\n\\n3. How scalable is the framework for more complex protocols?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer #Xeav - 1\", \"comment\": \"> Why LLMs and not some other AI method? This would allow for a better positioning of the work.\\n\\nThis is a very good question. The same consideration was evaluated during the decision-making process of designing our representation. It seems that we can formulate the protocol design problem in the fashion of Markov Decision Process (MDP) and solve it by heuristic-based planning methods or Hierarchical Reinforcement Learning (HRL) approaches. 
However, although the formulation itself is feasible, solving the problem may not be practical. Consider solving the problem through an HRL approach designed for heterogeneous action space with parameters (as the protocol is required to decide both the key properties of an operation and the corresponding values). This hierarchical agent may be trained to converge on a fine-grained environment with a clearly designed reward function, or on a large dataset with trajectories for offline learning. Unfortunately, we have access to neither an interactive environment simulating the experiments nor sufficient data to support offline training [1]. \\n\\nTreating the experimental procedures as a white box and creating digital twins for experiments can be an elegant solution and thereby facilitate various applications other than protocol design. This effort requires elaborated design of simulation granularity, exhaustive collection of primitive principles of the system, efficient implementation of rule production, and the definition of precise metrics for evaluating the distance between current and objective states (serving as a reward function), which can be labor-intensive and is far out of the scope of this work. On the other hand, viewing those published protocols as trajectories for offline training, the scale of the offline dataset and the density of the reward function are far too limited to support training to convergence. Augmenting the data, synthesizing realistic trajectories, or enhancing the accessibility of protocols is out of the scope of this work. Given the current obstacles, we choose not to formulate the problem in an MDP fashion. Though an MDP-style formulation can be more precise and elegant, it may misguide the readers to some extent. Instead, we decide to leverage the rich domain-specific knowledge provided by knowledge-based agents such as Large Language Models (LLMs), where knowledge may complement the lack of data and dense reward function. 
This design choice is also in line with the initial attempts on automatic experiment design [2, 3]. \\n\\nIn summary, our design choice of formulation is a compromise based on currently limited resources and restricted scope. Nonetheless, the exploration of more precise and elegant formulations represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard. We have made the revisions to convey these insights.\", \"references\": \"[1] Pateria, S., Subagdja, B., Tan, A. H., & Quek, C. (2021). Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5), 1-35.\\n\\n[2] Boiko, D. A., MacKnight, R., Kline, B., & Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624(7992), 570-578.\\n\\n[3] M. Bran, A., Cox, S., Schilter, O., Baldassari, C., White, A. D., & Schwaller, P. (2024). Augmenting large language models with chemistry tools. Nature Machine Intelligence, 1-11.\"}", "{\"comment\": \"This is great, thank you, I appreciate you added the first example to the appendix. It's already very long but I think it would be also helpful if you added the scalability analysis. Seems very promising, good job!\"}", "{\"comment\": \"Thank you for moving Figure 1 higher and referring to it in the main text. I do not think this change is sufficient, however. The description in lines 91-107 should use examples from Figure 1. It would be easier if the theoretical definitions of the levels were cut to the necessary minimum but the intuition behind the levels was conveyed via the figure and examples. 
You could move more detailed description to the Appendix.\"}", "{\"comment\": \"Thank you, this part of the response clarifies things:\\n> However, although the formulation itself is feasible, solving the problem may not be practical.\\n\\nI recommend explaining the design choice in the camera-ready version of the paper.\"}", "{\"title\": \"Response to reviewer #LMMv - 3\", \"comment\": \"> The testing set is susceptible for memorization because you are using LLMs. It would be good to have a set of protocols that are after the cutoff dates of the LLM training.\\n\\nThanks for the comment. The same concern was considered during the development of these machine designers. We made the design choice from the following two aspects. On one hand, the benchmark used for evaluation, namely, the groundtruth DSL programs as generated protocols are created and verified by our team of domain experts. This indicates that the benchmark has never been publicly released at the time LLMs were collecting training data. On the other hand, although the high-level experimental objectives can be duplicated on the Internet, the performance of LLM-based protocol designer evaluated in previous work reveals that LLMs cannot exploit such unstructured knowledge to generate fine-grained experimental procedures [1]. \\n\\nWe also employ the broadly accepted standard operating process to empirically verify that LLMs did not memorize the data. We adopted the methodology outlined in Section 5.2 of *Skywork* [2] and drew upon recent studies on detecting memorization in large language models (LLMs) [3, 4]. Specifically, we used gpt-4o mini to synthesize data resembling the style of steps from novel protocols, and then calculated the perplexity on the test set and reference set. Since the reference set was newly generated, we considered it clean, not belonging to any training set of any model.\\n\\nWe randomly sampled 100 sequences each from the test set and the reference set of the novel protocols. 
Each sequence corresponds to a single procedural step described in natural language. We truncated the final 50 tokens of each sequence, retaining the prefixes. These prefixes were then used as prompts for the LLM to predict the next 50 tokens, for which we calculated the perplexity. If the test set\\u2019s perplexity is significantly lower than the reference set\\u2019s, the test set might have appeared in the model\\u2019s training phase.\", \"the_perplexity_results_for_the_test_set_and_reference_set_are_as_follows\": \"| | average_ppl (std, var, stderr) |\\n| ------------- | ------------------------------ |\\n| test set | 1.366 (0.123, 0.015, 0.012) |\\n| reference set | 1.315 (0.116, 0.014, 0.012) |\\n\\n[Comparison between the perplexity of the test set and the reference set](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/ppl.png?v=2263a5a4)\\n\\nThe results indicate that the LLM\\u2019s average perplexity on the test set is significantly higher than that on the reference set ($t(198)=3.040, \\\\mu_d<0, p<.05$; see the figure above), suggesting that the LLM encounters greater uncertainty with the novel protocols in the test set. This finding implies that for a published, widely accepted, and standardized operating process, there is no evidence to suggest that the LLM has memorized the data.\", \"references\": \"[1] O\\u2019Donoghue, O., Shtedritski, A., Ginger, J., Abboud, R., Ghareeb, A., & Rodriques, S. (2023, December). BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 2676-2694).\\n\\n[2] Wei T, Zhao L, Zhang L, et al. Skywork: A more open bilingual foundation model. arXiv preprint arXiv:2310.19341, 2023.\\n\\n[3] Carlini N, Ippolito D, Jagielski M, et al. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646, 2022.\\n\\n[4] Carlini N, Tramer F, Wallace E, et al. 
Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), 2021: 2633-2650.\"}", "{\"title\": \"Response to reviewer #BhtL - 1\", \"comment\": \"> In section 2.1, why is an experimental objective specified both by the final product and the final operation? I believe the specification of the final product is the objective, and the final operation is in most cases, only the means. In the example of a test of the significance of a hypothesis, I would re-formulate the objective as the observation of a property that confirms or contradicts the hypothesis. A definition of a hypothesis based on an operation is restrictive on the generalisation of hypotheses. The authors write in l/225 \\\"operations are the methods to realize rather than the objectives to achieve\\\".\\n\\nThis is a very good question. We originally considered modeling the protocol design problem in an end-to-end fashion. The intuition is that, for experimental objectives such as detecting a predicted behavior or testing a specific hypothesis, the objective not only includes the desired final product, but also the final operation to be conducted upon the final product. This is like an additional step appended to normal protocols, which otherwise end with a final product. Therefore, we have two possible choices for formulation: (i) end-to-end formulation by integrating the additional step into the protocols; (ii) unified formulation with final product only and leaving out the additional step.\\n\\nWe appreciate the modification that the reviewer has suggested. Indeed, the single additional final operation may not be able to sufficiently account for the additional steps for observation and testing. Moreover, the formulation with an objective specified by two variables can be more complicated than that with only one.
In pursuit of generality and succinctness, we choose to follow the reviewer's suggestion and discard the final operation variable from the formulation of the objective. We have made the revisions accordingly. \\n\\n> Why were other experimental domains, such as chemistry or physics left out ?\\n\\nThanks for the question. The primary factor restricting our choice of experimental domains is data accessibility. Our corpora are retrieved from open-sourced websites run by top-tier publishers, including Nature's [Protocolexchange](https://protocolexchange.researchsquare.com/), Cell's [Star-protocols](https://star-protocols.cell.com/), [Bio-protocol](https://bio-protocol.org/en), Wiley's [Current Protocols](https://currentprotocols.onlinelibrary.wiley.com/), and [Jove](https://www.jove.com/). We aggregated the corpora and analyzed the themes of the protocols according to the first- and second-level labels attached to them. This results in the taxonomies of the four major domains: Genetics, Medical and Clinical Research (Medical), Ecology and Environmental Research (Ecology), and Bioengineering. Therefore, we employ these four domains in this study. \\n\\nWe recognize that physics and chemistry are also representative domains of experimental sciences, besides the life-science domains studied here. Due to the higher cost of accessing corpora of physics and chemistry protocols, for example, mining protocols from the \\\"Methods\\\" sections of relevant published papers, we leave the application to physics and chemistry for future work. We have made the revisions to clarify this point.\"}", "{\"comment\": \"Thank you for your suggestion. We will extend this part following your suggestion. We will ground the theoretical definitions of the levels to the examples in Figure 1 (B), to make them more intuitive and, consequently, more accessible.
Additionally, we will move the redundantly detailed description to the Appendix.\"}", "{\"title\": \"Response to reviewer #LMMv - 4\", \"comment\": \"> it would be good to introduce the SA algorithm in more detail so that we understand what the similarity score actually represents.\\n\\nThanks for the suggestion. We treat the operation execution order of each protocol as an ordered sequence of varying lengths. The SA metric evaluates the similarity between two such sequences, accounting for both the similarity of individual operations and the consistency of their execution order. We have made the revisions accordingly.\\n\\n[Algorithm for Sequence Alignment (SA) metric](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/SA.png?v=d162a8df)\\n\\n[Needleman-Wunsch Algorithm](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/Needleman-Wunsch.png?v=db1c2685)\\n\\n[Smith-Waterman Algorithm](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/Smith-Waterman.png?v=dbeab0fb)\\n\\n| GroundTruth | EE+ | EE | II | IB | FB |\\n| ------------------------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| [\\\"Dissolve\\\", \\\"Add\\\", \\\"Add\\\", \\\"Incubate\\\", \\\"Analyze\\\"] | [\\\"Dissolve\\\", \\\"Add\\\", \\\"Add\\\", \\\"Incubate\\\", \\\"Analyze\\\"] **SA = 1.0** | [\\\"Dissolve\\\", \\\"Add\\\", \\\"Incubate\\\", \\\"Add\\\", \\\"Dissolve\\\", \\\"Analyze\\\"] **SA = 0.75** | [\\\"Add\\\", \\\"Incubate\\\", \\\"Collect\\\", \\\"Analyze\\\"] **SA = 0.57** | [\\\"Incubate\\\", \\\"Lyse\\\", \\\"Add\\\", \\\"Run\\\", \\\"Wash\\\", \\\"Extract\\\", \\\"Incubate\\\", \\\"Perform\\\", \\\"Detect\\\", 
\\\"Analyze\\\"] **SA = 0.34** | [\\\"Prepare\\\", \\\"Add\\\", \\\"Terminate\\\", \\\"Cool\\\", \\\"Purify\\\", \\\"Wash\\\", \\\"Elute\\\", \\\"Concentrate\\\", \\\"Prepare\\\", \\\"Dissolve\\\", \\\"Setup\\\", \\\"Inject\\\", \\\"Perform\\\", \\\"Analyse\\\", \\\"Compile\\\"] **SA = 0.23** |\\n| [\\\"Discharge\\\", \\\"Place\\\", \\\"Sit\\\", \\\"Stain\\\"] | [\\\"Make\\\", \\\"Place\\\", \\\"Let\\\", \\\"Stain\\\"] **SA = 0.63** | [\\\"Resuspend\\\", \\\"Place\\\", \\\"Place\\\", \\\"Let\\\", \\\"Stain\\\", \\\"Let\\\", \\\"Dispose\\\"] **SA = 0.47** | [\\\"Prepare\\\", \\\"Glow\\\", \\\"Apply\\\", \\\"Wick\\\", \\\"Adsorb\\\", \\\"Allow\\\", \\\"Stain\\\", \\\"Image\\\"] **SA = 0.25** | [\\\"Cut\\\", \\\"Dip\\\", \\\"Prepare\\\", \\\"Mix\\\", \\\"Allow\\\", \\\"Transfer\\\", \\\"Collect\\\", \\\"Post\\\", \\\"Stain\\\", \\\"Image\\\"] **SA = 0.22** | [\\\"Prepare\\\", \\\"Place\\\", \\\"Dilute\\\", \\\"Pipette\\\", \\\"Incubate\\\", \\\"Remove\\\", \\\"Rinse\\\", \\\"Fix\\\", \\\"Remove\\\", \\\"Wash\\\", \\\"Strain\\\", \\\"Remove\\\", \\\"Rinse\\\", \\\"Complete\\\", \\\"Observe\\\"] **SA = 0.15** |\\n\\n> Lack of baselines from the literature\\n\\nThanks for the question. Automating the design of experiments is a relatively new domain, which was initially introduced by recent works in 2023 [1, 2]. In the previous literature search, we only find the current state-of-the-art work BioPlanner [3], which explicitizes the originally implicit experiment design process in previous works [1, 2]. As we have mentioned in the paper, our baselines are developed based on the methods proposed by these previous works. The Instance-Internal (II) designer is developed based on the state-of-the-art method of BioPlanner. The Flatten-Baseline (FB) and Instance-Baseline (IB) designers are developed based on the baselines being evaluated in [3].\\n\\nWe appreciate the reviewer for pointing this out. 
We have revised the paper to enhance the links between the introduction of these baseline methods in the subsection \\\"Machine designers\\\" and our citations of these previous works in the section \\\"Introduction\\\".\", \"references\": \"[1] Boiko, D. A., MacKnight, R., Kline, B., & Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624(7992), 570-578.\\n\\n[2] M. Bran, A., Cox, S., Schilter, O., Baldassari, C., White, A. D., & Schwaller, P. (2024). Augmenting large language models with chemistry tools. Nature Machine Intelligence, 1-11.\\n\\n[3] O\\u2019Donoghue, O., Shtedritski, A., Ginger, J., Abboud, R., Ghareeb, A., & Rodriques, S. (2023, December). BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 2676-2694).\\n\\n> Figure 3 Caption is not standalone, very small font sizes for xticks and y ticks. The x-axis is not mentioned, which makes interpretation impossible.\\n\\nThanks for pointing this out. We have made the revisions accordingly to enhance the readability of the figure.\\n\\n> Figure 2 labels and ticks are too small, the caption is largely uninformative without the main text\\n\\nThanks for pointing this out. We have made the revisions accordingly to enhance the readability of the figure.\\n\\n> Fig 1 B is unclear\\n\\nThanks for pointing this out. We have made the revisions accordingly to enhance the readability of the figure.\"}", "{\"title\": \"Response to reviewer #BhtL - 3\", \"comment\": \"> While the issue is to represent actions (experimental operations) at different levels of hierarchy, to model the change in the environmental reagents, there is no hint of using the formalisations of Markov Decision Processes or hierarchical actions/reinforcement learning. 
When the three levels of representations can be easily formalised using the MDP formalisation (for instance in terms of actions/policies, options and waypoints states/subgoals), the proposed formalisation seems imprecise.\\n\\nThis is a very good question. The same consideration was evaluated during the decision-making process of designing our representation. It seems that we could formulate the protocol design problem in the fashion of a Markov Decision Process (MDP) and solve it by heuristic-based planning methods or Hierarchical Reinforcement Learning (HRL) approaches. However, although the formulation itself is feasible, solving the problem may not be practical. Consider solving the problem through an HRL approach designed for a heterogeneous action space with parameters (as the protocol is required to decide both the key properties of an operation and the corresponding values). Such a hierarchical agent may be trained to convergence either in a fine-grained environment with a clearly designed reward function, or on a large dataset of trajectories for offline learning. Unfortunately, we have access to neither an interactive environment simulating the experiments nor sufficient data to support offline training [1]. \\n\\nTreating the experimental procedures as a white box and creating digital twins for experiments would be an elegant solution and would thereby facilitate various applications beyond protocol design. However, this effort requires an elaborate design of simulation granularity, exhaustive collection of the primitive principles of the system, efficient implementation of rule production, and the definition of precise metrics for evaluating the distance between current and objective states (serving as a reward function), all of which is labor-intensive and far beyond the scope of this work.
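The parameterized-action MDP framing weighed above can be made concrete with a minimal Python sketch (all names here, such as `Operation`, `State`, and `transition`, are illustrative and not part of the paper); the toy `transition` function stands in for the experiment simulator that is unavailable in practice:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    """A parameterized action, e.g. Incubate with (("duration_h", 72),)."""
    name: str
    params: tuple = ()  # (key, value) pairs the designer must also decide

@dataclass(frozen=True)
class State:
    """Status of the product flow: the set of products currently available."""
    products: frozenset

def transition(state, op):
    """Toy deterministic transition: each operation yields one new product.
    A faithful version would encode the domain-specific effects of `op`
    on reagents -- this is the simulator that is missing in practice."""
    return State(state.products | {f"product_of_{op.name}"})

def rollout(initial, plan):
    """Execute a candidate protocol (a sequence of operations) step by step."""
    state = initial
    for op in plan:
        state = transition(state, op)
    return state
```

Without a trustworthy `transition` (or offline trajectories dense enough to learn one), planning over this MDP cannot be trained to convergence, which is exactly the obstacle described above.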
On the other hand, viewing those published protocols as trajectories for offline training, the scale of the offline dataset and the density of the reward signal are far too limited to support training to convergence. Augmenting the data, synthesizing realistic trajectories, or enhancing the accessibility of protocols are out of the scope of this work. Given these obstacles, we choose not to formulate the problem in an MDP fashion. Though an MDP-style formulation can be more precise and elegant, it might mislead readers to some extent. Instead, we decide to leverage the rich domain-specific knowledge provided by knowledge-based agents such as Large Language Models (LLMs), whose knowledge may compensate for the lack of data and of a dense reward function. This design choice is also in line with the initial attempts at automatic experiment design [2, 3]. \\n\\nIn summary, our choice of formulation is a compromise given currently limited resources and a restricted scope. Nonetheless, the exploration of more precise and elegant formulations represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard. We have made the revisions to convey these insights.\", \"references\": \"[1] Pateria, S., Subagdja, B., Tan, A. H., & Quek, C. (2021). Hierarchical reinforcement learning: A comprehensive survey. ACM Computing Surveys (CSUR), 54(5), 1-35.\\n\\n[2] Boiko, D. A., MacKnight, R., Kline, B., & Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624(7992), 570-578.\\n\\n[3] M. Bran, A., Cox, S., Schilter, O., Baldassari, C., White, A. D., & Schwaller, P. (2024). Augmenting large language models with chemistry tools. Nature Machine Intelligence, 1-11.\"}", "{\"comment\": \"Thank you for your suggestion.
We will incorporate this two-sentence information into Section 4.3 \\\"Machine designers\\\".\"}", "{\"title\": \"Response to reviewer #BhtL - 5.2\", \"comment\": \"**Planning**\\n| Original protocol (title) | Original protocol (description) | Novel protocol (title) | Novel protocol (description) | Interpretation |\\n| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| Protocol for the determination of intracellular phase separation thresholds | The objective of this protocol is to determine the thresholds for intracellular phase separation by quantifying the relationship between GFP intensity and stress granule initiation time in G3BP knockout U2OS cells. This approach enables the analysis of endogenous protein levels and their correlation with phase separation behavior, ultimately contributing to the understanding of the mechanisms underlying stress granule assembly and other membrane-less organelles. | Personality assessment protocol | This molecular biology protocol aims to assess the personality of western mosquitofish (Gambusia affinis) by evaluating their boldness, activity, and sociability using well-established experimental approaches. The protocol measures boldness as the latency to emerge from the shelter, activity by counting the number of squares crossed in the test arena, and sociability by determining the time spent near a group of conspecifics. | The two protocols differ completely: the old examines cellular phase separation in human cells, while the new focuses on fish personality behavior, showcasing distinct objectives, systems, and methods. 
|\\n| Electrophysiological measurements of synaptic connectivity and plasticity in the longitudinal dentate gyrus network from mouse hippocampal slices | The objective of this protocol is to obtain acute longitudinal dentate gyrus slices from mouse hippocampi in order to measure extracellular excitatory postsynaptic potentials (fEPSPs) and to investigate the synaptic connectivity and plasticity of granule cells within the longitudinal dentate gyrus network. This involves techniques such as whole-cell patch clamping and two-photon imaging to explore the interactions between neighboring dentate gyrus granule cells and their synaptic relationships. | Animal models for depression-like and anxiety-like behavior | The objective of this scientific protocol is to investigate depression-like and anxiety-like behaviors in animal models, specifically mice and rats, using various behavioral assays such as the Forced Swim Test, Tail Suspension Test, Elevated Plus Maze, Open Field Test, and Novelty Induced Hypophagia. The protocol aims to assess the effects of pharmacological interventions on these behaviors to better understand the underlying mechanisms of depression and anxiety. | The two protocols differ entirely: the old examines synaptic connectivity in hippocampal slices via electrophysiology, while the new uses behavioral assays to study depression and anxiety in live animals, reflecting distinct objectives and methodologies. |\\n| Protocol for oral transplantation of maternal fecal microbiota to newborn infants born by cesarean section | The objective of this protocol is to facilitate the oral transplantation of maternal fecal microbiota to newborn infants delivered via cesarean section, aiming to restore beneficial gut microbiota that may be lacking due to the mode of delivery. 
This process involves meticulous recruitment, screening, preparation, and administration of the transplant, while ensuring the safety and health of both the mother and infant throughout the procedure. | Continuous monitoring of health data with a wearable device in pediatric patients undergoing chemotherapy for cancer \\u2013 a feasibility pilot study | The objective of this feasibility pilot study is to continuously monitor health data using a wearable device (WD) in pediatric patients undergoing chemotherapy for cancer over a 14-day period. The study aims to assess the acceptance and effectiveness of the WD in this population, including the collection of data regarding side effects, daily activities, and overall experiences with the device. | The two protocols differ significantly: the old focuses on microbiota transplantation for newborn health, while the new explores wearable devices for monitoring pediatric cancer patients, reflecting distinct objectives, populations, and methodologies. 
|\"}", "{\"title\": \"Response to reviewer #LMMv - 6.3\", \"comment\": \"**Ecology**\\n| Method | IoU(Op) mean (std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | ------------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- |\\n| FB | 0.155 (0.085, 0.007, 0.023) | 0.03 (0.035, 0.001, 0.01) | 0.021 (0.048, 0.002, 0.013) | 0.297 (0.088, 0.008, 0.024) | 0.781 (0.096, 0.009, 0.027) | 0.807 (0.056, 0.003, 0.015) |\\n| IB | 0.162 (0.118, 0.014, 0.033) | 0.006 (0.015, 0.0, 0.004) | 0.03 (0.058, 0.003, 0.016) | 0.275 (0.105, 0.011, 0.029) | 0.763 (0.09, 0.008, 0.025) | 0.788 (0.063, 0.004, 0.017) |\\n| II | 0.386 (0.176, 0.031, 0.049) | 0.043 (0.062, 0.004, 0.017) | 0.027 (0.062, 0.004, 0.017) | 0.448 (0.131, 0.017, 0.036) | 0.788 (0.065, 0.004, 0.018) | 0.856 (0.044, 0.002, 0.012) |\\n| EI | 0.458 (0.171, 0.029, 0.047) | 0.259 (0.134, 0.018, 0.037) | 0.351 (0.195, 0.038, 0.054) | 0.514 (0.142, 0.02, 0.039) | 0.879 (0.048, 0.002, 0.013) | 0.933 (0.028, 0.001, 0.008) |\\n| EI+ | 0.411 (0.134, 0.018, 0.037) | 0.569 (0.133, 0.018, 0.037) | 0.359 (0.175, 0.03, 0.048) | 0.586 (0.127, 0.016, 0.035) | 0.888 (0.052, 0.003, 0.015) | 0.945 (0.023, 0.001, 0.006) |\\n| EE | 0.458 (0.171, 0.029, 0.047) | 0.347 (0.151, 0.023, 0.042) | 0.351 (0.195, 0.038, 0.054) | 0.507 (0.138, 0.019, 0.038) | 0.874 (0.048, 0.002, 0.013) | 0.934 (0.029, 0.001, 0.008) |\\n| EE+ | 0.414 (0.142, 0.02, 0.039) | 0.581 (0.141, 0.02, 0.039) | 0.346 (0.177, 0.031, 0.049) | 0.586 (0.131, 0.017, 0.036) | 0.91 (0.035, 0.001, 0.01) | 0.944 (0.024, 0.001, 0.007) |\\n\\n**Medical**\\n| Method | IoU(Op) mean (std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | ------------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- | 
--------------------------- |\\n| FB | 0.174 (0.085, 0.007, 0.016) | 0.048 (0.07, 0.005, 0.013) | 0.03 (0.067, 0.004, 0.013) | 0.312 (0.075, 0.006, 0.014) | 0.796 (0.087, 0.007, 0.017) | 0.839 (0.043, 0.002, 0.008) |\\n| IB | 0.139 (0.054, 0.003, 0.01) | 0.029 (0.038, 0.001, 0.007) | 0.023 (0.063, 0.004, 0.012) | 0.264 (0.045, 0.002, 0.009) | 0.721 (0.123, 0.015, 0.024) | 0.795 (0.07, 0.005, 0.013) |\\n| II | 0.373 (0.093, 0.009, 0.018) | 0.081 (0.087, 0.008, 0.017) | 0.091 (0.205, 0.042, 0.039) | 0.424 (0.072, 0.005, 0.014) | 0.776 (0.097, 0.009, 0.019) | 0.871 (0.041, 0.002, 0.008) |\\n| EI | 0.604 (0.167, 0.028, 0.032) | 0.322 (0.146, 0.021, 0.028) | 0.309 (0.253, 0.064, 0.049) | 0.594 (0.148, 0.022, 0.029) | 0.861 (0.079, 0.006, 0.015) | 0.932 (0.031, 0.001, 0.006) |\\n| EI+ | 0.615 (0.196, 0.039, 0.038) | 0.574 (0.242, 0.059, 0.047) | 0.4 (0.198, 0.039, 0.038) | 0.758 (0.149, 0.022, 0.029) | 0.871 (0.06, 0.004, 0.012) | 0.952 (0.021, 0.0, 0.004) |\\n| EE | 0.591 (0.158, 0.025, 0.03) | 0.373 (0.166, 0.028, 0.032) | 0.298 (0.234, 0.055, 0.045) | 0.583 (0.149, 0.022, 0.029) | 0.873 (0.054, 0.003, 0.01) | 0.936 (0.03, 0.001, 0.006) |\\n| EE+ | 0.615 (0.197, 0.039, 0.038) | 0.613 (0.21, 0.044, 0.04) | 0.39 (0.202, 0.041, 0.039) | 0.756 (0.151, 0.023, 0.029) | 0.891 (0.04, 0.002, 0.008) | 0.955 (0.019, 0.0, 0.004) |\"}", "{\"title\": \"Response to reviewer #BhtL - 6\", \"comment\": \"> Figures need a description to better understand what is shown\\n\\nThanks for pointing these out. We have revised the paper to improve the readability of the figures. We have also revised the references to the figures in the main text.\\n\\n> l.234: \\\"Any status transition of the product flow is caused, and is only caused, by the effects of operations\\\". This stance excludes general evolving systems. What about dynamic systems, including with slow transformations, especially in biology or ecology?\\n\\nThanks for the comment. 
As our objective is to automate the design of protocols for self-driving laboratories, we decided to treat these machine-executable operations as *monitors*. For example, the operation \"cultivate\" is not an impulse-style instantaneous action; it may leave the cells alone for 72 hours, during which the cells undergo slow transformations. To capture this, \"cultivate\" carries a property of *duration = 72 hours*, thereby covering the process of transformation. By incorporating these interval-style features into operations [1], we try to treat all systems as static systems.\\n\\nHowever, we recognize that the claim of \"any status transition\" is overly ambitious. There may exist general evolving systems that fall outside our consideration. Therefore, we relaxed this claim to maintain the rigor of the paper. We thank the reviewer for pointing this out and have revised the paper accordingly.\", \"references\": \"[1] Kuipers, B. (1994). Qualitative reasoning: modeling and simulation with incomplete knowledge. MIT press.\\n\\n> l.440 : \\\"the three scenarios of protocol design introduced in sec 1\\\". It seems sec 1 introduces 3 representations/levels of encapsulation.\\n\\nThanks for pointing out this ambiguity. The phrase \\\"the three scenarios of protocol design introduced in sec 1\\\" refers to the three purposes of experiment design requirements: (i) confirmation of unverified objectives to seek specific findings; (ii) testing parallel hypotheses or solutions; and (iii) replication of established experiments within the constraints of available laboratory resources. \\n\\nWe thank the reviewer for pointing this out. We have revised the paper accordingly to enhance the clarity of this cross-reference.\\n\\n> The results should report computing load\\n\\nThanks for the comment.
For automated representation generation, we primarily used GPT-4o mini with OpenAI\\u2019s Batch API for preprocessing, incurring a cost of approximately \\\\$60 across four domains. The design of the DSL was executed on a MacBook with an M2 chip, running 1,000 iterations to ensure convergence. This process required an average of 55 seconds per iteration for the operation-centric view DSL and an average of 2 seconds per iteration for the product-centric view DSL. For the machine designer, we primarily utilized GPT-4o mini combined with RAG for design, with a total cost of approximately \\\\\\\\$10 (7 methods, 140 protocols). In summary, the overall computational load is relatively low, highlighting the accessibility of our machine designers when utilizing the proposed representations and the corresponding automatic representation generation modules. We have made the revisions to clarify this point.\\n\\n> Typos\\n\\nThanks for pointing out these typos. We have made the revisions accordingly.\"}", "{\"title\": \"Response to reviewer #Xeav - 4\", \"comment\": \"> I find it somewhat concerning that a costly and potentially hazardous endeavor such as robotized experimentation with chemicals and biologically active materials is approached at the level of LLMs and the safety concerns are not discussed. For example, certification of protocols through formal verification might not be possible in this approach.\\n\\nThanks for the insightful comment. We totally agree with the reviewer, in particular on the observation that LLMs can be much too uncontrollable for engineering practices such as lab automation, which may lead to unpredictable dangerous situations [1]. There comes a dilemma --- we try to exploit the capability of reasoning over knowledge of LLMs, while we try to alleviate the drawbacks brought up by the uncontrollable nature of LLMs. Our proposed representation is dedicated to resolving the dilemma. 
The representations not only elicit LLMs' potential for protocol design through structural knowledge representation, but also serve as a guardrail for LLMs. Since the generated protocols are represented as corresponding DSL programs, the permissible output space is much more confined than that of pure LLMs, imposing constraints upon the LLM-generated protocols. Thanks to the verification mechanisms provided by DSLs, the correctness of the generated protocols can be checked to some extent. Therefore, by equipping LLMs with such an auxiliary module of constraints, we may approach a balance between knowledge utilization and precision.\\n\\nHowever, the current verification at the level of DSL programs is far from sufficient to serve as a certification. Certification is a serious process, in which any possibility of reporting false positive cases must be eliminated. Some cases are highly long-tailed, and may not be detected by data-driven and knowledge-driven machine certifiers. In this context, human domain experts are responsible for anticipating these potential risks through their experience and tacit knowledge. Therefore, we are not likely to move human experts out of the loop, unless we can efficiently build appropriate digital twins for self-driving laboratories. In current practice, the automation of protocol design puts human experts into a larger loop without focusing on the low-level details of experiments. As a result, they are allowed more time for high-level thinking on matters such as values, which are unlikely to be taken over by machines. In summary, it is neither practical nor necessary to totally move human experts out of the loop of automatic scientific discovery. The investigation of human-machine coordination in protocol certification represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard.
We have extended the discussion accordingly.\", \"references\": \"[1] Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., ... & Shi, S. (2023). Siren's song in the AI ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219.\"}", "{\"title\": \"Response to reviewer #LMMv - 1\", \"comment\": \"> Line 44: What is a \\\"protocol portal\\\"? Such terminology, if not generally entrenched in the suggested domain, should be introduced.\\n\\nThanks for pointing this out. The term \\\"protocol portal\\\" refers to the open-sourced websites run by top-tier publishers, including Nature's [Protocolexchange](https://protocolexchange.researchsquare.com/), Cell's [Star-protocols](https://star-protocols.cell.com/), [Bio-protocol](https://bio-protocol.org/en), Wiley's [Current Protocols](https://currentprotocols.onlinelibrary.wiley.com/), and [Jove](https://www.jove.com/). As these websites are essentially protocol databases, we changed the expression to \\\"protocol databases\\\", which seems to be more common, to enhance the clarity of the paper.\\n\\n> Line 199: \\\"The total amount of the instance actions can be extremely high, i.e., about 150K per domain,\\\" how do you arrive at this number?\\n\\nThanks for the question. We randomly sampled approximately 10% of the protocols from the large corpus in each domain and used the method of BioPlanner to generate corresponding pseudofunctions and pseudocode [1]. Each unique function name represents an instance action. We then counted the number of instance actions in the sampled subset and extrapolated the total number of instance actions for each domain within the large corpus. Since the corpus we used encompasses the majority of protocols in each domain, these estimates are considered to be fairly accurate. We have made the revisions to clarify this point.\", \"references\": \"[1] Bartley, B., Beal, J., Rogers, M., Bryce, D., Goldman, R. P., Keller, B., ... & Weston, M. (2023). 
Building an open representation for biological protocols. ACM Journal on Emerging Technologies in Computing Systems, 19(3), 1-21.\\n\\n[2] Abelson, H., & Sussman, G. J. (1996). Structure and interpretation of computer programs (p. 688). The MIT Press.\"}", "{\"title\": \"Response to reviewer #Xeav - 2\", \"comment\": \"> How can the generated protocols be certified? This would be a much-appreciated, brief discussion point.\\n\\nThis is a very good question. Certification is always one of the central concerns in the engineering practice of automation. In our work, we only automate the process of protocol design, which is the primary objective of this work, and keep certification manual. On one hand, relieving experimental scientists of labour-intensive protocol design tasks, thereby allowing them more time for high-level thinking, is already a significant improvement. On the other hand, engineering practices such as lab automation and manufacturing demand high precision, which leads to the requirement of manual certification. Domain experts handle subtle cases through their tacit domain-specific knowledge and are responsible for their decisions [1]. Based on these considerations and the standard operating processes of experimental sciences, we choose to have the designed protocols certified by domain experts.\\n\\nOur current choice is a compromise between present technical limitations and the demand for precision. In future work, we plan to investigate how to build digital twins of self-driving laboratories. Such digital twins would support prediction, explanation, and counterfactual analysis of unseen behaviors of the experiments, which may facilitate machine-based protocol certification.
Grounding these blue-sky thoughts necessitates addressing the challenging problems regarding the decision of simulation granularity, the implementation of data-efficient simulation model construction, and the injection of tacit domain-specific knowledge. In summary, the exploration of generated-protocol-certification by machines represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard. We have extended the discussion accordingly.\", \"references\": \"[1] Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., ... & Zitnik, M. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620(7972), 47-60.\"}", "{\"title\": \"Response to reviewer #BhtL - 5.4\", \"comment\": \"**Adjustment**\\n| Original protocol (title) | Original protocol (description) | Novel protocol (title) | Novel protocol (description) | Interpretation |\\n| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| Re-using Criterion plastic precast gel cassettes for SDS-polyacrylamide electrophoresis. | The objective of this protocol is to outline the steps for re-using Criterion plastic precast gel cassettes to prepare and pour SDS-polyacrylamide gels using agarose, ensuring proper sealing and preventing leaks during the gel formation process. It provides detailed instructions for assembling the cassettes, preparing the agarose solution, and performing tests to confirm the integrity of the gel structure before electrophoresis. | 1% Agarose Gel Electrophoresis Prep | The objective of this molecular biology protocol is to prepare a 1% agarose gel for electrophoresis and utilize it for genomic DNA quality checking. 
This protocol details the steps for gel preparation, sample preparation, and gel loading, allowing researchers to assess the quality of their DNA samples. | Both protocols utilize gel electrophoresis, but they differ in their specific objectives: the old protocol focuses on reusing precast gels for protein analysis, while the novel protocol prepares agarose gels for DNA quality assessment. The core technique is similar, but the purpose and context differ, indicating an adjustment rather than a significant modification. |\\n| Immunohistochemistry and in situ hybridization protocols | The objective of this protocol is to perform immunohistochemistry on cryostat sections and in situ hybridization using T7-PCR based probes to visualize specific proteins and mRNA expression patterns in tissue samples. This combined approach allows researchers to study the localization and abundance of target molecules within their biological context. | Rtl1 and CD31 double-immunohistochemistry | The objective of the 'Rtl1 and CD31 double-immunohistochemistry' protocol is to detect and visualize the expression of Rtl1 (Peg11) and CD31 (Pecam-1) in placental tissue sections using a dual fluorescence approach. This method allows for the localization and assessment of these proteins in the context of placental morphology under a fluorescence microscope. | Both protocols focus on visualizing specific molecular markers in tissue sections using immunohistochemistry techniques. The old protocol combines immunohistochemistry with *in situ* hybridization, while the novel protocol applies a dual fluorescence method to target specific proteins (Rtl1 and CD31). Although the markers and detection methods differ, the overall approach and objectives are similar, making this an adjustment within the scope of existing techniques. 
|\\n| Functional and Morphological Assessment of Diaphragm Innervation by Phrenic Motor Neurons | The objective of this protocol is to assess the functional and morphological characteristics of diaphragm innervation by phrenic motor neurons through compound muscle action potential (CMAP) recordings and detailed analysis of neuromuscular junction (NMJ) morphology in the hemi-diaphragm of rats. By quantifying the presence and conditions of NMJs, such as intactness and innervation status, the protocol aims to elucidate the integrity of phrenic nerve innervation and potential compensatory mechanisms in response to denervation. | Measuring Diaphragm Thickness and Function Using Point-of-Care Ultrasound | The objective of this protocol is to measure diaphragm thickness and function during tidal breathing and maximal inspiratory efforts using point-of-care ultrasound, allowing for assessment of diaphragm contractility and respiratory mechanics. This technique facilitates the evaluation of diaphragm thickness (Tdi) and thickening fraction (TFdi), providing valuable insights into respiratory muscle performance in various clinical settings. | Both protocols are focused on assessing diaphragm function, but they use different methods: the old protocol involves electrophysiological recordings and morphological analysis of neuromuscular junctions, while the novel protocol utilizes point-of-care ultrasound to measure diaphragm thickness and contractility. Despite the differences in techniques, the overall objective of studying diaphragm functionality and health remains similar, indicating an adjustment in methodology rather than a major shift in research focus. 
|\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to reviewer #BhtL - 5.3\", \"comment\": \"**Modification**\\n| Original protocol (title) | Original protocol (description) | Novel protocol (title) | Novel protocol (description) | Interpretation |\\n| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| low Virometry for Characterizing the Size, Concentration, and Surface Antigens of Viruses | The objective of this protocol is to employ flow virometry (FVM) to characterize the size, concentration, and surface antigens of viral particles, specifically focusing on the replicative murine leukemia virus (MLV) that expresses an envelope-superfolder GFP fusion protein. This technique facilitates the detection and quantification of viral particles by combining light scatter and fluorescence measurements, thereby enabling detailed analysis of viral populations produced through various methods. | Wet-mount Method for Enumeration of Aquatic Viruses | The objective of this scientific protocol is to provide a low-cost alternative method, called wet-mount, for enumerating aquatic viruses using epifluorescence microscopy. This method is efficient, rapid, precise, and appropriate for a wide range of viral concentrations that may be encountered in field and laboratory samples. | Both protocols focus on viral characterization but differ in techniques: the old uses flow virometry for particle analysis, while the new employs a wet-mount method with epifluorescence microscopy for aquatic viruses, adapting to different samples and equipment. 
|\\n| The sequential isolation of metabolites, RNA, DNA, and proteins from a single, undivided mixed microbial community sample | The protocol aims to sequentially isolate and purify metabolites, RNA, DNA, and proteins from a single, undivided mixed microbial community sample, facilitating comprehensive biochemical analyses. By preserving the integrity of each biomolecular fraction throughout the extraction process, the workflow allows for a detailed investigation of microbial community composition and function. | BCP-MG: A Web Server for Predicting Bacterial Community of Metagenome | The protocol outlines the steps for using the BCP-MG web server to predict the bacterial community composition of metagenomic samples based on uploaded enzyme or reaction data. It enables researchers to choose between different metabolic databases and organism selection strategies to refine their predictions and analyze the microbial diversity within their samples. | Both protocols analyze microbial communities but differ in methods and objectives: the old uses sequential biomolecule extraction, while the new employs a web server to predict community composition from metagenomic data, shifting from experimental to computational approaches. |\\n| Molecular profile to cancer cell line matchmaking | The objective of this protocol is to establish a systematic approach for pairing cancer cell lines based on their molecular profiles, specifically focusing on shared therapeutic sensitivities and genomic similarities. By employing various analysis models and metrics, the protocol aims to enhance the understanding of molecular features that correlate with treatment response in cancer therapies. 
| A multistep computational procedure to identify candidate master Transcriptional Regulators (TRs) of glioblastoma (GBM) | The objective of this protocol is to identify candidate master transcriptional regulators (TRs) of glioblastoma (GBM) by reconstructing a regulatory network using gene expression data and epigenetic information. This process involves scoring the TRs based on their regulatory activity in GBM stem cells and differentiating cells, ultimately identifying those with significant regulatory effects on gene expression. | Both protocols analyze molecular features in cancer but differ in focus and methods: the old matches cancer cell lines by molecular profiles for therapy, while the new identifies transcriptional regulators in glioblastoma via network reconstruction, shifting the analytical approach. |\"}", "{\"title\": \"Response to reviewer #BhtL - 2\", \"comment\": \"> In section 4, I do not quite understand how the testing set is evaluated. Under which criteria/input/prompt is each protocol generated ? What is then the corresponding ground truth ?\\n\\nThanks for the question. We illustrate the criteria, input, prompt, and the corresponding ground truth of the evaluation process as follows. 
We have updated the appendix with these running examples.\\n\\n[Running example 1](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/example1.1.png?v=93587d92)\\n\\n[Running example 2](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/example1.2.png?v=8ca0c344)\\n\\n[Running example 3](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/example1.3.png?v=3ca1a684)\\n\\n**Interpretation**\\n\\nExample 1\\n\\n| | Strengths | Weakness |\\n| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| EE+ | The method provides detailed tracking of material states (e.g., \\\"State\\\": \\\"Frozen\\\"), which ensures precise control over the experimental conditions. | |\\n| EE | | The inclusion of \\\"Lysis_Buffer\\\" as an input in the preconditions is incorrect. |\\n| II | | This method lacks key parameters and conditions for effective grinding, such as the device type (e.g., \\u201cMortar_and_Pestle\\u201d) and the desired output state (e.g., \\u201cFine_Powder\\u201d). |\\n| FB | | The workflow is overly complex, with preliminary steps like sample collection and centrifugation adding unnecessary complications to a simple grinding process. |\\n\\nExample 2\\n\\n| | Strengths | Weakness |\\n| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| EE+ | The method accurately matches the temperature and incubation conditions of 37\\u00b0C, ensuring consistency with the protocol, and correctly tracks the flow of components from treatment to hydrolysate production, aligning well with the intended experimental workflow. | |\\n| EE | | The incubation time is incorrect, set to 1 hour instead of the required 3 hours. 
Additionally, the use of a generic sample input (\\u201cIncubated_Sample-1\\u201d) instead of specifying RNase_T2 treatment suggests a lack of alignment with the experimental preconditions. |\\n| II | | The incubation time is incorrectly set to 30 minutes instead of the required 3 hours. Additionally, the use of \\u201chydrolyzed RNAs\\u201d as the input mixture is inaccurate, as the incubation step should involve RNase_T2 treatment to produce the hydrolysate, not process an already hydrolyzed sample. |\\n| FB | | The incubation time is incorrectly set to 30 minutes instead of the required 3 hours. Additionally, the use of \\u201crna_samples\\u201d as the input lacks specificity, as it should refer to the RNase_T2-treated samples according to the protocol. |\\n\\nExample 3\\n\\n| | Strengths | Weakness |\\n| ---- | ------------------------------------------------------------ | ------------------------------------------------------------ |\\n| EE+ | The temperature and time parameters are accurately specified. | The configuration lacks the inversion count parameter. |\\n| EE | | The specified temperature is incorrect. The protocol requires incubation at 65\\u00b0C to ensure optimal reaction conditions, but the generated configuration uses 37\\u00b0C, which is insufficient and may compromise the reaction\\u2019s effectiveness. |\\n| II | | The incubation time is incorrect. The experiment requires a duration of 30 minutes for optimal results, but the generated configuration specifies 1 hour. |\\n| FB | | The specified incubation time is incorrect; it should be 30 minutes instead of 1 hour. Extending the incubation to 1 hour may negatively impact the sample integrity and alter the reaction dynamics, potentially leading to suboptimal experimental results. |\"}", "{\"title\": \"Response to reviewer #WJzq - 3.1\", \"comment\": \"> The language is often very abstract and abstruse. This is sometimes expected because the topic itself is very abstract. 
But making it simpler would enable the reader to appreciate the contributions more if there was more clarity in explanations.\\n\\nThanks for the comment. The same concern was raised when we were writing the paper. We have tried to make the expressions as concrete and intuitive as possible. However, as the reviewer has mentioned, the topic is intrinsically abstract because we are describing a scientific problem abstracted from real-world applications. We appreciate the reviewer for pointing this out. We are trying our best in the revision process to enhance the accessibility of the writing.\\n\\n> The work suggests it uses LLMs for the protocol design and it is indeed mentioned where LLMs are used. There are no details, however on how exactly LLMs are employed in the suggested framework (i.e. do they receive the input from the DSL? or is the DSL a step along the way). It is somewhat intuitive, but an explicit description would be helpful. I appreciated the details in the Appendix but this should be mentioned in the main text.\\n\\nThanks for the question. \\n1. We begin by retrieving DSL instructions that are potentially relevant to the target protocol;\\n2. Next, we combine the title, description, and DSL instructions as a prompt for the LLM.\\n\\nHere is an example of a prompt used for EI and the initial stage of EE methods.\\n```\\nYour goal is to generate plan in domain specific language (DSL) for biology protocols.\\nThe DSL specifications related to the operations involved in the experiment are provided. The DSL specification of each operation consists of multiple patterns, each pattern is an operation execution paradigm.\\nOutput each operation of the plan in the form of a DSL program. Each DSL program is a dictionary. 
The final plan consists of the program of each step and is returned in a json block, without any annotation.\n\nHere is an example of how to generate plan in DSL for a biology protocol.\nExample: {example protocol title}\nHere are some extra details about the protocol: This molecular biology protocol aims to extract high molecular weight genomic DNA from coral sperm using a method based on RNAse and ProteinaseK treatment, followed by phenol/chloroform extraction. The protocol prioritizes purity and minimal damage to the DNA, making it suitable for downstream genetic analyses.\nExample plan in DSL: {example plan}\nYour task: Generate plan in DSL for a protocol for High Molecular Weight genomic DNA from coral sperm.\nYou can choose to instantiate the following DSL specification to construct the DSL program:\n{\n \\\"Grind\\\": [\n {\n \\\"pattern\\\": {\n \\\"Precond\\\": {\n \\\"SlotArgNum\\\": 1,\n \\\"SlotArg\\\": [\n \\\"Liquid\\\"\n ]\n },\n \\\"Execution\\\": [\n {\n \\\"DeviceType\\\": \\\"centrifuge\\\",\n \\\"Config\\\": {\n \\\"time\\\": [\n \\\"3 - 5 sec\\\"\n ]\n }\n },\n {\n \\\"DeviceType\\\": \\\"mortar and pestle\\\",\n \\\"Config\\\": {}\n }\n ],\n \\\"Postcond\\\": {\n \\\"EmitArgNum\\\": 1,\n \\\"EmitArg\\\": [\n \\\"Solid\\\"\n ]\n }\n },\n \\\"examples\\\": [\n \\\"Grind again as in step B3 ( make sure the water is not frozen before grinding ) .\\\",\n \\\"grind tissue with a mortar and pestle in the presence of liquid nitrogen .\\\",\n \\\"Grind the tissue to a fine powder by using mortar and pestle using liquid nitrogen .\\\",\n \\\"The material before the grinding process ( Before ) and the fully grinded material ( After ) DNA extractionBriefly centrifuge the CTAB treated samples for 3 - 5 sec .\\\"\n ]\n }\n ],\n ...\n {The rest of Operation-view DSL specification}\n}\nYour plan in DSL program:\n```\"}", "{\"title\": \"Response to reviewer #WJzq - 2\", 
\"comment\": \"> The motivating examples and descriptions seem to be very heavily concerned with natural sciences. Is it possible to extend your framework to, say psychology or experimentation in computer science itself?\\n\\nThanks for the question. In theory, our framework can be applied to any field that requires adherence to specific protocols and has a need for automated execution. As an example, consider an automated kitchen controlled by a computer, which we provide here for your reference:\\n1. Assuming the automated kitchen\\u2019s computer is already programmed to prepare \\u201cbraised pork ribs\\u201d and \\u201csteamed sea bass\\u201d:\\n **Braised Pork Ribs**\\n```Plain\\nBraised Pork Ribs\\n1.Select pork ribs as the main ingredient.\\n2.Heat a pan over high heat.\\n3.Add the ribs to the pan and fry for about 5 minutes until they are browned.\\n4.Add seasonings: soy sauce and sugar.\\n5.Reduce the heat to medium.\\n6.Simmer the ribs for 30 minutes until tender.\\n7.Serve hot.\\n```\\n```Plain\\nSTART\", \"select_ingredient\": \"sea bass\", \"action\": \"simmer, temperature: medium, time: 20 min\\nEND\\n```\\n> How scalable is the framework for more complex protocols?\\n\\nThis is a very good question. Following the convention of experimental sciences, more complex protocols can be referred to longer protocols. Here we profile the length distribution of the groundtruth in our test set across the four domains. On this basis, we select one of the longest protocols to demonstrate the scalability of the framework. \\n\\nNumber of steps (corresponding to the length of the ground-truth pseudocode program): min=2, avg=12.62, max=33\\n\\n[Steps of novel protocols across four domains](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/len_count_program.png?v=7d71ce56)\\n\\nWe select a complex protocol as an example to demonstrate the scalability of our framework. 
This protocol consists of 26 steps in its pseudocode program and 131 steps in its natural language procedure. Below, we present a fragment of the generated results from our best approach alongside several baseline methods:\\n\\n[Complex example](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/example2.png?v=c742c1a1)\\n\\n1. Overall, our framework demonstrates strong performance even when handling complex protocols;\\n2. When facing long and complex protocols, the effect will be influenced by the hallucination of LLM:\\n 1. The machine designers may select devices different from those specified in the ground truth to complete the experiment;\\n 2. The consistency of LLM-generated design results may decrease, underscoring the importance of validating LLM outputs through DSL.\\n\\nThe more rigorous analysis of scalability represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard. We have included this additional discussion in the revised version.\", \"add_seasoning\": \"soy sauce, sugar\"}", "{\"title\": \"Response to reviewer #WJzq - 3.2\", \"comment\": \"Here is an example of a prompt used for EI+ and the initial stage of EE+ methods.\\n```\\nYour goal is to generate plan in domain specific language (DSL) for biology protocols.\", \"two_perspectives_of_the_dsl_specification_are_provided\": \"the specification for experimental operations and the specification for experimental products.\\nThe DSL specification of each operation or product consists of multiple patterns, each pattern is an operation execution paradigm or a product flow paradigm.\\nOutput every operation of the plan in the form of an operation DSL program and every product of the plan in the form of a product DSL program.\\nEach DSL program is a dictionary. 
The final plan consists of the program of each step and product and is returned in a json block, without any annotation.\n\nHere is an example of how to generate plan in DSL for a biology protocol.\nExample: {example protocol title}\nHere are some extra details about the protocol: This molecular biology protocol aims to extract high molecular weight genomic DNA from coral sperm using a method based on RNAse and ProteinaseK treatment, followed by phenol/chloroform extraction. The protocol prioritizes purity and minimal damage to the DNA, making it suitable for downstream genetic analyses.\nExample plan in DSL: {example plan}\nYour task: Generate plan in DSL for a protocol for High Molecular Weight genomic DNA from coral sperm.\nYou can choose to instantiate the following DSL specifications to construct the DSL program:\nOperation-view DSL specification:\n{\n \\\"Place\\\": [\n {\n \\\"pattern\\\": {\n \\\"Precond\\\": {\n \\\"SlotArgNum\\\": 1,\n \\\"SlotArg\\\": [\n \\\"Physical Object\\\"\n ]\n },\n \\\"Execution\\\": [\n {\n \\\"DeviceType\\\": \\\"pasteur pipet\\\",\n \\\"Config\\\": {}\n }\n ],\n \\\"Postcond\\\": {}\n },\n \\\"examples\\\": [\n \\\"Place the clipped cartridges into the grid box.\\\",\n \\\"( B ) The needle bevel is placed against the side of the 15 mL tube and the liquid gently layered on top of the previous density .\\\",\n \\\"Place a cover slide ( 24 x 60 mm ) on top of your samples .\\\"\n ]\n }\n ],\n ...\n {The rest of Operation-view DSL specification}\n}\nProduct-view DSL specification:\n{\n \\\"suspended semen\\\": {\n \\\"Pred\\\": \\\"Transfer Operations\\\",\n \\\"FlowUnit\\\": {\n \\\"Component\\\": \\\"suspended semen\\\",\n \\\"ComponentType\\\": \\\"Liquid\\\",\n \\\"Vol\\\": [\n \\\"one drop\\\"\n ],\n \\\"Container\\\": [\n \\\"slide\\\"\n ],\n \\\"Cond\\\": {}\n },\n \\\"Succ\\\": \\\"Detection and Measurement Operations\\\"\n },\n ...\n {The rest of Product-view DSL specification}\n}\nYour plan in DSL program:\n```\n\nWe have referenced this in the main text in the revised version to enhance the accessibility of the paper.\"}", "{\"metareview\": \"This paper introduces a framework for automated scientific protocol design in self-driving labs. Unlike existing systems that only execute predefined protocols, this system automatically designs new ones using a hierarchical representation which captures protocols at three levels: actions, operations, and material flows, each encoded using domain-specific languages. The resulting protocols are verifiable (although only manually), and the system is extensively tested across three tasks in four different domains. The hierarchical representation consistently outperforms an ablation without the suggested hierarchy.\\n\\nAll reviewers agreed that this paper is tackling an interesting and important challenge which is well motivated in the introduction. Several reviewers also felt that the particular method described here was elegant and clever, and appreciated the extensive evaluation of the method across domains and tasks.\\n\\nSeveral reviewers also mentioned that there were parts of the paper's presentation which could be improved. For many of these points, the authors adapted the paper during the revision period to address some of these presentation points, including better situating the work in some of the relevant literature. However, the authors should ensure that the remaining points are addressed before the camera ready deadline.\\n\\nGiven that all reviewers agreed that the paper tackles an important challenge with a novel method validated by experiments, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The paper was significantly updated during the rebuttal period. 
As mentioned above, several reviewers brought up issues of presentation (especially the figure captions, problem formalization, and results), and the authors changed the paper accordingly (including adding large results sections to the appendix, adding expanded and descriptive captions for all figures, and completely revising the discussion section). The updated paper is significantly better than the initially submitted version.\"}", "{\"comment\": \"Thank you for your suggestion. We have incorporated the discussion on protocol certification into the 'Additional Remarks' section. We are continuously working to further elaborate on this part.\"}", "{\"title\": \"Response to reviewer #LMMv - 7\", \"comment\": \"> Unclear whether they consider the problem setting to be a contribution. It would be important to contrast the problem formulation with that of prior work, such as that of O\\u2019Donoghue et al. (2023). In particular, whether their formulation of the protocol design problem (not the representation itself) differs in any meaningful way. This is important because the identification of the protocol design problem is cited as a contribution on line 147.\\n\\nThanks for the comment. As the reviewer has mentioned, the protocol design problem has been explored by previous works. Differently, our major contribution is identifying the problem of representation for protocol design, which was not explicitly considered by the previous work. The type of representation is the pivot variable we are controlling in the evaluation. 
In comparison with our compared baseline approaches, including the original natural language-based text representation, i.e., FB; the instance actions with attributes representation developed upon BioPlanner [1], i.e., IB and II; and the operation-centric view-only representation, i.e., EI and EE, the results indicate that our proposed approaches with the dual-representation of operation- and product-flow-centric views, i.e., EI+ and EE+, significantly outperform their counterparts with alternative representations (EE+ vs. EE: $t(278) = 8.007, \\\\mu_d < 0, p < .0001$; EI+ vs. EI: $t(278) = 8.397, \\\\mu_d < 0, p < .0001$; EE+ vs. II: $t(278) = 24.493, \\\\mu_d < 0, p < .0001$; EI+ vs. II: $t(278) = 23.855, \\\\mu_d < 0, p < .0001$; EE vs. II: $t(278) = 16.315, \\\\mu_d < 0, p < .0001$; EI vs. II: $t(278) = 15.259, \\\\mu_d < 0, p < .0001$; II vs. FB: $t(278) = 8.340, \\\\mu_d < 0, p < .0001$; also see the figures below). This result suggests that representation can be a key factor to the extent we are able to elicit the potential of knowledge-based machine designers like LLMs on protocol design. Therefore, the problem of representation for protocol design should be a sufficiently significant contribution, as we have stated on line 147.\\n\\n[Comparison between the capabilities of different machine designers across the six dimensions](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/redar_2.png?v=a97ddaac)\\n\\nTo clarify, the protocol design problem part is not essentially distinct from that introduced in the previous work [1], and we did not identify it as a major contribution. Since there are only natural language-based empirical descriptions of the protocol design problem, we provide the symbolic notations of the protocol design problem, in order to set up the foundation of the problem formulation of representation for protocol design. 
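For readers who want to reproduce this style of comparison, the pairwise tests above are standard paired t-tests over per-protocol scores. Below is a minimal sketch with synthetic numbers (not the paper's data; `paired_t` is an illustrative helper, not from the paper's codebase):

```python
# Paired t-test sketch: compare two protocol designers on the same test set.
# Scores below are synthetic placeholders, NOT the paper's measurements.
from math import sqrt
from statistics import mean, stdev

def paired_t(scores_a, scores_b):
    """Return (t, df) for paired samples a and b (t > 0 means a scores higher)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1

# Hypothetical per-protocol scores for a proposed designer vs. a baseline.
proposed = [0.61, 0.55, 0.70, 0.58, 0.66]
baseline = [0.52, 0.49, 0.63, 0.50, 0.60]
t, df = paired_t(proposed, baseline)  # t ≈ 12.35, df = 4
```

Significance is then read off the t-distribution with `df` degrees of freedom (e.g., via standard statistical tooling).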
We have made the revisions to make this point clearer.\", \"references\": \"[1] O\\u2019Donoghue, O., Shtedritski, A., Ginger, J., Abboud, R., Ghareeb, A., & Rodriques, S. (2023, December). BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 2676-2694).\"}", "{\"summary\": [\"The authors propose a hierarchical structured representation of experiment protocols to improve the reliability of novel protocols generated by LLMs with RAG.\", \"The representation is actually a dual representation over operation-centric and flow-centric views of a protocol.\", \"This works because the representation serves as a domain-specific language to generate protocols in a bipartite graph-like structure, which enables verification of both the \\\"nodes\\\" and \\\"edges\\\" of the protocol, improving its reliability.\", \"However, to use this structured representation requires extracting the entities, which is domain-specific. To do this, they use a Dirichlet Process Mixture Model to extract entities from natural language protocols, which they then aggregate for functions but not for components.\", \"To demonstrate the improved viability of protocols generated by this approach, they subselect a test set of 140 protocols from their database with the intent of evaluating across 3 different protocol design tasks and 4 domains. They evaluate generated protocols against the ground truth using six distinct metrics. They compare seven different design approaches which consider different elements of their proposed representation, with and without verification, showing how the different aspects of their approach contribute to its success.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The authors make three contributions:\\n1. 
They present the protocol design problem setting, as well as the hierarchical representation of protocols.\n2. They propose a method to automatically determine the elements of this structured representation from existing protocols.\n3. They perform experiments assessing the value of their representation by testing different methods based on this representation on a varied set of tasks.\n\n# originality\n\nThe proposed representation and method appear quite novel, and the choice of metrics to assess the protocol designers is interesting. However, there is no mention of the extensive literature on problem decomposition for LLMs, e.g. Chain-of-Thought reasoning (Wei et al. (2023)). This makes it difficult to ascertain exactly how novel such a decomposition really is. It would be helpful to compare and contrast to existing methods in the literature both for the applied problem of protocol design (e.g. O\u2019Donoghue et al. (2023)) and the solution of task decomposition (e.g. Wei et al. (2023)). It would also be helpful to explicitly state in the contributions whether this is a new problem formulation, an application of an existing method to a new domain, or both.\n\n# quality\n\nThe conclusions drawn in section 4.5 are the following: \n1. Their proposed approaches outperform naive counterparts.\n2. Dual-view hierarchical representations excel at protocol design tasks.\n3. The approach generalizes across scientific domains.\n\nThe first claim is sound considering the extensive metrics used to assess different approaches. However, it is also shallow compared to the amount of information in Figure 3. On average, it seems that the approach put forward performs well, but there is significantly more variability in their performance. Furthermore, different metrics seem to behave differently, with IoU metrics increasing the tail of good results, and Sim metrics decreasing the tail of bad results. 
It is surprising that this is not discussed in more detail, and that there is no estimator of uncertainty (e.g. std, stderr, variance) in the tables in Appendix A. \\n\\nThe second claim is hard to interpret, as there are no baselines from the literature that are evaluated. As such, it is unclear what is meant by \\\"excel\\\" in this case. Given the highly variable performance values in Figure 3 and the lack of baselines from the literature, it is not possible to conclude this from the available results. You could alternatively state that the dual-view approaches outperform their counterparts across all tasks on average across all metrics.\\n\\nThe third claim is well supported by the experiments across four domains and the aggregate results. That being said, an additional set of results in the appendix that shows results by scientific domain would also be helpful in supporting this claim, especially if these results were to show that the proposed methods performed best across all domains separately (i.e., tables like those in appendix A, but partitioned by domain instead of task).\\n\\n# clarity\\n\\nAs a non-expert on protocol design, I found the problem motivation well done. The task is well situated within its broader context. Figure 1A is enlightening, but Figure 1B is slightly confusing, as the relationships between the different elements could be made clearer. The problem setting is well presented with a judicious use of notation and examples throughout. The experiments are well explained as well.\\n\\n# significance\\n\\nThe motivation put forward in the introduction accurately captures the significance of this work. However, it could be improved even more by relating it to existing work that creates structured workflows using LLMs, and relating it to the more general problem of getting LLMs to structure their outputs.\", \"weaknesses\": [\"# Presentation\", \"Figure 3 Caption is not standalone, very small font sizes for xticks and y ticks. 
The x-axis is not mentioned, which makes interpretation impossible.\", \"Figure 2 labels and ticks are too small, the caption is largely uninformative without the main text\", \"Fig 1 B is unclear\", \"Limitations section is empty (Appendix G)\", \"# Soundness\", \"Line 361: A reference to Fig 2B is insufficient for the claim that function abstraction works. Idem for Fig 2C. The results must be interpreted (see the \\\"Quality\\\" section in strengths for more details).\", \"The testing set is susceptible to memorization because you are using LLMs. It would be good to have a set of protocols from after the cutoff dates of the LLM training.\", \"It would be good to introduce the SA algorithm in more detail so that we understand what the similarity score actually represents.\", \"Lack of baselines from the literature\", \"# Contribution\", \"Very limited discussion of the results.\", \"Unclear whether they consider the problem setting to be a contribution. It would be important to contrast the problem formulation with that of prior work, such as that of O\\u2019Donoghue et al. (2023). In particular, whether their formulation of the protocol design problem (not the representation itself) differs in any meaningful way. This is important because the identification of the protocol design problem is cited as a contribution on line 147.\"], \"questions\": [\"Line 44: What is a \\\"protocol portal\\\"? Such terminology, if not generally entrenched in the suggested domain, should be introduced.\", \"Line 199: \\\"The total amount of the instance actions can be extremely high, i.e., about 150K per domain,\\\" how do you arrive at this number?\", \"How do you track quantities of reagents? 
The algorithm seems to treat them as a set and remove them if used once without any notion of quantity.\", \"Line 504: what is t( )?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your suggestion. We will incorporate scalability analysis into the Appendix. We are continuously working to further elaborate on this part.\"}", "{\"title\": \"Response to reviewer #WJzq - 1\", \"comment\": \"> Can you provide more intuition on what the interface \\\\phi is? Is that simply a set of possible experiments for a given operation (such as the given \\u201chomogenization\\u201d)? Or a set of functions? Why is it operationalized the way it was operationalized?\\n\\nThis is a very good question. The interface is a concept from functional abstraction [1]. An interface disentangles the abstract functionality on the semantics level from its corresponding implementation details on the execution level. This approach encapsulates the implementation of an operation into a \\\"black-box\\\", so the users of the operation only need to consider its input and output. Therefore, with such an encapsulated representation for protocol design, we only need to care about the consistency between the output of the predecessor operation and the input of the successor operation, without caring about their implementation details. \\n\\nThis is the idea behind operationalization. Operationalization makes the interface an abstract function over all related instance actions. The interface is abstracted from the execution contexts of all instance actions with the same reference name, i.e., the same purpose, and can be instantiated to an instance action given a specific execution context. A specific context can be the predecessor operation, the successor operation, the precondition, or the postcondition of the considered operation. 
An instance action configures a specific implementation for a specific execution context. For the operation \"homogenization\", the implementation of one instance action can be \"using an ultrasonic homogenizer\" if the precondition, namely, the execution context, has the intermediate product \"cell suspension\" available; the implementation of another instance action can be \"using a bead mill\" if the precondition contains tissue. This example demonstrates the relationship between the interface and the instance actions of an operation: the interface is abstracted from the set of instance actions and can be instantiated to instance actions.\\n\\nHere we also give a more intuitive example to enhance the reviewer's comprehension. Consider the culinary scenario with the actions \"frying the egg\", \"frying the fish\", and \"frying the steak\". These are different instance actions coming with the same purpose, \"to fry something\". Therefore, we can abstract the interface from these instance actions to operationalize the operation \"fry\". The input of \"fry\" should be something raw and its output should be something fried. Given different preconditions with available eggs or pieces of steak, the abstract semantic operation \"fry\" can be grounded to the instance actions \"frying the egg\" or \"frying the steak\" respectively, through the instantiation of the interface. In summary, an interface serves as the bridge between the semantics level and the execution level. We have made the revision to enhance the accessibility of the concept.\", \"references\": \"[1] Abelson, H., & Sussman, G. J. (1996). Structure and interpretation of computer programs (p. 688). The MIT Press.\"}", "{\"comment\": \"Thank you for your suggestion. We will revise Section 2.3 to integrate the intuition behind operationalization. 
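To make the interface-instantiation intuition discussed above concrete, here is a minimal Python sketch. The class, the mapping from precondition products to implementations, and the example strings are all hypothetical illustrations for this thread, not the actual DSL:

```python
class Operation:
    """Interface of an operation: an abstraction over its instance actions.

    `instances` maps a required precondition product to the concrete
    implementation of one instance action.
    """

    def __init__(self, name, instances):
        self.name = name
        self.instances = instances

    def instantiate(self, available_products):
        """Ground the abstract operation to an instance action, given the
        execution context (here: the set of currently available products)."""
        for required, implementation in self.instances.items():
            if required in available_products:
                return implementation
        raise ValueError(f"no instance of '{self.name}' fits this context")


# Hypothetical instance actions mirroring the homogenization example above.
homogenization = Operation(
    "homogenization",
    {
        "cell suspension": "homogenize with an ultrasonic homogenizer",
        "tissue": "homogenize with a bead mill",
    },
)

print(homogenization.instantiate({"tissue", "lysis buffer"}))
# -> homogenize with a bead mill
```

Given an execution context (the set of available products), `instantiate` grounds the abstract operation to the matching instance action; the consistency check between a predecessor's output and a successor's input then only needs the interfaces, not the implementations.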
Since you have mentioned the \"out of place\" issue, we will adapt this intuition to the biologically-inspired examples in the context. For example, the actions \"homogenization of mouse liver tissue ...\", \"homogenization of bacterial cell suspension ...\", and \"homogenization of bacterial air samples ...\" are different instance actions coming with the same purpose, \"to homogenize something\", and can be operationalized to the operation \"homogenization\". We will add this one-sentence intuition to the main text, and retain the culinary examples in the additional remarks for a more intuitive reference.\"}", "{\"title\": \"Response to reviewer #BhtL - 5.1\", \"comment\": \"> While the authors mention as objectives the \\\"exploration of novel goals\\\", \\\"generating novel experimental objectives\\\" and aim to measure \\\"a protocol's novelty\\\", their metrics are only based on similarity measures. Diversity is not mentioned in the criteria. Could you add diversity measures or argue how the proposed metrics take into account diversity?\\n\\nThanks for the comment. According to professional standards for experimental protocols in the natural sciences [1], the fundamental components of an experimental protocol include the objectives of the experiment, the operations performed, the sequence of these operations, and the reagents or intermediate products used. The diversity of experiments arises from the various combinations of these elements. Within this context, our measurement method effectively captures the diversity of experimental protocols for the following reasons.\\n\\nFirstly, in measuring the novelty of an experimental protocol, we comprehensively evaluate its differences from existing protocols across multiple dimensions, including experimental objectives, operation sequences, and product flows. 
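As an illustrative sketch of such a multi-dimensional comparison (the scoring functions, the equal weights, and the example protocols below are hypothetical simplifications, not the paper's actual metrics or data):

```python
def jaccard(a, b):
    """Set-level similarity (intersection over union)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0


def novelty(candidate, corpus):
    """Novelty of a candidate protocol: one minus its maximum
    per-dimension similarity to any existing protocol in the corpus.
    Each protocol is a dict with 'operations' and 'products' sets;
    the equal weights are an arbitrary illustrative choice."""
    best = max(
        0.5 * jaccard(candidate["operations"], ref["operations"])
        + 0.5 * jaccard(candidate["products"], ref["products"])
        for ref in corpus
    )
    return 1.0 - best


corpus = [{"operations": {"lyse", "centrifuge", "wash"},
           "products": {"lysate", "pellet"}}]
new = {"operations": {"lyse", "sonicate"},
       "products": {"lysate", "supernatant"}}
print(round(novelty(new, corpus), 3))  # -> 0.708
```

Here `jaccard` plays the role of a set-level IoU score over one dimension; a fuller version would additionally compare experimental objectives and ordered product flows along the lines described in the response.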
This ensures that the new protocol exhibits dissimilarity and diversity from existing ones at both the experimental design and execution step levels. Based on these evaluations, we classify novel protocols into three levels of novelty: planning, modification, and adjustment.\\n\\nSecondly, the subdivision of specific domains further ensures that our measurement method captures the diversity of experimental protocols within the same domain. The fundamental components of experimental protocols vary significantly across different domains (see Fig. 2). Therefore, when evaluating the novelty of a protocol, we primarily focus on its differences from existing protocols within the same domain. This approach prevents the exclusion of protocols that may share similar distributions across different domains.\\n\\n[Confusion matrices on operation distribution, product distribution and device distribution](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/domain_heatmap.png?v=653dc994)\\n\\nWe appreciate the reviewer\\u2019s emphasis on diversity. Measuring diversity among novel protocols is indeed both informative and meaningful. To address this, we have supplemented our analysis with a t-SNE visualization of the experimental objectives (described in natural language) for the novel protocols we selected. The results demonstrate a well-dispersed distribution, indicating a sufficient level of diversity among the protocols. We have made the revisions accordingly.\\n\\n[Diversity among novel protocols across four domains](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/diversity.png?v=f4e8a566)\", \"references\": \"[1] Bartley, B., Beal, J., Rogers, M., Bryce, D., Goldman, R. P., Keller, B., ... & Weston, M. (2023). Building an open representation for biological protocols. ACM Journal on Emerging Technologies in Computing Systems, 19(3), 1-21.\"}", "{\"comment\": \"Thank you for this clarification. 
Please, add this 2-sentence information on how you specifically use the LLMs and the DSL to the appropriate section in the main text.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all reviewers for their time and valuable comments. The feedback is both substantial and helpful for improving our paper. In this work, we study the representations for protocol design, to fully elicit the capabilities of knowledge-based machine designers, such as Large Language Models, on this task. Accordingly, we propose a multi-faceted, multi-scale representation, where instance actions, generalized operations, and product flow models are hierarchically encapsulated using Domain-Specific Languages. We further develop a data-driven algorithm that autonomously customizes these representations for specific domains. Our qualitative and quantitative results underscore the potential of the representation to serve as an auxiliary module for Large Language Models, in the realm of machine-assisted scientific exploration.\", \"we_would_like_to_thank_the_reviewers_for_acknowledging_our_work_to_be\": \"1. The paper identifies \\\"a relevant and timely problem, which is much-needed and of high added value\\\" (reviewer #Xeav), \\\"is well-motivated with accurately captured significance\\\" (reviewer #LMMv), with \\\"extremely compelling premise\\\" (reviewer #WJzq), and \\\"ambitiously represents a synthetic representation of experimental protocols across multiple sciences\\\" (reviewer #BhtL).\\n2. The proposed representation \\\"is particularly elegant\\\" that treats \\\"protocol modeling as conditional probabilities\\\" (reviewer #WJzq), and \\\"is especially clever\\\" regarding \\\"the dual verification system\\\" (reviewer #WJzq), which \\\"is well presented\\\" and \\\"appears quite novel\\\" (reviewer #LMMv), thereby \\\"contributes to the state of the art\\\" (reviewer #Xeav). \\n3. 
The experiments \\\"are well explained\\\" with \\\"interesting choice of metrics to assess protocol designers\\\" (reviewer #LMMv), which is \\\"sound and convincing\\\" (reviewer #Xeav), demonstrating \\\"practical utility and broad applicability\\\" (reviewer #WJzq) through \\\"clear presentation and discussion\\\" (reviewer #Xeav).\\n\\nBased on the reviewers' comments, we made the revisions including:\\n\\n1. Clarifying specific concepts to enhance the paper's accessibility for readers with a background outside experimental sciences.\\n2. Demonstrating running examples of the machine designers equipped with our resulting representations in detail to make the paper more intuitive and comprehensive.\\n3. Conducting additional analyses and discussions regarding the computational complexity, rationality, scalability, and generality of our proposed framework to make the paper more rigorous and self-consistent.\\n4. Improving the readability of the paper through fixing typos, resolving ambiguities, enlarging the font size of the text in the plots, and rewriting the captions and references of the figures.\\n\\nWe have highlighted the changed part of the text with red color in the new version of the paper pdf file. \\n\\nIn the following, we address specific questions for each reviewer.\"}", "{\"summary\": \"The work focuses on the automated design of self-driving labs' protocols. Protocols specify how robots in self-driving labs conduct experiments. In many cases, conservative protocols are not sufficient because they do not allow for significant scientific discovery. However, designing new protocols is a cumbersome and error-prone process. To tackle these challenges, the authors suggest LLMs for the generation of protocols.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The problem is relevant and timely. Automation is much-needed and of high added value.\", \"The problem is open without sufficient solutions in the area. 
The approach seems to be novel and contributes to the state of the art.\", \"The contribution seems to be sound and the evaluation is convincing.\", \"Clear presentation and discussion.\"], \"weaknesses\": [\"It is unclear why LLMs are chosen for this purpose. This is essentially a big design-space exploration problem with the added opportunity of using a physical actor in purposeful experimentation. A similarly apt approach could be using digital twins with traditional learning methods, e.g., reinforcement learning and obtain a formally modeled protocol.\", \"I find it somewhat concerning that a costly and potentially hazardous endeavor such as robotized experimentation with chemicals and biologically active materials is approached at the level of LLMs and the safety concerns are not discussed. For example, certification of protocols through formal verification might not be possible in this approach.\"], \"questions\": [\"Why LLMs and not some other AI method? This would allow for a better positioning of the work.\", \"How can the generated protocols be certified? This would be a much-appreciated, brief discussion point.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer #LMMv - 6.1\", \"comment\": \"> However, it is also shallow compared to the amount of information in Figure 3. On average, it seems that the approach put forward performs well, but there is significantly more variability in their performance. Furthermore, different metrics seem to behave different to the new metrics, with IoU metrics increasing the tail of good results (e.g. IoU metrics), and Sim metrics decreasing the tail of bad results. It is surprising that this is not discussed in more detail, and that there is no estimator of uncertainty (e.g. std, stderr, variance) in the tables in Appendix A.\\n\\nThanks for the comment. 
Although we did not report uncertainty estimates directly, we performed significance tests on the differences between methods, shown as bar plots with error bars in Figure 3 (C-E). The full statistics, mean with (std, var, stderr) in parentheses, are tabulated below. \\n\\n**planning**\\n| Method | IoU(Op) mean(std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | ------------------------------ | --------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- |\\n| FB | 0.143 (0.087, 0.008, 0.016) | 0.04 (0.055, 0.003, 0.01) | 0.02 (0.06, 0.004, 0.011) | 0.282 (0.09, 0.008, 0.016) | 0.766 (0.096, 0.009, 0.017) | 0.826 (0.059, 0.004, 0.011) |\\n| IB | 0.109 (0.067, 0.004, 0.012) | 0.036 (0.053, 0.003, 0.01) | 0.019 (0.074, 0.005, 0.013) | 0.242 (0.065, 0.004, 0.012) | 0.735 (0.09, 0.008, 0.016) | 0.781 (0.069, 0.005, 0.012) |\\n| II | 0.382 (0.154, 0.024, 0.028) | 0.05 (0.062, 0.004, 0.011) | 0.084 (0.191, 0.037, 0.034) | 0.452 (0.134, 0.018, 0.024) | 0.788 (0.074, 0.006, 0.013) | 0.851 (0.062, 0.004, 0.011) |\\n| EI | 0.542 (0.16, 0.026, 0.029) | 0.305 (0.181, 0.033, 0.032) | 0.259 (0.211, 0.045, 0.038) | 0.572 (0.152, 0.023, 0.027) | 0.849 (0.066, 0.004, 0.012) | 0.926 (0.026, 0.001, 0.005) |\\n| EI+ | 0.603 (0.208, 0.043, 0.037) | 0.555 (0.26, 0.068, 0.047) | 0.357 (0.237, 0.056, 0.043) | 0.737 (0.172, 0.03, 0.031) | 0.875 (0.057, 0.003, 0.01) | 0.949 (0.023, 0.001, 0.004) |\\n| EE | 0.524 (0.151, 0.023, 0.027) | 0.37 (0.198, 0.039, 0.036) | 0.252 (0.206, 0.043, 0.037) | 0.558 (0.148, 0.022, 0.027) | 0.846 (0.078, 0.006, 0.014) | 0.928 (0.025, 0.001, 0.004) |\\n| EE+ | 0.607 (0.211, 0.044, 0.038) | 0.605 (0.235, 0.055, 0.042) | 0.355 (0.242, 0.059, 0.044) | 0.744 (0.179, 0.032, 0.032) | 0.893 (0.056, 0.003, 0.01) | 0.951 (0.021, 0.0, 0.004) |\\n\\n**modification**\\n| Method | IoU(Op) mean(std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | 
--------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- |\\n| FB | 0.181 (0.102, 0.01, 0.012) | 0.05 (0.071, 0.005, 0.008) | 0.038 (0.071, 0.005, 0.008) | 0.304 (0.102, 0.01, 0.012) | 0.796 (0.09, 0.008, 0.01) | 0.809 (0.06, 0.004, 0.007) |\\n| IB | 0.15 (0.1, 0.01, 0.012) | 0.038 (0.065, 0.004, 0.008) | 0.039 (0.076, 0.006, 0.009) | 0.281 (0.1, 0.01, 0.012) | 0.771 (0.089, 0.008, 0.01) | 0.788 (0.06, 0.004, 0.007) |\\n| II | 0.331 (0.143, 0.021, 0.017) | 0.101 (0.131, 0.017, 0.015) | 0.061 (0.135, 0.018, 0.016) | 0.416 (0.127, 0.016, 0.015) | 0.802 (0.087, 0.008, 0.01) | 0.851 (0.059, 0.004, 0.007) |\\n| EI | 0.593 (0.186, 0.035, 0.022) | 0.318 (0.158, 0.025, 0.018) | 0.336 (0.235, 0.055, 0.027) | 0.602 (0.164, 0.027, 0.019) | 0.866 (0.066, 0.004, 0.008) | 0.937 (0.03, 0.001, 0.003) |\\n| EI+ | 0.648 (0.21, 0.044, 0.024) | 0.626 (0.188, 0.035, 0.022) | 0.413 (0.256, 0.065, 0.03) | 0.765 (0.17, 0.029, 0.02) | 0.883 (0.055, 0.003, 0.006) | 0.952 (0.031, 0.001, 0.004) |\\n| EE | 0.588 (0.185, 0.034, 0.022) | 0.403 (0.192, 0.037, 0.022) | 0.332 (0.228, 0.052, 0.027) | 0.601 (0.164, 0.027, 0.019) | 0.873 (0.053, 0.003, 0.006) | 0.94 (0.028, 0.001, 0.003) |\\n| EE+ | 0.64 (0.213, 0.045, 0.025) | 0.661 (0.179, 0.032, 0.021) | 0.41 (0.253, 0.064, 0.029) | 0.757 (0.17, 0.029, 0.02) | 0.893 (0.043, 0.002, 0.005) | 0.953 (0.032, 0.001, 0.004) |\"}", "{\"title\": \"Response to reviewer #LMMv - 2\", \"comment\": \"> Line 504: what is t( )?\\n\\nThanks for the question. In our study, we used a statistical method called the paired t-test, represented by the symbol $t()$, to determine whether there is a significant difference in performance between machine designers using our proposed representation (EE+ and EI+) and the baseline methods. 
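As a minimal illustration of the statistic (the per-protocol scores below are made up, not the paper's data; in practice the p-value is read off the t distribution with n-1 degrees of freedom, e.g. via `scipy.stats.ttest_rel`):

```python
import math
import statistics


def paired_t(baseline, proposed):
    """Paired t statistic over per-protocol scores.

    With differences d_i = baseline_i - proposed_i, a negative mean
    difference (mu_d < 0) means the proposed method scores higher."""
    d = [b - p for b, p in zip(baseline, proposed)]
    mu_d = statistics.mean(d)
    stderr = statistics.stdev(d) / math.sqrt(len(d))
    return mu_d, mu_d / stderr


# Hypothetical per-protocol scores for a baseline and a proposed designer.
baseline = [0.50, 0.52, 0.55, 0.58]
proposed = [0.60, 0.65, 0.62, 0.70]
mu_d, t = paired_t(baseline, proposed)
print(round(mu_d, 3), round(t, 2))  # -> -0.105 -7.94
```

Pairing the scores protocol-by-protocol controls for per-protocol difficulty, which is why the paired test (rather than an unpaired two-sample test) is appropriate when two designers are evaluated on the same test set.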
This test helps us verify our hypothesis that our proposed methods significantly outperform the baseline methods.\\n\\nThe formulation $t(\\cdot)=\\dots, \\mu_d<0, p<.01$ in our results indicates the outcome of the t-test. Here, $\\mu_d$ represents the average difference in performance between the two groups being compared. If $\\mu_d$ is less than zero, it suggests that the performances of the proposed methods are generally better than those of the baselines. However, the crucial part of this result is the p-value, represented by $p$. The p-value is a statistical measure that helps us determine the probability of observing the results we did if the null hypothesis were true. The null hypothesis in this context is that there is no difference in performance between the two methods.\\n\\nAs our results show that the p-value is less than 0.01, it indicates that there is less than a 1% chance that the observed difference in performance could occur if there were actually no difference (i.e., if the null hypothesis were true). This low probability leads us to reject the null hypothesis, thus concluding that the difference in performance is statistically significant. This means we have sufficient evidence to support our claim that the proposed representations (EE+ and EI+) outperform the baselines in a meaningful way. We have made the revisions to clarify this concept.\", \"references\": \"[1] Student. (1908). The probable error of a mean. Biometrika, 1-25.\\n\\n> Line 361: A reference to Fig 2B is insufficient for the claim that function abstraction works. Idem for Fig 2C. The results must be interpreted (see the \\\"Quality\\\" section in strengths for more details).\\n\\nThanks for the suggestion. We have extended both the caption of Fig 2 and the corresponding text for reference to Fig 2.\"}", "{\"comment\": \"Thank you for this response, this makes the operationalization clear. 
I assure you, however, that it is not clear while reading Section 2.3. The need for an interface is well-established there. But if possible -- Section 2.3 could really use a sentence conveying this intuition you gave above:\\n\\n\\\"Consider the culinary scenario with the actions \\\"frying the egg\\\", \\\"frying the fish\\\", and \\\"frying the steak\\\". These are different instance actions coming with the same purpose \\\"to fry something\\\". Therefore, we can abstract the interface from these instance actions to operationalize the operation \\\"fry\\\".\\\"\\n\\n(although frying itself may seem out of place with so many biologically-inspired examples).\"}", "{\"title\": \"Response to reviewer #WJzq - 4\", \"comment\": \"> It is not clear how scalable this approach is for more difficult protocols. It is also not clear for someone outside of the tested areas if there even are more difficult protocols. This could be addressed.\\n\\nThis is a very good question. Following the conventions of the experimental sciences, more complex protocols generally correspond to longer protocols. Here we profile the length distribution of the ground truth in our test set across the four domains. On this basis, we select one of the longest protocols to demonstrate the scalability of the framework. \\n\\nNumber of steps (corresponding to the length of the ground-truth pseudocode program): min=2, avg=12.62, max=33\\n\\n[Steps of novel protocols across four domains](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/len_count_program.png?v=7d71ce56)\\n\\nWe select a complex protocol as an example to demonstrate the scalability of our framework. This protocol consists of 26 steps in its pseudocode program and 131 steps in its natural language procedure. 
Below, we present a fragment of the generated results from our best approach alongside several baseline methods:\\n\\n[Complex example](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/example2.png?v=c742c1a1)\\n\\n1. Overall, our framework demonstrates strong performance even when handling complex protocols;\\n2. When facing long and complex protocols, the results can be affected by LLM hallucination:\\n 1. The machine designers may select devices different from those specified in the ground truth to complete the experiment;\\n 2. The consistency of LLM-generated design results may decrease, underscoring the importance of validating LLM outputs through the DSL.\\n\\nA more rigorous analysis of scalability represents a promising avenue for future research, and we appreciate the reviewer's insightful suggestion in this regard. We have included this additional discussion in the revised version.\\n\\n> It would be very helpful to start with a motivational example of some particular case of research design like the one in Figure 1. Currently, it reads very abstract. But the Figure is helpful.\\n\\nThanks for pointing this out. We have extended both the caption of Fig 1 and the corresponding text for reference to Fig 1. \\n\\n> Lines 92-107 are supposed to briefly summarize the mechanism introduced by authors but it is very hard to comprehend. It would be good to correspond these to Figure 1B.\\n\\nThanks for the suggestion. 
We have corresponded the ideas and concepts introduced in this part to Figure 1B.\"}", "{\"title\": \"Response to reviewer #LMMv - 6.2\", \"comment\": \"**adjustment**\\n| Method | IoU(Op) mean(std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | ------------------------------ | --------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- |\\n| FB | 0.192 (0.1, 0.01, 0.017) | 0.077 (0.104, 0.011, 0.018) | 0.051 (0.094, 0.009, 0.016) | 0.319 (0.103, 0.011, 0.017) | 0.811 (0.078, 0.006, 0.013) | 0.823 (0.051, 0.003, 0.009) |\\n| IB | 0.197 (0.131, 0.017, 0.022) | 0.039 (0.063, 0.004, 0.011) | 0.006 (0.021, 0.0, 0.004) | 0.337 (0.141, 0.02, 0.024) | 0.802 (0.082, 0.007, 0.014) | 0.81 (0.049, 0.002, 0.008) |\\n| II | 0.453 (0.208, 0.043, 0.035) | 0.115 (0.161, 0.026, 0.027) | 0.091 (0.211, 0.045, 0.036) | 0.508 (0.184, 0.034, 0.031) | 0.805 (0.081, 0.007, 0.014) | 0.873 (0.056, 0.003, 0.01) |\\n| EI | 0.587 (0.19, 0.036, 0.032) | 0.328 (0.186, 0.034, 0.031) | 0.4 (0.265, 0.07, 0.045) | 0.623 (0.165, 0.027, 0.028) | 0.863 (0.055, 0.003, 0.009) | 0.944 (0.027, 0.001, 0.005) |\\n| EI+ | 0.668 (0.208, 0.043, 0.035) | 0.545 (0.259, 0.067, 0.044) | 0.449 (0.247, 0.061, 0.042) | 0.775 (0.152, 0.023, 0.026) | 0.883 (0.056, 0.003, 0.009) | 0.95 (0.04, 0.002, 0.007) |\\n| EE | 0.581 (0.184, 0.034, 0.031) | 0.404 (0.205, 0.042, 0.035) | 0.395 (0.261, 0.068, 0.044) | 0.616 (0.162, 0.026, 0.027) | 0.875 (0.039, 0.002, 0.007) | 0.946 (0.026, 0.001, 0.004) |\\n| EE+ | 0.65 (0.22, 0.048, 0.037) | 0.589 (0.229, 0.052, 0.039) | 0.441 (0.248, 0.062, 0.042) | 0.758 (0.16, 0.025, 0.027) | 0.893 (0.033, 0.001, 0.006) | 0.95 (0.042, 0.002, 0.007) |\\n\\n> As such, it is unclear what is meant by \\\"excel\\\" in this case. 
Given the highly variable performance values in Figure 3 and the lack of baselines from the literature, it is not possible to conclude this from the available results. You could alternatively state that the dual-view approaches outperform their counterparts across all tasks on average across all metrics.\\n\\nThanks for the suggestion. As the reviewer has mentioned, analysis and discussion over results from multi-dimensional evaluation metrics are subtle. Therefore, we changed the term \\\"excel\\\" to a more objective expression following the reviewer's suggestion. We have made the revision accordingly.\\n\\n> That being said, an additional set of results in the appendix that shows results by scientific domain would also be helpful in supporting this claim, especially if these results were to show that the proposed methods performed best across all domains separately (i.e., tables like those in appendix A, but partitioned by domain instead of task).\\n\\nThanks for the suggestion. Here we present the result of domain-indexed performance. 
We have made the revision accordingly.\\n\\n**Genetics**\\n| Method | IoU(Op) mean (std, var, stderr) | IoU(Prod) | IoU(Dev) | Sim(Exec) | Sim(Goal) | Sim(Param) |\\n| ------ | ------------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- | --------------------------- |\\n| FB | 0.179 (0.113, 0.013, 0.014) | 0.065 (0.082, 0.007, 0.01) | 0.037 (0.08, 0.006, 0.01) | 0.301 (0.116, 0.014, 0.014) | 0.795 (0.091, 0.008, 0.011) | 0.805 (0.066, 0.004, 0.008) |\\n| IB | 0.157 (0.129, 0.017, 0.015) | 0.042 (0.06, 0.004, 0.007) | 0.022 (0.059, 0.003, 0.007) | 0.297 (0.137, 0.019, 0.016) | 0.793 (0.07, 0.005, 0.008) | 0.789 (0.062, 0.004, 0.007) |\\n| II | 0.379 (0.2, 0.04, 0.024) | 0.12 (0.158, 0.025, 0.019) | 0.079 (0.16, 0.026, 0.019) | 0.457 (0.18, 0.032, 0.022) | 0.807 (0.083, 0.007, 0.01) | 0.85 (0.072, 0.005, 0.009) |\\n| EI | 0.599 (0.189, 0.036, 0.023) | 0.332 (0.177, 0.031, 0.021) | 0.353 (0.243, 0.059, 0.029) | 0.619 (0.164, 0.027, 0.02) | 0.862 (0.055, 0.003, 0.007) | 0.941 (0.026, 0.001, 0.003) |\\n| EI+ | 0.691 (0.198, 0.039, 0.024) | 0.606 (0.252, 0.064, 0.03) | 0.429 (0.283, 0.08, 0.034) | 0.803 (0.151, 0.023, 0.018) | 0.882 (0.054, 0.003, 0.006) | 0.954 (0.033, 0.001, 0.004) |\\n| EE | 0.592 (0.189, 0.036, 0.023) | 0.415 (0.206, 0.042, 0.025) | 0.351 (0.241, 0.058, 0.029) | 0.615 (0.163, 0.027, 0.02) | 0.87 (0.052, 0.003, 0.006) | 0.943 (0.025, 0.001, 0.003) |\\n| EE+ | 0.677 (0.21, 0.044, 0.025) | 0.653 (0.228, 0.052, 0.027) | 0.425 (0.28, 0.078, 0.033) | 0.791 (0.161, 0.026, 0.019) | 0.888 (0.045, 0.002, 0.005) | 0.955 (0.034, 0.001, 0.004) |\"}", "{\"title\": \"Response to reviewer #LMMv - 5\", \"comment\": \"> Limitations section is empty (Appendix G)\\n\\nThanks for pointing this out. Appendix G was accidentally removed from the version of the submission due to version control issues. 
We have recovered it in the revised version.\\n\\n> However, there is no mention of the extensive literature on problem decomposition for LLMs, e.g. Chain-of-Thought reasoning (Wei et al. (2023)). This makes it difficult to ascertain exactly how novel such a decomposition really is. it would be helpful to compare and contrast to existing methods in the literature both for the applied problem of protocol design (e.g. O\\u2019Donoghue et al. (2023)) and the solution of task decomposition (e.g. Wei et al. (2023)). It would also be helpful to explicitly state in the contributions whether the is a new problem formulation, application of an existing method to a new domain, or both.\\n\\nThanks for the comment. We would like to clarify that our objective is not to alternate Chain-of-Thought (CoT) reasoning. According to recent studies on the properties of CoT, LLMs with CoT may generate coherent but unprofessional text in expertise-intensive application scenarios [1]. Therefore, our proposed representation serves as an auxiliary guardrail module for LLMs with reasoning techniques such as CoT, enhancing LLMs' reasoning capability from two aspects: (i) the representation constrain the scope of reasoning into a close set of entities, such as available operations, reagents, and devices commonly used in the domain; and (ii) the representation provides fine-grained injection of domain-specific knowledge for LLMs, resulting in not only coherent but also professionality-compatible generated content.\\n\\nOne of our compared baseline approaches, II, is originated from the BioPlanner approach [2], which is actually implemented based on CoT. BioPlanner equips CoT with a relatively naive representation, namely, the instance actions with attributes described in Sec 2.2 of our paper. Results show that our approach significantly outperforms the approaches with CoT and the instance actions with attributes representation (EE+ vs. II: $t(278) = 24.493, \\\\mu_d < 0, p < .0001 $; EI+ vs. 
II: $t(278) = 23.855, \\mu_d < 0, p < .0001 $; also see the figures below), demonstrating the capability of our proposed representation. Also, this implies that the representation may be the dominant factor in LLMs' performance on this task. We have made the revisions to clarify this point.\\n\\n[Comparison between the capabilities of our approach and II across the six dimensions](https://anonymous.4open.science/api/repo/AutoDSL-Planning-Figure-0DFE/file/radar_1.png?v=928256b1)\", \"references\": \"[1] Xiao, Z., Zhang, D., Wu, Y., Xu, L., Wang, Y. J., Han, X., ... & Chen, G. (2023). Chain-of-Experts: When LLMs Meet Complex Operations Research Problems. In The Twelfth International Conference on Learning Representations.\\n\\n[2] O\\u2019Donoghue, O., Shtedritski, A., Ginger, J., Abboud, R., Ghareeb, A., & Rodriques, S. (2023, December). BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 2676-2694).\"}", "{\"title\": \"Response to reviewer #BhtL - 4\", \"comment\": \"> while the word \\\"planning\\\" is used several times in the text, it is only at l.444 that the authors explicate planning tasks as \\\"the exploration of novel experimental goals\\\". This definition is confusing. While \\\"planning\\\" generally refers to finding the succession of actions to obtain a predefined goal, exploring novel goals is different, and can be referred to as \\\"goal babbling\\\" for instance.\\n\\nThanks for the insightful suggestion. The same concern was raised when we described the task as \\\"novel experimental goal exploration\\\". We did not come up with a better expression, and it turned out that the terms \\\"novel\\\" and \\\"exploration\\\" are misleading. They seem to refer to an explorative process bootstrapping an unknown model without a predefined goal, which echoes the definition of goal babbling. 
However, in our context, these \\\"novel experimental goals\\\" are predefined by scientists before they are sent to self-driving laboratories for physical validation. The \\\"exploration\\\" is carried out by scientists and AI models for scientific discovery during their hypothesis-forming phase. Therefore, in the physical validation phase, self-driving laboratories are given specific goals. We aim to empower them with the capability of designing protocols to achieve the predefined goals automatically. This requirement is aligned with the definition of planning and is distinct from that of goal babbling. To make the definition clear, we have revised the description to \\\"confirmation of unverified experimental goals\\\". We appreciate the reviewer for pointing out this ambiguity. We have revised the paper to improve the clarity of introducing this task.\\n\\n> In 2.2. the vocabulary used is confusing: a precondition is generally a property of the state that allows you to carry out your operations. It is a distinct notion from an input.\\n\\nThanks for the comment. We deliberately employ the term \\\"precondition\\\" rather than \\\"input\\\" to convey the sense of a resource requirement for an operation. An operation can only be executed when the required reagents and intermediate products are available; otherwise, it must wait until these required resources are ready. This property indicates the dependencies between operations, shaped by the reagent flows. It also functions as the prerequisite of our proposed reciprocative verification mechanism based on the dual representation of operations and reagents. We believe the term precondition in our context can be viewed as a property of the state that allows for operation execution, which is in line with the concept mentioned by the reviewer. \\n\\nWe appreciate the reviewer for pointing this out and we have revised the paper to improve the clarity of introducing this concept.\\n\\n> Some terms are not defined. 
For instance: execution context (in l.190, which is used differently from execution condition) and key-value pairs (l193)\\n\\nThanks for pointing these out. We have revised the paper to improve the clarity of these terms.\\n\\n> While the authors present comparative results of their representations and algorithms, there is no comparison with other approaches. How do the results compare quantitatively or qualitatively to the state of the art?\\n\\nThanks for the question. Automating the design of experiments is a relatively new domain, which was initially introduced by recent works in 2023 [1, 2]. In our literature search, we found only the current state-of-the-art work BioPlanner [3], which makes explicit the originally implicit experiment design process in previous works [1, 2]. As we have mentioned in the paper, our baselines are developed based on the methods proposed by these previous works. The Instance-Internal (II) designer is developed based on the state-of-the-art method of BioPlanner. The Flatten-Baseline (FB) and Instance-Baseline (IB) designers are developed based on the baselines being evaluated in [3].\\n\\nWe appreciate the reviewer for pointing this out. We have revised the paper to enhance the links between the introduction of these baseline methods in the subsection \\\"Machine designers\\\" and our citations of these previous works in the section \\\"Introduction\\\".\", \"references\": \"[1] Boiko, D. A., MacKnight, R., Kline, B., & Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624(7992), 570-578.\\n\\n[2] M. Bran, A., Cox, S., Schilter, O., Baldassari, C., White, A. D., & Schwaller, P. (2024). Augmenting large language models with chemistry tools. Nature Machine Intelligence, 1-11.\\n\\n[3] O\\u2019Donoghue, O., Shtedritski, A., Ginger, J., Abboud, R., Ghareeb, A., & Rodriques, S. (2023, December). BioPlanner: Automatic Evaluation of LLMs on Protocol Planning in Biology. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 2676-2694).\"}" ] }
9mjZ800m7Y
Multi-objective Differentiable Neural Architecture Search
[ "Rhea Sanjay Sukthanker", "Arber Zela", "Benedikt Staffler", "Samuel Dooley", "Josif Grabocka", "Frank Hutter" ]
Pareto front profiling in multi-objective optimization (MOO), i.e., finding a diverse set of Pareto optimal solutions, is challenging, especially with expensive objectives that require training a neural network. Typically, in MOO for neural architecture search (NAS), we aim to balance performance and hardware metrics across devices. Prior NAS approaches simplify this task by incorporating hardware constraints into the objective function, but profiling the Pareto front necessitates a computationally expensive search for each constraint. In this work, we propose a novel NAS algorithm that encodes user preferences to trade-off performance and hardware metrics, yielding representative and diverse architectures across multiple devices in just a single search run. To this end, we parameterize the joint architectural distribution across devices and multiple objectives via a hypernetwork that can be conditioned on hardware features and preference vectors, enabling zero-shot transferability to new devices. Extensive experiments involving up to 19 hardware devices and 3 different objectives demonstrate the effectiveness and scalability of our method. Finally, we show that, without any additional costs, our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets, including MobileNetV3 on ImageNet-1k, an encoder-decoder transformer space for machine translation and a decoder-only space for language modelling.
[ "hardware efficiency", "neural architecture search", "network compression" ]
Accept (Poster)
https://openreview.net/pdf?id=9mjZ800m7Y
https://openreview.net/forum?id=9mjZ800m7Y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wJnnPTcHER", "wGxcQmIefe", "rAoFIe6xHd", "mow3JRR80E", "mAJlhXEdZL", "jVTCjgsOFd", "i60WGVEidD", "hPheFdlIUb", "fVF3Q6iJW3", "dQUjRbjkfd", "ZafUimPsr0", "YwP3iPj9sH", "Xakuxtn7hX", "XWdWuiVdWi", "X83BZMHxgK", "W0vAAV6hME", "TdaDVyq7SI", "PSTQzUnZ1r", "P4PBSlomlY", "FqwaXgKXOK", "CYquM3QgfM", "AwAnRe7DIh", "9SEXfWqwuv", "8itY3ZdU8l", "78yH4DFvgg", "63eh2Hke2K", "4OuYpmuAwF", "3r0N6FRt9e", "1ZEQ5nVEhp", "0e6PuboPGv" ], "note_type": [ "official_comment", "official_comment", "meta_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732466175609, 1731934313385, 1734744779869, 1737523733106, 1730645240849, 1731933508188, 1732265657648, 1732381835631, 1732297767712, 1732603194166, 1731933487635, 1733202501342, 1731933441469, 1732466350495, 1732385235343, 1731934723391, 1731024382444, 1731934738736, 1732465772337, 1730120268162, 1732297858098, 1731935086985, 1729478587293, 1732225322496, 1732416436003, 1731935111670, 1732629500776, 1731934291261, 1732465823882, 1731935240634 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Area_Chair_3XE6" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_ugEA" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_VHgP" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_ugEA" ], [ 
"ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_ei5k" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_VHgP" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_ei5k" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_VHgP" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_KeBz" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_KeBz" ], [ "ICLR.cc/2025/Conference/Submission5914/Reviewer_ei5k" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ], [ "ICLR.cc/2025/Conference/Submission5914/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to followup questions\", \"comment\": \"Thank you very much for your followup questions. We respond to each of your questions inline:\\n### *1-Regarding utilizing hypernetwork, if the search space is small ( i.e., 30 alpha parameters) why do we need to use hypernetwork? Can\\u2019t we use simple heuristic, bayesian, or evolutionary algorithms? What benefits do hypernetworks bring to the table?*\\n\\nWe would like to clarify this point here. Let\\u2019s consider NB201 (the smallest space we consider) for simplicity. This space has 15,625 possible architectures and this corresponds to 5^6. 
Thus, while the output space of the hypernetwork is 6x5 (30 architecture parameters are predicted), corresponding to 6 edges and 5 operation choices, we can sample 5 possible operation choices for every edge using the discrete sampler reinmax, resulting in 15,625 (5^6) architecture choices. \\n\\nFurthermore, we want to refer you to the discussion on the search complexity in Section 4.4. One major benefit that our MODNAS pipeline has (including the hypernetwork) is the ability to generate a Pareto front on multiple devices and objectives in just a single search run. In the case of other blackbox heuristics such as BO or ES, the search phase needs to be conducted **multiple times on individual devices** since there is no hardware-specific information being given to the algorithm during search. Furthermore, these algorithms normally rely on ground truth architecture evaluations and can be extremely inefficient even in small search spaces\\u2013something that is not the case for MODNAS. \\n\\n### *2-I am not expecting the authors to provide results for object detection, but it would be great to have a discussion in the appendix on how to apply the proposed method in object detection and what challenges need to be solved. The following work might be useful: [X] Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices [Y] YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems [Z] Virtuoso: Energy- and Latency-aware Streamlining of Streaming Videos on Systems-on-Chips*\\n\\nThank you for pointing us to these relevant papers benchmarking object detection on different hardware devices. We agree that object detection is a very important application since it is probably one of the most relevant use cases of neural networks on embedded devices (e.g. in self-driving cars). 
As per your suggestion, we have now included a discussion on object detection as a potential application in Appendix P of the paper, where we also refer to the papers you pointed to.\\n\\n### *3- I still believe that Eyeriss is an old work and the authors need to use more recent work/numbers with 2-4 nm technology size to get a more accurate estimation. However, I do understand this is out of the scope of this paper. So, no need to take action on this.*\\n\\nWe share the same opinion as you here. Given the rapid advancements in hardware devices for deep learning, benchmarks do get outdated quickly. In this work, we simply rely on previous hardware benchmarks, some of which (e.g., HW-NAS-Bench) can be older and some newer, such as HW-GPT-Bench. Our main goal while choosing these benchmarks was to showcase the ability of MODNAS to work on 1) different search spaces, e.g., convolutional, transformer-based; 2) different tasks, e.g., machine translation, image classification, language modeling; 3) different objectives, e.g., latency, energy usage, perplexity, accuracy, and now memory usage as well. We believe that what you mention is an important call for developing \\u201cevolving hardware benchmarks\\u201d, which are continuously adapted as new hardware becomes available.\\n\\n### *4- I was referring to different DNN NAS compared to the MobileNet search space. For example, DARTS search space and not GPT ones.*\\n\\nThank you for clarifying this. We are not aware of a benchmark that includes hardware metrics on the DARTS search space and would be happy to include an experiment on this search space if there exists one including architecture hardware metrics profiled across a variety of devices. We do not foresee MODNAS struggling on such cell-based search spaces, since NB201 is also cell-based (though with a fixed cell topology) and MODNAS works very reliably there. 
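For concreteness, the edge-wise categorical sampling from our answer to question 1 can be sketched in a few lines of numpy. The random logits below are an illustrative stand-in for the MetaHypernetwork output, and plain categorical sampling stands in for the reinmax straight-through estimator used in the actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# NB201-style cell: 6 edges, 5 candidate operations per edge.
n_edges, n_ops = 6, 5

# The MetaHypernetwork outputs a 6x5 matrix of unnormalized
# architecture parameters (alpha-tilde); random stand-in here.
logits = rng.normal(size=(n_edges, n_ops))

# Normalize per edge (softmax) and sample one operation per edge.
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)
ops = [int(rng.choice(n_ops, p=probs[e])) for e in range(n_edges)]

# One-hot alpha that masks the Supernetwork to a single subnetwork.
alpha = np.eye(n_ops)[ops]

# Only 30 parameters are predicted, yet the sampler can reach every
# architecture in the space:
print(n_ops ** n_edges)  # 15625 possible architectures
```

This illustrates why a small hypernetwork output suffices: the 30 predicted parameters define a distribution over all 15,625 discrete architectures rather than a single one.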
Moreover, the Supernetwork that MODNAS uses was originally developed on such cell-based search spaces.\"}", "{\"title\": \"Official Response from Authors (2/2)\", \"comment\": \"> ### *Please explain how Architect makes architecture parameters differentiable and propagate them to the MetaHypernetwork.*\\n\\nWe apologize for not making this clearer. We have updated the paper by adding Appendix N, where this is explained in detail. The idea is as follows:\\n\\n**Forward pass**\\n\\n1. The MetaHypernetwork parameterizes the unnormalized architectural distribution: $\\\\tilde{\\\\alpha} = H_{\\\\Phi}$, where $\\\\Phi$ are the MetaHypernetwork parameters. \\n2. $\\\\tilde{\\\\alpha}$ is passed to the Architect and it does the following steps: \\n\\n a) Normalizes $\\\\tilde{\\\\alpha}$ and samples a one-hot (discrete) $\\\\alpha$: $\\\\alpha \\\\sim Categorical(Softmax(\\\\tilde{\\\\alpha}))$. \\n b) Sets the Supernetwork architectural parameters to the one-hot $\\\\alpha$, i.e., resulting in a single subnetwork by masking the Supernetwork. \\n c) Passes $\\\\alpha$ as input to the MetaPredictor. \\n3. The Supernetwork and MetaPredictor do a forward pass using the training data (e.g., images) and hardware embedding, respectively. \\n4. Compute the scalarized loss function. \\n\\nThe main problem now is that we cannot directly backpropagate the gradient computation through the Architect to update the MetaHypernetwork parameters. This is due to the sampling from the *Categorical distribution in step 2/a above being non-differentiable*. The Straight-Through Estimator (STE) [1,2] approximates the gradient for the discrete architectural parameters by ignoring this actual non-differentiable sampling operation as follows:\\n\\n---\\n\\n**Backward pass**\\n\\n1. Compute the gradient of the loss with respect to the discrete architectural parameters $\\\\alpha$: $\\\\partial \\\\mathcal{L} / \\\\partial \\\\alpha$. \\n2. 
Propagate this gradient back to $\\\\Phi$ (MetaHypernetwork parameters) via the probability distribution: \\n \\n - $\\\\nabla_{\\\\Phi} \\\\mathcal{L} = \\\\frac{\\\\partial \\\\mathcal{L}}{\\\\partial \\\\alpha}\\\\frac{\\\\partial\\\\alpha}{\\\\partial Softmax}\\\\nabla_{\\\\Phi}Softmax(H_{\\\\Phi})$\\n\\n - STE backpropagates \\\"through\\\" a proxy that treats the non-differentiable function (sampling of $\\\\alpha$) as an identity function (as a result $\\\\frac{\\\\partial\\\\alpha}{\\\\partial Softmax} = 1$) and computes the gradient w.r.t. the MetaHypernetwork parameters: \\n $\\\\nabla_{\\\\Phi} \\\\mathcal{L} = \\\\frac{\\\\partial \\\\mathcal{L}}{\\\\partial \\\\alpha} \\\\nabla_{\\\\Phi} Softmax(H_{\\\\Phi})$. \\n\\nTo recap, during the forward pass the Architect samples a discrete architecture from an architecture distribution parameterized by the MetaHypernetwork, and during backpropagation the STE is utilized to propagate back through the sampling operation to update the MetaHypernetwork parameters, hence the distribution where the discrete architectures in the next iteration will be sampled from. We hope that this makes the update procedure for the architectural parameters clearer. If you have additional questions we are happy to follow up on the discussion.\\n\\nWe would like to thank you again for the very detailed review. We hope that we were able to address all of your concerns and that you would consider increasing your score. \\n\\n---\\n\\n**\\u2013References\\u2013**\\n\\n[1] Jang, E., Gu, S. and Poole, B., 2017, April. Categorical Reparametrization with Gumble-Softmax. In International Conference on Learning Representations (ICLR 2017).\\n\\n[2] Liu, L., Dong, C., Liu, X., Yu, B. and Gao, J., 2024. Bridging discrete and backpropagation: Straight-through and beyond. Advances in Neural Information Processing Systems, 36.\\n\\n[3] Sukthanker, R.S., Zela, A., Staffler, B., Klein, A., Purucker, L., Franke, J.K. and Hutter, F., 2024. 
HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models. In 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Datasets and Benchmarks Track\\n\\n[4] Lee, H., Lee, S., Chong, S. and Hwang, S.J., 2021. HELP: Hardware-Adaptive Efficient Latency Prediction for NAS via Meta-Learning. In 35th Conference on Neural Information Processing Systems, NeurIPS 2021\"}", "{\"metareview\": \"This paper presents a hypernetwork-based method for hardware-aware neural architecture search and demonstrates zero-shot generalization to new devices. The strengths of this paper include the detailed experiments that tested the proposed MODNAS on 19 hardware devices and showed good performance. The main weaknesses of this paper include that the details of the approach were not initially very clear, and that reviewers generally do not favor hypernetwork solutions to NAS. After the rebuttal, the missing details were addressed, as can be seen in the edits of the appendix. All the reviewers ended up with a positive score for this paper.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, the authors adequately addressed the reviewers' comments. Specifically, they added five sections to the appendix. 
The reviewer acknowledged the changes and adjusted their score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes a Neural Architecture Search (NAS) algorithm that optimizes several performance metrics on various hardware.\\nThe paper adopts Multi Objective Optimization (MOO) to optimize multiple objectives simultaneously.\\nThe key idea is to train $\\\\textbf{MetaHypernetwork}$ which takes hardware features and user preferences about performance metrics as inputs and returns optimized network architectures as outputs.\\nTo train $\\\\textbf{MetaHypernetwork}$, the paper exploits $\\\\textbf{Supernetwork}$ for NAS and $\\\\textbf{MetaPredictor}$ for optimization in terms of efficiency.\\n\\n$\\\\textbf{MetaHypernetwork}$ takes user preferences and device features, and returns network architecture designs.\\nThen, $\\\\textbf{Architect}$ makes those architecture designs into differentiable architecture parameters.\\nWith these architecture parameters, $\\\\textbf{Supernetwork}$ computes accuracy-related loss of the given network design.\\nAlso, $\\\\textbf{MetaPredictor}$ computes efficiency-related loss with the network architecture under the given hardware features.\\nFurther, the paper proposes to update $\\\\textbf{MetaHypernetwork}$ using Multiple Gradient Descent (MGD) to get optimized network architectures that satisfy multiple objectives concurrently.\\n\\nFor NAS, the proposed method is adaptable to diverse pretrained $\\\\textbf{Supernetwork}$.\\nFor efficiency, $\\\\textbf{MetaHypernetwork}$ can be updated with a lot of hardware-related loss functions at once.\\nFor optimization, the paper achieves Pareto frontier solutions that are hard to reach with sequential or averaged gradient updates.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes to execute one-shot NAS while search networks satisfy multiple objectives about accuracy and 
hardware efficiency with target hardware.\", \"The paper analyzes the proposed method from various aspects, such as efficacy and robustness of the training process.\", \"The paper provides extensive experiments and visualizations to support the proposal and its analysis.\"], \"weaknesses\": [\"It may be hard to regulate the trade-off among user preferences with scalarization. Figure 4 can be a support, but it is just an abstract depiction, not experimental results.\", \"The proposed method can help search optimized network architectures quickly at low cost. However, network architectures the same as or near ground truth solutions may be hard to reach with $\\\\textbf{MetaHypernetwork}$, whereas other works can reach them with huge search costs.\"], \"questions\": [\"How are user preference and hardware device embeddings designed?\", \"Please explain how $\\\\textbf{Architect}$ makes architecture parameters differentiable and propagates them to the $\\\\textbf{MetaHypernetwork}$.\", \"In Figure 4, solutions on the Pareto frontier can be achieved by modulating user preferences. However, HDX [1], which is a one-shot NAS with hardware constraints, claims that modulating hyperparameters linearly doesn\\u2019t lead to linearly distributed results. Is $\\\\textbf{MetaHypernetwork}$ free from this problem? Can real experimental results be plotted like Figure 4 to substantiate the integrity of $\\\\textbf{MetaHypernetwork}$?\", \"To search network architectures with other works, the paper sets the search time budget up to 2.5 times compared to that of MODNAS (or a fixed time budget, e.g., 192 hours). That is, MODNAS outperforms prior works in terms of search time, and the quality of solutions is better than that of others under given time budgets. What the reviewer wonders is whether other works can find near-GT solutions with a larger time budget. 
If a target network is distributed extensively and used frequently, huge search costs can be tolerable if there are better solutions.\", \"[1] Deokki Hong, Kanghyun Choi, Hye Yoon Lee, Joonsang Yu, Noseong Park, Youngsok Kim, and Jinho Lee. Enabling Hard Constraints in Differentiable Neural Network and Accelerator Co-Exploration. In Proceedings of the 59th ACM/IEEE Design Automation Conference, 2022.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response from Authors (3/3)\", \"comment\": \"> ### *The approach appears tailored to devices with GPU-like architectures. It would be valuable to understand how MODNAS performs on other architectures, like mobile hardware.*\\n\\nWe study MODNAS across a wide variety of devices. Our benchmarks **contain other hardware platforms**, including mobile hardware (Pixel3, Samsung devices), embedded devices (FPGA, Raspberry Pi) and CPU devices. See Table 4 in the appendix for a list.\\n\\n> ### *Regarding minor points and the paper being dense.*\\n\\nWe thank you for carefully reading our paper. We will increase the size of figures 7-8 and move some parts of the paper to appendix to make the paper more legible and clear for readers. We have also fixed the typos you pointed out in the updated version of the paper.\\n\\nWe would like to thank you again for your very detailed review. We hope that we were able to address all your concerns and that you will consider increasing your score after reading our responses. We are also happy to engage in further discussion if you have more concerns. \\n\\n\\n**\\u2013References\\u2013**\\n\\n[1] Liu, L., Dong, C., Liu, X., Yu, B. and Gao, J., 2024. Bridging discrete and backpropagation: Straight-through and beyond. Advances in Neural Information Processing Systems, 36.\\n\\n [2] Lee, H., Lee, S., Chong, S. and Hwang, S.J., 2021. 
HELP: Hardware-Adaptive Efficient Latency Prediction for NAS via Meta-Learning. In 35th Conference on Neural Information Processing Systems, NeurIPS 2021 (pp. 27016-27028). \\n\\n[3] Cai, H., Gan, C., Wang, T., Zhang, Z. and Han, S., Once-for-All: Train One Network and Specialize it for Efficient Deployment. In International Conference on Learning Representations 2020.\\n\\n[4] Wang, H., Wu, Z., Liu, Z., Cai, H., Zhu, L., Gan, C. and Han, S., 2020, July. HAT: Hardware-Aware Transformers for Efficient Natural Language Processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7675-7688).\\n\\n[5] Sukthanker, R.S., Zela, A., Staffler, B., Klein, A., Purucker, L., Franke, J.K. and Hutter, F., 2024. HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models. In 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Datasets and Benchmarks Track\\n\\n[6] Li, C., Yu, Z., Fu, Y., Zhang, Y., Zhao, Y., You, H., Yu, Q., Wang, Y., Hao, C. and Lin, Y., HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark. In International Conference on Learning Representations.\"}", "{\"comment\": \"Thank you for answering my questions and for clarifiying. So far, it has addressed my concerns. I am looking forward to the memory-perplexity optimization results to adjust my scoring.\"}", "{\"comment\": \"Thank the authors for their kind answers and additional experiments.\\n\\nI believe that the paper satisfies my acceptance threshold.\\n\\nTherefore, I decide to maintain the rating of acceptance.\"}", "{\"title\": \"Update with the memory-perplexity results\", \"comment\": \"Thank you very much for your response. We're pleased to know that our response addressed your concerns. 
As promised, we\\u2019ve updated the PDF by including in the end Appendix O and Figure 35, where we showcase the application of MODNAS for optimizing memory usage (using Bfloat16 precision and context size of 1024) and perplexity on OpenWebtext within the HW-GPT-Bench GPT-L search space, featuring models up to 774M parameters. Since memory usage does not depend on the device type, our approach does not utilize the MGD updates in Algorithm 1 for computing the common gradient descent direction, instead leveraging only preference vectors to calculate the scalarized objective. This highlights once again the flexibility of MODNAS across diverse settings, even the ones it was not designed for. Despite this adjustment, MODNAS remains competitive, delivering a Pareto front comparable to leading black-box MOO baselines.\"}", "{\"comment\": \"Thank you for providing answers to my questions and doubts. I will raise my score.\"}", "{\"title\": \"Official Response from Authors (2/3)\", \"comment\": \"> ### *MobileNet search space is pretty small DNN. How does it work on more complex DNN search space?*\\n\\nIn Figure 11 in the main paper, we show the performance of MODNAS on the GPT-S Transformer space (largest model containing 124M parameters) and **search space size of $10^{12}$** from the recent HW-GPT-Bench paper [5].\\n\\n> ### *The paper mentions scalability but doesn\\u2019t mention potential bottlenecks. For example, are there search space complexities that MODNAS struggles with?*\\n\\nWe discuss the limitations of our work in Section 5.\\n\\n> ### *The paper doesn't provide the code. That limits reproducibility of the proposed method.*\\n\\nWe apologize for the inconvenience. We had provided our complete codebase in the introduction (line 91-92), however it seems that the anonymous link had expired. We have updated it and will make the code public upon acceptance. We also provide the link here for completeness: https://anonymous.4open.science/r/MODNAS-1CB7/README.md. 
\\n\\n> ### *I am not sure how accurate is the energy model that the paper is using considering even Eyeriss is a pretty old paper (2016). The authors need to provide more details about energy modeling.*\\n\\nFor the experiments using energy as an objective, we use the **precomputed energy values** from the respective papers, namely HW-NAS-Bench [6], HW-GPT-Bench [5] and HELP [2]. We only use our energy predictors to predict the ground truth values from these benchmarks.\\n\\n> ### *I think it is better to use power rather than energy. Considering energy is power x latency, when you minimize the latency, if the power is fixed, energy is minimized automatically. Considering power, accuracy, and latency can be a better metric.*\\n\\nThank you for the suggestion. Indeed, energy and latency are highly correlated in the benchmarks where they were precomputed. However, the high correlation might not necessarily hold across devices and model scales, e.g., in figure 64 of HW-GPT-Bench [5], the authors observe that energy usage and latency on the H100 GPU have a Kendall-$\\\\tau$ correlation coefficient of only 0.67. In a practical scenario, it makes more sense to use power as you suggested. We conducted the experiment with 3 objectives with the main motive of showcasing that MODNAS can be scaled to more than 2 objectives without any additional search costs.\\n\\n> ### *The paper employs the Frank-Wolfe solver for optimizing scalarizations. Could you discuss any trade-offs in this choice, and if other optimizers were considered?*\\n\\nIn **Figure 6 of the main paper**, we already demonstrate empirically how MGD compares to other gradient update strategies. Please see the \\u201c*robustness of MGD*\\u201d paragraph (lines 406-412) and figure 6. 
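As a concrete aside on the solver choice: for two objectives, the min-norm subproblem underlying MGD has a closed-form solution (for more objectives, this is what the Frank-Wolfe iterations approximate). A hedged numpy sketch with random stand-in gradients (the helper name is ours, not from the paper):

```python
import numpy as np

def min_norm_2task(g1, g2):
    """Closed form of min_{gamma in [0,1]} ||gamma*g1 + (1-gamma)*g2||^2,
    the two-objective special case of the MGD min-norm subproblem."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:          # identical gradients: any gamma works
        return 0.5, g1.copy()
    gamma = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return gamma, gamma * g1 + (1.0 - gamma) * g2

rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=8), rng.normal(size=8)
gamma, d = min_norm_2task(g1, g2)

# d is a common descent direction: its inner product with each
# per-objective gradient is at least ||d||^2 >= 0.
print(gamma, d @ g1 >= d @ d - 1e-8, d @ g2 >= d @ d - 1e-8)
```

The min-norm property is what makes the combined update decrease (to first order) every objective simultaneously, rather than trading one off against another.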
As can be seen there, MGD (red curve) performs the best.\\n\\n> ### *Given that preference vectors affect the optimization landscape, an analysis of how different preference configurations influence the final architectures would clarify MODNAS\\u2019s versatility.*\\n\\nThank you for this careful observation. In earlier experiments on NB201, we noticed that using random uniform samples of the preference vectors from the probability simplex resulted in the highest hypervolume. Using different concentration coefficients in the Dirichlet distribution resulted in a worse final performance. In lines 236-238, we also briefly mention that it is possible to optimize/adapt the parameters of the Dirichlet distribution; however, this would require differentiating through discrete samples from the distribution and would add another update step in the algorithm. Finally, in **Appendix K** of our updated paper we have added a section on \\u201c*Alignment of preference vectors with pareto front*\\u201d, where we plot the generated solutions from the MetaHypernetwork together with their respective preference vectors.\\n\\n> ### *MODNAS is tested for three objectives\\u2014accuracy, latency, and energy. Could this method scale effectively if additional objectives were introduced, or would this necessitate modifications?*\\n\\nYes, the search phase would only require additional forward passes through the predictors estimating the new objectives, which consist of small neural networks and have negligible inference costs. Since we use a scalarized objective in the algorithm (line 6), we only require a single backward pass regardless of the number of objectives. The only modification that needs to be done is to train a new MetaPredictor to approximate the new objective. 
This is done **only once** before the MODNAS search, however is still cheap; for instance, it took 3h on a single RTX2080Ti GPU to train the latency and energy MetaPredictors.\"}", "{\"comment\": \"As the discussion period closes soon, we would like to again thank all the reviewers for their thorough evaluation of our work and their active engagement during the discussion phase. Their valuable feedback has significantly enhanced both the clarity of our paper and the depth of our experimental analysis. We also appreciate the overall positive reception reflected in the scores of our paper.\\n\\nWe would like to highlight the following additional updates, which are incorporated into the revised version of our paper (marked in blue):\\n\\n1. **Appendix O**: New experiments analyzing perplexity and memory usage objectives.\\n2. **Appendix P**: Discussion on the applicability of our approach to the object detection task.\\n\\nMoreover, as suggested by reviewer **ei5k**, we will refine **Section 5** of the paper to include a discussion on potential search space complexities.\\n\\nIf there are any further questions, we remain available for discussion. Thank you once again for your thoughtful contributions.\"}", "{\"title\": \"Official Response from Authors (1/3)\", \"comment\": \"We thank you for carefully reading our paper and your detailed feedback on our work. We appreciate your recognition of the ability of MODNAS to adapt to a wide variety of hardware deployment scenarios. Below, we address each of your questions:\\n\\n> ### *\\u201cUsing Hypernetworks for NAS is well known but doesn\\u2019t seem a promising solution. It is like a heuristic solution.\\u201d and \\u201cUsing hypernetwork for the NAS and pareto-frontier learning is well known.\\u201d*\\n\\nThank you for referring to the different works which adopt hypernetworks for NAS. 
However, we would like to point out the following key differences in the way [A], [B] and [D] use hypernetworks compared to MODNAS: \\n[A], [B] and [D] propose amortizing the cost of architecture training in NAS by learning a hypernetwork to directly generate the weights $w$ of a given architecture [A, D] or hyperparameter configuration [B]. MODNAS instead generates the parameters defining the distribution of an architecture, i.e., the $\\\\alpha$ values, which are only a handful, e.g. 30 parameters for the NB201 supernetwork.\\nGiven the **much** lower dimensionality of the architecture distribution space $\\\\alpha$ compared to the network parameter space $w$, the issues with scalability are greatly reduced. \\nFurthermore, the hypernetwork itself is a simple embedding layer with very few parameters, e.g., 0.03MB for NB201, which makes optimizing and initializing it easier. \\nSimilarly, HyperST-Net [C] derives the parameter weights in a cascading manner from temporal and spatial characteristics. Again, since it generates **network parameters**, which are usually much larger in number, it faces the same issues as mentioned above. \\n\\n > ### *Hypernetwork is not performing well on unseen networks. https://arxiv.org/abs/2110.13100 With that, could you elaborate on any specific constraints or limitations when transferring MODNAS to devices significantly different from the ones used in the training phase?*\\n\\nWe refer you to the **t-SNE plots (Figure 12 in the appendix)** of computed hardware bank similarity vectors for the 19 devices (13 train and 6 test) on NB201. We observe that the embeddings indeed reflect that devices with similar properties are co-located. We study MODNAS on qualitatively different hardware devices (see Table 4 in the appendix for a complete list) during the test (unseen) and training stages, and the observation of transfer across different hardware devices does hold in practice. 
In addition, MODNAS shows good generalization performance even with a smaller subset of training devices (see Figure 25 in the appendix). We attribute this to the conditioning of the hypernetwork on the hardware embedding and the attentive device pool in the architecture of the MetaHypernetwork. \\n\\n> ### *The MiLeNAS paper (CVPR\\u201920) shows that the gradient errors caused by approximations of second-order methods in bi-level optimization result in suboptimality, in the sense that the optimization procedure fails to converge to a (locally) optimal solution. However, it seems that the authors still use a bi-level optimization technique. It seems using advanced techniques like MiLeNAS might be more useful.*\\n\\nWe use the **Reinmax** [1] method, which is a straight-through estimator achieving second-order accuracy by integrating second-order numerical methods. Reinmax does not require Hessian computation, thus having negligible computation overheads. As it is impractical to employ methods that **do not use** discrete architectural samples during the search, especially on larger search spaces such as the Transformer ones, we chose Reinmax (NeurIPS\\u201923), a state-of-the-art gradient estimation method for discrete variables. Nevertheless, since the type of architecture optimizer we use is a modular component, Reinmax can be replaced with the mixed-level formulation of MiLeNAS in our configurable experimental framework. We conducted such an experiment and added the result of MODNAS + MiLeNAS in the plots of Figure 15 (Appendix), where we compare it to Reinmax and GDAS as well. Reinmax is still the best-performing optimizer.\\n\\n> ### *Can the MODNAS be applied to object detection tasks as well?*\\n\\nYes, the MODNAS algorithm is versatile if given an architecture search space and a set of hardware devices. 
In our paper we have selected benchmarks from previous works such as HELP [2], namely NB201, MobileNetV3 [3] for ImageNet classification, and the Transformer space for machine translation [4], where the ground-truth latencies had already been profiled, a process that took a significant effort from other researchers. We have also included the recent HW-GPT-Bench [5] for language modeling. If you are aware of any object detection benchmark which contains latency or energy measures on various hardware devices, we would be happy to include it in our experiment suite.\"}", "{\"title\": \"Response to followup questions\", \"comment\": \"### *5- Section 5 doesn\\u2019t discuss possible search space complexities that MODNAS struggles with.*\\n\\nThank you for raising this important point. We will update Section 5 in the paper with the following (after the rebuttal, since we need to restructure the paper to incorporate all the feedback from this discussion too): \\u201cWe also want to mention some search space complexities that MODNAS can potentially struggle with. One instance can be very large search spaces such as Einspace [1], wherein weight sharing or entanglement cannot be exploited directly since one cannot fit a Supernetwork that contains all the architectures on a single GPU. This might require either supernetwork model parallelism across GPUs or the usage of other performance proxies instead of the supernetwork. We leave such avenues for future work.\\u201d\\n\\n### *6- Regarding power, I think you need to use power, which is independent of latency. When you are using latency, if the latency is reduced, the energy also will be reduced if the power is fixed. So, if you optimize for latency, energy also is optimized. 
Using power makes your problem more interesting to solve and as you mentioned has more practical application.*\\n\\nWe agree with your point here, however, in the benchmarks we utilized in our experiments, we rely on previously evaluated benchmarks where the respective authors have measured energy and latency given a fixed power. We cannot use power in these benchmarks unfortunately. You raise an important point here though, suggesting a shift of focus from energy to power when developing a hardware-aware benchmark in the future.\\n\\nWe thank you again for your very detailed review and your followup questions. We hope that we were able to sufficiently address your followup questions and that you will consider increasing your score. We are also happy to engage in further discussion if you have more questions.\\n\\n[1] Ericsson, L., Espinosa, M., Yang, C., Antoniou, A., Storkey, A., Cohen, S.B., McDonagh, S. and Crowley, E.J., 2024. einspace: Searching for Neural Architectures from Fundamental Operations. arXiv preprint arXiv:2405.20838.\"}", "{\"title\": \"Thanks for those additional results\", \"comment\": \"Given the additional experimentation with MODNAS's ability to also optimise memory and remain competitive with other MOO baselines, as well as the adjustments made in response to my and the other reviewer's concerns, the paper is now in an acceptable state from my perspective.\"}", "{\"title\": \"Official Response from Authors (1/2)\", \"comment\": \"Thank you for the detailed review of our work. We appreciate that you highlight different positive aspects of our work including our motivation to extend zero-shot NAS to the HW-aware settings, as well as appreciating our thorough experimental evaluation. We respond to each of your questions below.\\n\\n> ### *...results shown are calculated using the estimated results from the \\\"MetaPredictors\\\"*\\n\\nWe would like to provide a clarification here. 
While we indeed use the MetaPredictor to guide the differentiable search, the final hypervolume and Pareto-front results presented throughout the paper are actually **computed using the true accuracy, latency and energy values** corresponding to the different benchmarks [1,2,3,4] that we use for evaluation. \\n\\n> ### *What assumptions do the MetaPredictors make about the underlying devices when estimating latency and energy requirements?*\\n\\nWe use **precomputed** latency and energy benchmarks from other papers [1,2,3,4,5] that have already been peer-reviewed. The authors of these benchmarks carefully control for the number of allocated CPU cores to ensure consistent profiling across hardware devices (see HW-GPT-Bench [4] for example). Our MetaPredictor architecture itself makes *no assumptions* about the procedure followed in profiling these hardware metrics. However, this could be an interesting idea for improving the predictive performance of the MetaPredictor by appending such features to the hardware embedding.\\n\\n> ### *Could the approach proposed by the authors be extended to include memory?*\\n\\nWe thank you for raising this important point. The short answer is: *Yes*. MODNAS is **agnostic to the type of hardware metric objective** as long as we use a predictor to estimate the metric. Supporting a new objective like memory consumption simply amounts to replacing the existing latency/energy predictor with a GPU memory predictor. Following your suggestion, we are currently running experiments using MODNAS for memory-perplexity optimization on the GPT-L space from HW-GPT-Bench [4]. *We will report back when this experiment finishes.*\\n\\n> ### *How exactly are \\\"preference vector\\\" and a \\\"hardware device feature vector\\\" defined?*\\n\\n- **Preference vectors:** During the training stage of MODNAS we sample a scalarization uniformly at random from the probability simplex (e.g. 
for 2 objectives a scalarization can be [0.24, 0.76])-- scalarizations are in [0,1] and their sum is 1. During inference, we sample 24 fixed points which are uniformly spaced on a circle (for 2 objectives) or a hypersphere (for >2 dimensions). Preference vectors are quantized ([0.24, 0.76] \\u2192[24, 76]) before being passed as input to the MetaHypernetwork.\\n- **Hardware device embeddings:** We use a simple scheme similar to HELP [5] to define the hardware embedding. We sample 10 fixed architectures from a search space, compute their hardware metric value (latency/energy) on a particular device and use this to compute the hardware embedding vector of size 10.\\n\\n> ### *Which FPGA did the authors use?*\\n\\nFor the experiments using latency and energy usage in FPGA, we utilize the precomputed values from HW-NAS-Bench [1], which describes the FPGA data collection procedure in their Appendix D.6. To quote the authors: \\u201c*We then compile all the architectures using the standard Vivado HLS toolflow (Xilinx Inc., a) and obtain the bottleneck latency, the maximum latency across all sub-accelerators (chunks) of the architectures on a Xilinx ZC706 development board with Zynq XC7045 SoC (Xilinx Inc., b).*\\u201d\\n\\n> ### *Fig. 5: Why is MODNAS shown as an upper threashold line (+/- variance)?*\\n\\nFigure 5 depicts MODNAS as a single line because we do only one search run and in the end of it we evaluate 24 architecture samples using the same MetaHypernetwork conditioned on hardware devices. On the contrary, other black-box methods based on accuracy and latency predictors, sample a single new architecture at every optimization step and they need to run on every device independently since they do not incorporate hardware information during the search. \\n\\nWe would like to thank you again for reading our paper and your feedback. We hope that we were able to address all your concerns and that you will consider increasing your score after reading our responses. 
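To make the two definitions above concrete, here is a minimal illustrative sketch (the helper names are hypothetical, not taken from our codebase):

```python
def quantize_preference(prefs, levels=100):
    """Quantize a scalarization sampled from the probability simplex,
    e.g. [0.24, 0.76], to the discrete indices [24, 76] that can be fed
    to an embedding layer with levels + 1 entries."""
    return [round(p * levels) for p in prefs]

def hardware_embedding(profile_metric, reference_archs):
    """HELP-style device embedding: the hardware metric (latency/energy)
    of a small set of fixed reference architectures on the target device."""
    return [profile_metric(arch) for arch in reference_archs]

indices = quantize_preference([0.24, 0.76])
print(indices)  # -> [24, 76]

# Toy stand-in for real on-device profiling of 10 fixed architectures.
fake_latency_ms = lambda arch_id: 10.0 + 2.0 * arch_id
embedding = hardware_embedding(fake_latency_ms, range(10))
assert len(embedding) == 10  # one entry per reference architecture
```

In practice the profiling step is replaced by the measured latency/energy of the 10 fixed architectures on the actual device; the sketch only illustrates the shape of the two inputs.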
Otherwise, if you have additional questions, we are happy to follow up the discussion.\\n\\n---\"}", "{\"summary\": \"The paper (MODNAS) presents an approach to Neural Architecture Search (NAS) that balances competing objectives\\u2014like performance, latency, and energy efficiency\\u2014across multiple hardware devices. By encoding user preferences as a scalarization vector, MODNAS efficiently searches for Pareto-optimal solutions across diverse devices in a single run. The method employs a hypernetwork to generate architectures for specific hardware configurations, leveraging multiple gradient descents for optimization. The method has been evaluated on MobileNetV3 on ImageNet-1k, an encoder-decoder transformer space for machine translation, and a decoder-only space for language modeling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1-The method is adaptable to a range of devices by conditioning the hypernetwork on device embeddings, making it highly versatile for deployment on diverse hardware.\\n\\n2-MODNAS is tested on various hardware devices and tasks, including image classification, machine translation, and language modeling, showcasing its applicability across multiple domains.\", \"weaknesses\": \"1-Using Hypernetworks for NAS is well known but doesn\\u2019t seem a promising solution. It is like a heuristic solution.\\n\\n2- I think energy and latency are not necessarily conflicting metrics.\", \"questions\": \"__1-__ Using hypernetwork for the NAS and pareto-frontier learning is well known.\\n\\nThat limits the novelty of the proposed method.\\n\\n[A] Brock, A., Lim, T., Ritchie, J., and Weston, N. (2018). SMASH: One-shot model architecture search through hypernetworks. In the International Conference on Learning Representations\\n\\n[B] Lorraine, Jonathan, and David Duvenaud. 
\\\"Stochastic hyperparameter optimization through hypernetworks.\\\" arXiv preprint arXiv:1802.09419 (2018).\\n\\n[C] Pan, Z., Liang, Y., Zhang, J., Yi, X., Yu, Y., and Zheng, Y. (2018). Hyperst-net: Hypernetworks for spatio-temporal forecasting. arXiv preprint arXiv:1809.10889\\n\\n[D] Zhang, C., Ren, M., and Urtasun, R. (2019). Graph hypernetworks for neural architecture search. In International Conference on Learning Representations.\\n \\nHowever, there are major concerns about the performance, initialization, and scalability as stated here (Section 6 in the following paper):\\nChauhan, Vinod Kumar, Jiandong Zhou, Ping Lu, Soheila Molaei, and David A. Clifton. \\\"A brief review of hypernetworks in deep learning.\\\" Artificial Intelligence Review 57, no. 9 (2024): 250.\\n\\n\\n__2-__ Hypernrtwork is not performing well on unseen networks.\", \"https\": \"//arxiv.org/abs/2110.13100\\nWith that, Could you elaborate on any specific constraints or limitations when transferring MODNAS to devices significantly different from the ones used in the training phase?\\n\\n\\n__3-__ MiLeNAS paper (CVPR\\u201920) shows that the gradient errors caused by approximations of second-order methods in bi-level optimization results in suboptimality, in the sense that the optimization procedure fails to converge to a (locally) optimal solution. However, it seems that the authors still use a bi-level optimization technique. It seems using advanced techniques like MiLeNAS might be more useful.\\n \\n__4-__ Can the MODNSD be applied to object detection tasks as well?\\n\\n__5-__ MobileNet search space is pretty small DNN. How does it work on more complex DNN search space? \\n\\n__6-__ The paper mentions scalability but doesn\\u2019t mention potential bottlenecks. For example, are there search space complexities that MODNAS struggles with?\\n\\n\\n\\n__7-__ The paper doesn't provide the code. 
That limits the reproducibility of the proposed method.\\n\\n\\n\\n__8-__ I am not sure how accurate the energy model used in the paper is, considering that even Eyeriss is a pretty old paper (2016). The authors need to provide more details about energy modeling. \\n\\n__9-__ I think it is better to use power rather than energy. Considering energy is power x latency, when you minimize the latency, if the power is fixed, energy is minimized automatically. Considering power, accuracy, and latency can be a better metric.\\n\\n\\n__10-__ The paper employs the Frank-Wolfe solver for optimizing scalarizations. Could you discuss any trade-offs in this choice, and whether other optimizers were considered?\\n\\n__11-__ Given that preference vectors affect the optimization landscape, an analysis of how different preference configurations influence the final architectures would clarify MODNAS\\u2019s versatility.\\n\\n\\n__12-__ MODNAS is tested for three objectives\\u2014accuracy, latency, and energy. Could this method scale effectively if additional objectives were introduced, or would this necessitate modifications?\\n\\n__13-__ The approach appears tailored to devices with GPU-like architectures. It would be valuable to understand how MODNAS performs on other architectures, like mobile hardware.\\n\\n\\n\\n\\n__14-__ Minor: \\n\\nA) The paper is pretty dense. I can hardly read the legends in Figures 7, 8.\\n\\nB) Minor typo:\\nPareto front profiling in multi-objective optimization (MOO), i.e. 
finding a diverse set of Pareto optimal solutions, is challenging\\\" \\u2013 A comma after \\\"i.e.\\\" would improve readability: \\\"i.e., finding a diverse set...\\\"\\n\\n\\\"To search across devices, we frame the problem as a multi-task multi-objective optimization problem\\\" \\u2013 Add a comma, \\u201cmulti-task, multi-objective optimization problem.\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Response from Authors (2/2)\", \"comment\": \"**\\u2013References\\u2013**\\n\\n [1] Li, C., Yu, Z., Fu, Y., Zhang, Y., Zhao, Y., You, H., Yu, Q., Wang, Y., Hao, C. and Lin, Y., HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark. In International Conference on Learning Representations.\\n\\n[2] Cai, H., Gan, C., Wang, T., Zhang, Z. and Han, S., Once-for-All: Train One Network and Specialize it for Efficient Deployment. In International Conference on Learning Representations.\\n\\n[3] Wang, H., Wu, Z., Liu, Z., Cai, H., Zhu, L., Gan, C. and Han, S., 2020, July. HAT: Hardware-Aware Transformers for Efficient Natural Language Processing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 7675-7688).\\n\\n[4] Sukthanker, R.S., Zela, A., Staffler, B., Klein, A., Purucker, L., Franke, J.K. and Hutter, F., 2024. HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models. In 38th Conference on Neural Information Processing Systems, NeurIPS 2024, Datasets and Benchmarks Track\\n\\n[5] Lee, H., Lee, S., Chong, S. and Hwang, S.J., 2021. HELP: Hardware-Adaptive Efficient Latency Prediction for NAS via Meta-Learning. In 35th Conference on Neural Information Processing Systems, NeurIPS 2021 (pp. 
27016-27028).\"}", "{\"title\": \"Thank you for your feedback on our paper\", \"comment\": \"Thank you very much again for increasing your score and your detailed feedback on our work, which ultimately helped us enhance the quality and thoroughness of our paper and experiments.\"}", "{\"summary\": \"The authors propose a multi-objective HW aware NAS algorith that can emit optimized DNN architectures across multiple different target devices using a single search run.\\nTo overcome the challange of having to solve multiple different optimization runs for each considered HW platform, the authors propose using a hypernetwork to generate an architectural distribution across multiple devices based on a \\\"preference vector\\\" and a \\\"hardware device feature vector\\\" from which discrete architectures can then be sampled.\\nFor objective function evaluators, the authors use a Supernetwork as a standin for a accuracy and \\\"MetaPredictors\\\", one for each target device, to estimate hardware metrics.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The extension of existing zero-shot NAS techniques with Hypernetworks for HW aware NAS is motivated well and contextualized nicely for alredy existing techniques.\", \"The authors provide an extensive evaluation for different applications (language modeling, vision, translation), multiple well known NAS searchspaces, and different target spaces (2-3 dimensional).\"], \"weaknesses\": [\"The authors do not show how the proposed DNN architectures would actually perform on the different target systems. As far as I understand it, the HV results shown are calculated using the estimated results from the \\\"MetaPredictors\\\". 
While this still allows for a relative comparison with the other techniques and algorithms evaluated in the paper, it makes it hard to evaluate the actual usefulness and effectiveness of the approach.\"], \"questions\": [\"What assumptions do the MetaPredictors make about the underlying devices when estimating latency and energy requirements? For example, do they assume a particular choice of operating system and process scheduler used, other parallel and system processes running, core utilisation of the DNN inference, runtime library, or HW design used to execute the DNN on the target device?\", \"Especially on smaller systems, memory requirements are often a major bottleneck of DNN inference: Could the approach proposed by the authors be extended to include memory? So far, the evaluation seems to focus only on accuracy, latency and energy.\", \"How exactly are \\\"preference vector\\\" and a \\\"hardware device feature vector\\\" defined? The authors cover many different types of HW in their evaluation: GPUs (e.g. 1080 ti), smartphones (e.g. pixel 2, 3), edge devices (e.g. raspi 4) and dedicated HW accelerators (eyeriss, FPGA), which can be characterised by a number of different metrics, often exclusive to the type of HW considered (e.g. #cores, processor speed, RAM, ... for the raspi4 vs. #streaming cores, VRAM, bus interface, ... for the 1080 ti vs. #LUTs, #BRAM, ... for the FPGA). So it would be really interesting to know how the authors put these metrics into relation and unified them into one vector.\", \"Which FPGA did the authors use? Since an FPGA is just freely programmable hardware, it would also be interesting to know which DNN accelerator design the authors implemented on the FPGA to perform their evaluation.\", \"Fig. 5: Why is MODNAS shown as an upper threshold line (+/- variance)? 
Based on the caption, I would have expected to see an HVI curve that stops after 24 evaluations, similar to the other curves in the plot.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"Thank you again for your great feedback and positive score.\"}", "{\"title\": \"Official Response from Authors (1/2)\", \"comment\": \"We sincerely thank you for your review of our submission and the positive feedback on several aspects like the zero-shot transferability and a unified multi-objective differentiable framework of MODNAS. We are also glad to hear that you found our paper well written, well organized and our experiments thorough. We address your concerns and questions in the following.\\n\\n> ### *My only concern is that training differentiable NAS with multi-objective hardware metrics can often be unstable and may struggle to converge, leading to issues with reproducibility. While the authors have shared the hyperparameters and plan to release the implementation, it would be beneficial to include a discussion of the training techniques in the Appendix to address these challenges.*\\n\\nThank you for highlighting this critical point. We concur that differentiable NAS and hypernetworks often demand careful tuning of hyperparameters (discussed briefly in the limitations paragraph of Section 5 as well). For hyperparameters specific to the supernetwork, we align them exactly with those used in the original search spaces and benchmarks. Overall, we find the hypernetwork to be reasonably robust to the hyperparameter choices. While there is always room for improvement, in our experiments there were 3 components that made MODNAS robust and reliable across benchmarks:\\n\\n1. **The choice of MetaHypernetwork update scheme**: this played a pivotal role in the performance of MODNAS. 
While other gradient update strategies (see lines 406-412) underperformed or started diverging (Figure 6), MGD converged relatively quickly to a hypervolume close to that of the global Pareto front. The convergence of MGD to a Pareto-stationary point is discussed in Desideri [1] and Zhang et al. [2]. The convergence of MGD in bilevel optimization is an open research topic (see recent results from Ye et al. [3] and Yang et al. [4]). One potential scenario where MGD could fail is when the gradient directions of the objectives it is optimizing point in opposing directions; however, this becomes practically unlikely, especially as the number of objectives grows (in our case we use it to find the common gradient across devices, which is, for instance, 13 devices on NB201).\\n2. **The choice of gradient estimation method in the Architect**: In Section 3, lines 270-278, we discuss our choice for the method that enables gradient estimation through discrete variables (as architectures are discrete variables). We noticed that the ReinMax [5] estimator always outperformed previous estimators such as the one in GDAS [6] (Figure 15 in the appendix), so we believe this choice is crucial.\\n3. **Weight entanglement vs. weight sharing in the Supernetwork**: In early experiments on NB201 we noticed that weight sharing in the Supernetwork was not only more expensive, but also much more unstable when compared to weight entanglement [7, 8], even yielding diverging solutions quite often (a common pattern in differentiable NAS with shared weights, as you mention; see Zela et al. [9] for instance).\\n\\nWe hypothesize that all design choices mentioned above have an implicit regularization effect on the upper-level optimization in the bi-level problem, leading to faster convergence and robustness [9, 10, 11]. 
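To make the MGD update scheme from point 1 concrete, here is a minimal sketch (an illustration, not our actual implementation) of the closed-form min-norm common descent direction for the two-gradient case, following Desideri [1]:

```python
import numpy as np

def mgd_direction(g1, g2):
    """Closed-form solution of min_{t in [0,1]} ||t*g1 + (1-t)*g2||^2.

    The resulting vector is a common descent direction for both objectives:
    its inner product with each gradient is non-negative.
    """
    diff = g1 - g2
    denom = float(np.dot(diff, diff))
    if denom == 0.0:  # the two gradients coincide
        return g1
    t = float(np.clip(np.dot(g2 - g1, g2) / denom, 0.0, 1.0))
    return t * g1 + (1.0 - t) * g2

# Two partially conflicting objective gradients
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
d = mgd_direction(g1, g2)  # -> array([0.5, 0.5])
```

For more than two gradients (e.g. the 13 NB201 training devices), the same min-norm problem is solved over the full probability simplex, e.g. with a Frank-Wolfe solver.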
We have updated the paper with Appendix J containing the points we discussed above.\\n\\n> ### *Could you provide the training and validation loss curves?*\\n\\nWe have added the training and validation loss curves for NAS-Bench-201 in **Appendix L**. At each mini-batch iteration we plot the average cross-entropy loss across all devices. As expected, both training and validation cross-entropy go down and we do not notice any overfitting. The high noise is common for sample-based NAS optimizers, since a different sampled architecture is active at each mini-batch iteration.\\n\\n> ### *In Figure 13, do you quantize the probability of $r_m$ to obtain the vector in the embedding layer $e_m$\\u200b? As I understand it, embedding layers typically take indices as inputs.*\\n\\nYes, this observation is correct. We quantize the continuous sampled $r_m \\\\in [0, 1]$ to the discrete $[0, 1, \\u2026, 100]$ interval (see line 14 in https://anonymous.4open.science/r/MODNAS-1CB7/hypernetworks/models/hpn_nb201.py).\\nWe have added a sentence (highlighted in blue) in Appendix E.2 to clarify this in the updated version of the paper.\\n\\n> ### *In Figure 13, if I understand correctly, $e_{\\\\phi_0}$ is a linear layer that maps the device feature vector to a k-dimensional vector. If this is the case, referring to it as an 'embedding' layer may be somewhat confusing, given the use of $e_m$ for embeddings.*\\n\\nYes, that is correct. We apologize for the confusion and we have already fixed this in the text by referring to it as a linear layer instead. Thank you.\"}", "{\"summary\": \"The paper introduces a novel framework for differentiable neural architecture search (NAS) that integrates a generalizable hardware embedding with multi-objective embeddings during supernet training. This approach enables the NAS framework to generalize to unseen hardware platforms and efficiently sample Pareto-optimal subnets based on user preferences. 
To achieve this, the authors design a MetaHypernetwork that uses a sampled preference vector and representative hardware embeddings to guide architecture sampling within the supernet. During training, instead of randomly sampling subnets, the framework selects and trains subnets based on the hardware and objective embeddings, resulting in a better Pareto front. Extensive experiments demonstrate that this framework consistently produces superior Pareto-optimal subnets across various devices, outperforming several state-of-the-art NAS frameworks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The differentiable NAS framework proposed in the paper introduces a novel hypernetwork that enables zero-shot transferability to new devices, which, to the best of my knowledge, is an original contribution. I appreciate how this framework addresses key challenges faced by the NAS community. Previous approaches directly optimize network structures and hardware metrics using differentiable operators for one specific platform, while two-stage NAS frameworks separate supernet training from subnet searching. However, the search process in two-stage frameworks is often costly, as it relies on hardware-specific predictors and estimators for each objective on each platform. Although some works, such as [1, 2], have attempted to reduce the search cost in two-stage frameworks, I believe that a unified solution that provides zero-shot transferability and multi-objective optimization across new platforms has been lacking. This paper fills that gap by proposing a unified framework that combines hardware and objective embeddings with differentiable NAS through the use of a hypernetwork. I believe this is an exciting progress for neural architecture search.\\n\\nThe paper is well-presented and well-organized. 
The extensive experiments that compare it to several state-of-the art NAS frameworks and the in-depth analysis of the framework fully support the claims made in the paper.\\n\\n[1] HELP: Hardware-Adaptive Efficient Latency Prediction for NAS via Meta-Learning\\n\\n[2] BRP-NAS: Prediction-based NAS using GCNs\", \"weaknesses\": \"My only concern is that training differentiable NAS with multi-objective hardware metrics can often be unstable and may struggle to converge, leading to issues with reproducibility. While the authors have shared the hyperparameters and plan to release the implementation, it would be beneficial to include a discussion of the training techniques in the Appendix to address these challenges.\", \"questions\": [\"Could you provide the training and validation loss curves?\", \"In Figure 13, do you quantize the probability of $r_m$\\u200b to obtain the vector in the embedding layer $e_m$\\u200b? As I understand it, embedding layers typically take indices as inputs.\", \"In Figure 13, if I understand correctly, $e_{\\\\phi_0}$ is a linear layer that maps the device feature vector to a k-dimensional vector. If this is the case, referring to it as an 'embedding' layer may be somewhat confusing, given the use of $e_m$ for embeddings.\", \"Suppose an application requires a network with a latency of under 30 ms on a given platform. How does the user preference vector, where each element is a probability within [0, 1], map to specific latency (and energy consumption) in real-world scenarios?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. The authors have addressed all my concerns. 
I look forward to the release of your implementation.\"}", "{\"comment\": \"Thank you for your responses and for providing more content.\", \"a_few_more_comments\": \"1-Regarding utilizing hypernetworks: if the search space is small (i.e., 30 alpha parameters), why do we need to use a hypernetwork? Can\\u2019t we use simple heuristic, Bayesian, or evolutionary algorithms? What benefits do hypernetworks bring to the table?\\n\\n2-I am not expecting the authors to provide results for object detection, but it would be great to have a discussion in the appendix on how to apply the proposed method to object detection and what challenges need to be solved. The following work might be useful:\\n[X] Benchmarking Deep Learning Models for Object Detection on Edge Computing Devices\\n[Y] YOLOBench: Benchmarking Efficient Object Detectors on Embedded Systems\\n[Z] Virtuoso: Energy- and Latency-aware Streamlining of Streaming Videos on Systems-on-Chips\\n\\n3- I still believe that Eyeriss is an old work and the authors need to use more recent work/numbers with 2-4 nm technology size to get a more accurate estimation. However, I do understand this is out of the scope of this paper. So, no need to take action on this. \\n\\n4- I was referring to different DNN NAS search spaces compared to the MobileNet search space. For example, the DARTS search space and not the GPT ones.\\n\\n5- Section 5 doesn\\u2019t discuss possible search space complexities that MODNAS struggles with.\\n\\n6- Regarding power, I think you need to use power, which is independent of latency. When you are using latency, if the latency is reduced, the energy also will be reduced if the power is fixed. So, if you optimize for latency, energy also is optimized. 
Using power makes your problem more interesting to solve and as you mentioned has more practical application.\"}", "{\"title\": \"Official Response from Authors (2/2)\", \"comment\": \"> ### *Suppose an application requires a network with a latency of under 30 ms on a given platform. How does the user preference vector, where each element is a probability within [0, 1], map to specific latency (and energy consumption) in real-world scenarios?*\\n\\nWe agree with you that this is a very practical and relevant scenario. Therefore, in the paper we have conducted an experiment with a slightly different version of MODNAS that allows us to incorporate user constraints without the need to map them to the [0, 1] probability simplex. You can find the description of the method under the \\u201c*MODNAS vs. constrained single-objective optimization*\\u201d paragraph of **Section 4 (lines 420-436)**. You can find the empirical results in Figure 7. Note that in the legend we write e.g. \\u201cMODNAS (0.2)\\u201d, however, the algorithm is utilizing the actual hardware constraint (which we map to the preference vector value 0.2 for the plotting). In this particular example, since we knew the entire NB201 space exhaustively, we made sure to select hardware constraints that would map to equidistant preference vectors, however, this should not matter in a more practical case. To recap, running MODNAS using hardware constraints results in only one change in the algorithm: If the predicted hardware metric value from the MetaPredictor is smaller than the constraint (e.g. 30 ms), the gradient in lines 6 and 14 of Algorithm 1 will not be computed (constraint satisfied), otherwise they will (constraint violated) and the algorithm will try to optimize this metric.\\n\\n\\nWe hope that our response addresses your concerns and strengthens the paper. If any points remain unclear, we are open to providing further explanation. Thank you again for your time and valuable feedback. 
We appreciate your consideration of these responses in your evaluation.\\n\\n---\\n\\n**\\u2013References\\u2013**\\n\\n[1] Jean-Antoine D\\u00e9sid\\u00e9ri. Multiple-Gradient Descent Algorithm (MGDA). [Research Report] RR-6953, 2009. inria-00389811v2\\n\\n[2] Zhang, Q., Xiao, P., Ji, K. and Zou, S., 2024. On the Convergence of Multi-objective Optimization under Generalized Smoothness. arXiv preprint arXiv:2405.19440.\\n\\n[3] Ye, F., Lin, B., Cao, X., Zhang, Y. and Tsang, I.W., 2024. A first-order multi-gradient algorithm for multi-objective bi-level optimization. In ECAI 2024 (pp. 2621-2628). IOS Press. \\n\\n[4] Yang, X., Yao, W., Yin, H., Zeng, S. and Zhang, J., 2024. Gradient-based algorithms for multi-objective bi-level optimization. Science China Mathematics, pp.1-20.\\n\\n[5] Liu, L., Dong, C., Liu, X., Yu, B. and Gao, J., 2024. Bridging discrete and backpropagation: Straight-through and beyond. Advances in Neural Information Processing Systems, 36.\\n\\n[6] Dong, X. and Yang, Y., 2019. Searching for a robust neural architecture in four gpu hours. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1761-1770).\\n\\n[7] Sukthanker, R.S., Krishnakumar, A., Safari, M. and Hutter, F., 2023. Weight-Entanglement Meets Gradient-Based Neural Architecture Search. International Conference in Automated Machine Learning 2024\\n\\n[8] Cai, H., Gan, C., Wang, T., Zhang, Z. and Han, S., 2019. Once-for-all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791.\\n\\n[9] Zela, A., Elsken, T., Saikia, T., Marrakchi, Y., Brox, T. and Hutter, F., Understanding and Robustifying Differentiable Architecture Search. In International Conference on Learning Representations.\\n\\n[10] Chen, X. and Hsieh, C.J., 2020, November. Stabilizing differentiable architecture search via perturbation-based regularization. In International conference on machine learning (pp. 1554-1565). 
PMLR.\\n\\n[11]Smith, S.L., Dherin, B., Barrett, D. and De, S., On the Origin of Implicit Regularization in Stochastic Gradient Descent. In International Conference on Learning Representations.\"}", "{\"comment\": \"We are really happy that we have addressed your concerns and that you will increase your score (which you might have forgotten to do since it is not updated in OpenReview). Ultimately, your feedback was very helpful and made our submission stronger. Thank you very much.\"}", "{\"title\": \"Official Response from Authors (1/2)\", \"comment\": \"Thank you for carefully reading our paper, your detailed feedback and the positive score. We are encouraged to see that you identify several positive aspects of the work. We address your concerns and respond to each of your questions below.\\n\\n> ### *Regulating the trade-off among user preferences with scalarization: In Figure 4, solutions on the Pareto frontier can be achieved by modulating user preferences. However, in HDX [1], which is a one-shot NAS with hardware constraints, claims that modulating hyperparameters linearly doesn\\u2019t lead to linearly distributed results. Is MetaHypernetwork free from this problem? Can real experimental results be plotted like Figure 4 to substantiate the integrity of MetaHypernetwork?*\\n\\nFollowing your suggestion, we have now plotted the Pareto front with the respective preference vectors in the same plot (https://anonymous.4open.science/r/MODNAS-1CB7/pareto_rays.png). We utilize one of our runs on the NAS-Bench-201 test devices, namely Eyeriss. As seen in the plots, the preference vectors and the points on the Pareto-Front are very aligned with each other, hence substantiating the integrity of the MetaHyperNetwork. Moreover, we have added a **new Appendix K in the updated PDF** describing this experiment. Thank you very much for this suggestion. 
Ultimately, this experiment helped us make our case stronger by validating the integrity of the generated solutions.\\n\\n> ### *\\u201dnetwork architectures the same as or near ground truth solutions may hard to be reach with, where other works can reach with huge search costs.\\u201d and \\u201cWhat the review wonders is whether can other works find near-GT solutions with a larger time budget?\\u201d*\\n\\nWe agree with you that blackbox multi-objective optimizers can potentially reach the global Pareto front if the compute resources are not a concern and given enough time; however, it is not practical to train or even evaluate these architectures, especially for larger model sizes (e.g. Transformer spaces from HW-GPT-Bench). Sometimes in practice, the user wants to get a quick estimation of the Pareto front, and this is the use-case where MODNAS shines. Given enough budget, even random search will find a near optimal solution. For instance in NB201, the size of the search space is K=15625 architectures. The optimal theoretical number of random search steps $n$ to achieve a success probability $\\\\alpha$ is approximately: $n \\\\geq K \\\\ln(1/(1-\\\\alpha))$, therefore for random search to have a success probability more than 0.5 it requires $n \\\\geq 10781$ iterations in theory. For the other guided search methods, this number is even smaller, though similar to MODNAS, they have the same limitation that they can converge to a local minimum. Nevertheless, we conducted the same experiment as the one in Figure 3 in the paper, but this time with the baselines given 4x more budget than MODNAS. You can find the results in this link: https://anonymous.4open.science/r/MODNAS-1CB7/radar_hypervolume.pdf. 
We updated the paper with **Appendix M with this new result and the above discussion**.\\n\\n> ### *How are user preference and hardware device embeddings designed?*\\n\\n- **Preference vectors:** During the training stage of MODNAS we sample a scalarization uniformly at random from the probability simplex (e.g. for 2 objectives a scalarization can be [0.24, 0.76])-- scalarizations are in [0,1] and their sum is 1. During inference, we sample 24 fixed points which are uniformly spaced on a circle (for 2 objectives) or a hypersphere (for >2 dimensions). Preference vectors are quantized ([0.24, 0.76] \\u2192[24, 76]) before being passed as input to the MetaHypernetwork.\\n- **Hardware device embeddings:** We use a simple scheme similar to HELP [4] to define the hardware embedding. We sample 10 fixed architectures from a search space, compute their hardware metric value (latency/energy) on a particular device and use this to compute the hardware embedding vector of size 10.\"}", "{\"title\": \"Thank you for feedback on our paper\", \"comment\": \"Thank you very much again for a positive score and your detailed review, which helped us greatly improve our work.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We would like to thank all the reviewers for reading our paper and their insightful feedback. We also appreciate the positive average score. As a general response, we want to emphasize the following main changes in the submission PDF (highlighted in blue):\\n1. **Appendix J**: additional discussion on the robustness of MODNAS (*addressing Reviewer KeBz*)\\n2. **Appendix K**: alignment of preference vectors with pareto front (*addressing Reviewer ugEA and ei5k*)\\n3. **Appendix L**: training and validation loss curves (*addressing Reviewer KeBz*)\\n4. **Appendix M**: Multi-objective optimization baselines with more budget (*addressing Reviewer ugEA*)\\n5. **Appendix N**: Additional Details on the Architect (*addressing Reviewer ugEA*)\\n6. 
Updated Figure 15 with the MODNAS + MiLeNAS baseline (*addressing Reviewer ei5k*)\\n\\nWe hope that after reading our responses your concerns will be addressed and you will consider increasing your scores. Thank you very much for your time.\"}" ] }
9mOs2Bxd3Q
Extending Stability Analysis to Adaptive Optimization Algorithms Using Loss Surface Geometry
[ "Ashish Dubey" ]
Adaptive optimization algorithms, such as Adam Kingma & Ba (2015) and RMSProp Tieleman & Hinton (2012), have become integral to training deep neural networks, yet their stability properties and impact on generalization remain poorly understood Wilson et al. (2017). This paper extends linear stability analysis to adaptive optimizers, providing a theoretical framework that explains their behavior in relation to loss surface geometry Wu et al. (2022); Jastrzębski et al. (2019). We introduce a novel generalized coherence measure that quantifies the interaction between the adaptive preconditioner and the Hessian of the loss function. This measure yields necessary and sufficient conditions for linear stability near stationary points, offering insights into why adaptive methods may converge to sharper minima with poorer generalization. Our analysis leads to practical guidelines for hyperparameter tuning, demonstrating how to improve the generalization performance of adaptive optimizers. Through extensive experiments on benchmark datasets and architectures, including ResNet He et al. (2016) and Vision Transformers Dosovitskiy et al. (2020), we validate our theoretical predictions, showing that aligning the adaptive preconditioner with the loss surface geometry through careful parameter selection can narrow the generalization gap between adaptive methods and SGD Loshchilov & Hutter (2018).
[ "Adaptive Optimization", "Linear Stability Analysis", "Generalization", "Loss Surface Geometry", "Deep Neural Networks" ]
Reject
https://openreview.net/pdf?id=9mOs2Bxd3Q
https://openreview.net/forum?id=9mOs2Bxd3Q
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wC41KtbJR6", "rSoX7d6DA9", "kQtrosqShL", "gNhOemUxcZ", "SepaAoGEUt", "IsMKMx0slE" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "meta_review", "decision" ], "note_created": [ 1730714853686, 1731272213635, 1729687710183, 1729787181374, 1734657348941, 1737524287952 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13895/Reviewer_wp95" ], [ "ICLR.cc/2025/Conference/Submission13895/Reviewer_an1a" ], [ "ICLR.cc/2025/Conference/Submission13895/Reviewer_c15z" ], [ "ICLR.cc/2025/Conference/Submission13895/Reviewer_PX69" ], [ "ICLR.cc/2025/Conference/Submission13895/Area_Chair_5C1J" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper provides a theoretical framework for studying the stability properties of adaptive optimization algorithms such as Adam and RMSProp. The key idea is to introduce a generalized coherence measure, quantifying the interaction between the preconditioner and the Hessian of the loss function. The analysis presents one justification for why adaptive may often converge to sharper minima, leading to worse generalization performance. The authors demonstrate how the proposed framework could be used to tune the hyperparameters for Adam. Empirically, the authors justify their framework on standard image classification tasks by showing the relationship between test accuracy, sharpness, and maximum eigenvalue of different optimizers.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written, has clear motivation, and presents valuable contributions to the ICLR community. 
The background section is particularly thorough and accessible.\", \"The theoretical analysis appears correct and provides practical guidelines for tuning Adam's hyperparameters.\"], \"weaknesses\": [\"While the linear stability analysis is valuable, the framework may not fully capture why adaptive optimization algorithms lead to larger generalization gaps. SGD and Adam exhibit different implicit biases and optimization trajectories, which could be the primary factors. What would happen if one replaces SGD on an Adam-trained network near convergence?\", \"Pan et al. [1] also show that Adam leads to sharp minima (they are more stable in sharp regions). Could the authors clarify how this analysis differs?\", \"Key assumptions are not empirically justified, particularly the assumption about preconditioner convergence to a constant at the training's end. It would be helpful to describe the limitations of the proposed framework.\", \"The empirical analysis feels limited, focusing mainly on image classification tasks. Validation on other domains (NLP, RL) would strengthen the claims. However, although this is one weakness of the paper, I did not put much weight on it. Several key details are missing to reproduce the experiments in the paper (e.g., how the authors chose the hyperparameters). I believe that properly describing these details is important for optimization (or analysis) works. I am willing to increase my score if these details are properly described.\", \"[1] Zhou, Pan, et al. 
\\\"Towards theoretically understanding why sgd generalizes better than adam in deep learning.\\\" Advances in Neural Information Processing Systems 33 (2020): 21285-21296.\"], \"questions\": [\"How does the framework extend to other adaptive optimization methods beyond Adam and RMSProp?\", \"According to this analysis, why does Adam perform better on transformer-based architectures (e.g., language modeling)?\", \"(Minor) Figure number is missing in line 82.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper expands the scope of linear stability analysis to encompass adaptive optimization algorithms such as Adam and RMSProp. It establishes a theoretical framework that links the stability characteristics of these algorithms to the topography of the loss landscape. The paper introduces an innovative generalized coherence metric that assesses the interplay between the adaptive preconditioner and the Hessian matrix of the loss function. This metric yields both necessary and sufficient conditions for linear stability in the vicinity of stationary points. The study's results indicate that adaptive optimizers have the capacity to handle sharper minima but may suffer from inferior generalization when compared to conventional methods like SGD. 
The paper also provides actionable advice on hyperparameter tuning to address this potential drawback.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents an extensive theoretical framework that broadens the conventional stability analysis to include adaptive optimization algorithms, deepening our comprehension of their operational dynamics in the context of the loss surface's geometric properties.\\n\\nIt introduces a groundbreaking coherence measure that quantifies the relationship between the adaptive preconditioner and the Hessian matrix, shedding light on the prerequisites for linear stability and their impact on convergence patterns.\\n\\nThe study offers practical recommendations for hyperparameter tuning, designed to bolster the generalization capabilities of adaptive optimizers. This addresses the noted disparity in generalization performance when compared to stochastic gradient descent (SGD).\", \"weaknesses\": \"The contribution of this work appears limited, as it seems to build incrementally on existing stability analyses of SGD. The primary difference highlighted is that while the precondition matrix in SGD is an identity matrix, in the adaptive method, it is a diagonal matrix with positive values.\\n\\nIn the theoretical proof, the precondition matrix is assumed to be constant. However, in the experiments, when the model weights converge, the precondition matrix may change slowly rather than remaining stable. It is unclear whether these small changes would affect the validity of the proof provided in the paper.\\n\\nSome details of the experiment are unclear, particularly concerning Figure 1 and Table 2. Were these results obtained by training from scratch? Were the experiments run multiple times to ensure consistency? 
In Figure 1, the results suggest that the method converges only during the initial phase of the training period, indicating that further tuning may be needed.\", \"questions\": \"Could the authors consider making their code publicly available to facilitate reproduction of the study?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper provides necessary and sufficient conditions for the linear stability of adaptive optimization methods (such as RMSProp, Adam). The authors introduce a generalized coherence measure to capture the interaction between the adaptive preconditioner and the Hessian, which is then used to derive a linear stability condition. The theoretical findings are supported by experiments conducted on real-world datasets.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper is easy to follow and tackles the important problem of analyzing stability conditions of adaptive methods, well-motivated by a preliminary experiment comparing the generalization ability of SGD and Adam.\", \"weaknesses\": \"This paper contains several significant technical flaws, with mathematical arguments that lack both rigor and clarity. Key issues include:\\n\\n1. **The notion of \\\"linear stability\\\" is not clearly defined**. \\n - The concept of linear stability is central to the paper's claims, yet it is never explicitly defined. Since there are a few different flavors of linear stability in the literature, this omission weakens the theoretical foundation. The authors should provide a precise definition before presenting any theorems or formal arguments based on it.\\n\\n2. **The linear stability condition for SGD is incorrect**. \\n - In Section 2.1, the paper presents the condition $\\\\lVert I - \\\\eta H(\\\\theta^*) \\\\rVert < 1$ as the linear stability criterion for SGD (Eq. 2). 
While this holds for (full-batch) GD, it does not apply to SGD due to the noise introduced by stochastic sampling. As demonstrated by prior works such as [Wu et al., 2018] and [Wu et al., 2022], the stability of SGD also depends on factors such as noise covariance and batch size, none of which are accounted for in the paper's condition.\\n\\n3. **The stability condition for adaptive methods is incorrectly derived**. \\n - In Section 3.3.1, the paper introduces $p_i$ as the diagonal element of $P^*$ corresponding to the eigenvalue $\\\\lambda_i$ of $H(\\\\theta^*)$. This reasoning holds only when the eigenvectors of the Hessian are aligned with the coordinate axes, i.e., when the Hessian is diagonal. However, in general, the eigenvectors are not aligned with the coordinate system, making Eq. (13) and the subsequent analysis incorrect. As this forms the basis of the paper's linear stability condition for adaptive methods, the main theoretical result is flawed.\\n\\n4. **The proof of convergence for the Adam preconditioner in Appendix B is incorrect**. \\n - The paper claims to prove that the Adam preconditioner $P_t$ converges to a constant matrix $P^*$ as $t \\\\to \\\\infty$. However, the proof in Appendix B lacks rigor. In Line 672, the argument that exponential decay of earlier gradients implies that $v_t$ reaches a steady state is unsubstantiated. For such a claim to be valid, the assumptions must be explicitly stated, and the proof must be strengthened with a more rigorous analysis.\\n\\nAdditionally, the paper omits citation of a relevant work, [Cohen et al., 2022], which analyzed the stability condition of adaptive methods (assuming stationary preconditioners) on quadratic problems. According to their results, the stability condition for RMSProp (and Adam) is that the preconditioned sharpness $\\\\lambda_{\\\\max}(P^{-1}H)$ remains below $2/\\\\eta$ ($38/\\\\eta$ for Adam). 
This paper should cite [Cohen et al., 2022] and provide a careful discussion comparing and contrasting their results with the findings in this work.\\n\\n---\\n\\n**References**\\n\\n[Wu et al., 2018] How SGD Selects the Global Minima in Over-parameterized Learning: A Dynamical Stability Perspective, NeurIPS 2018.\\n\\n[Wu et al., 2022] The alignment property of SGD noise and how it helps select flat minima: A stability analysis, NeurIPS 2022.\\n\\n[Cohen et al., 2022] Adaptive Gradient Methods at the Edge of Stability, arXiv preprint 2022.\", \"questions\": \"Could you provide more details on the experimental setup for Tables 1 and 2? Specifically, what batch size and number of epochs were used, and were the hyperparameters carefully tuned for each setting? Additionally, how was sharpness computed in the experiments according to Eq. (24)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors extend the linear stability analysis to adaptive optimization algorithms. They use this analysis to hypothesize why adaptive optimizers converge to sharper minima compared to SGD, which has been associated with poorer generalization performance [1],[2]. The authors also introduce a new measure, which they term the Generalized Coherence Measure, where they also show a correlation between a lower generalized coherence measure and better test accuracy\\nThe authors provide some experiments on vision tasks, where it is also observed that better test accuracy is associated with smaller maximum eigenvalue and lower sharpness.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors discuss an important topic on the dynamics of adaptive methods. Also, the paper is very easy to follow.\", \"weaknesses\": [\"Overall the paper is poorly written, and a section on related work is missing. 
Important related work such as [1] was not mentioned.\", \"I believe that Eq. (13), which is a key part of this paper, is simply wrong. One cannot decompose the eigenvalues of\", \"$M = I - \\\\eta P^{\\\\star-1} H(\\\\theta^{\\\\star})$\", \"as $1 - \\\\eta \\\\frac{\\\\lambda_i}{p_i}$ because this requires $P^{\\\\star-1}$ and $H(\\\\theta^{\\\\star})$ to be co-diagonalizable, which is generally not the case.\", \"The novelty of this paper is quite limited. Apart from introducing a new measure, which seems to correlate with the test accuracy, the authors mainly just confirm the connections that have already been observed previously (such as the hypothesized connection between lower sharpness and better generalization).\", \"Miscellaneous errors, such as: in line 83: Figure ?? not referenced correctly, in line 117: $\\\\rho$ not defined, in Figure 2: only one minimum is shown...\", \"[1] Cohen, Jeremy M., et al. \\\"Adaptive gradient methods at the edge of stability.\\\" arXiv preprint arXiv:2207.14484 (2022).\"], \"questions\": \"see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper extends linear stability analysis to include adaptive optimization algorithms like Adam and RMSProp. It develops a theoretical model connecting the stability of these algorithms to the shape of the loss function's landscape. It also introduces a new index to measure how well the algorithm's adaptive preconditioner aligns with the curvature of the loss function. This index is used to determine stability around stationary points and offers insights into why adaptive methods may converge to sharper minima with poorer generalization. 
The theory is supported by experiments on benchmark datasets and architectures.\\n\\nThe reviewers appreciate that the presented theoretical framework can broaden conventional stability analysis to include adaptive optimization algorithms, deepening our comprehension of their operational dynamics in the context of the loss surface's geometric properties. They also appreciate the new coherence measure, which quantifies the relationship between the adaptive preconditioner and the Hessian matrix. They also find the paper clearly motivated and with a thorough and accessible background section.\\n\\nDespite these strengths, the paper has issues that need to be addressed before it can be published. In particular, reviewer c15z believes the paper has significant technical flaws, with mathematical arguments that lack both rigor and clarity, and provides a detailed list of areas where these issues arise. Similarly, reviewer px69 believes that Eq. (13), which is a key part of this paper, is wrong. Reviewers an1a and px69 both ask for the author's clarification on the novelty of the contributions of the paper: they both believe the current contributions build incrementally on existing stability analyses of SGD. Reviewers an1a and wp95 also express concern about the assumption that the preconditioner matrix is constant in the theoretical proofs. \\n\\nThe authors decided not to respond to the feedback, which leaves the issues unresolved. Given the unresolved concerns, especially around errors and flaws in the theory part of the paper, the paper cannot be accepted in its current form. I encourage the authors to consider resubmitting their work after fixing the problems mentioned in this review cycle.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer c15z believes the paper has significant technical flaws, with mathematical arguments that lack both rigor and clarity, and provides a detailed list of areas where these issues arise. 
Similarly, reviewer px69 believes that Eq. (13), which is a key part of this paper, is wrong. Reviewers an1a and px69 both ask for the author's clarification on the novelty of the contributions of the paper: they both believe the current contributions build incrementally on existing stability analyses of SGD. Reviewers an1a and wp95 also express concern about the assumption that the preconditioner matrix is constant in the theoretical proofs.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
9mO9CNgNrh
TableTextGrad: A Reflexive Framework for Tabular Understanding
[ "Chufan Gao", "Jintai Chen", "Jimeng Sun" ]
Table understanding is a complex task that requires not only grasping the semantics of free-form questions but also accurately reasoning over semi-structured tables. Recently, promising approaches designed sophisticated prompts that leverage large language models (LLMs) by combining Chain-of-Thought strategies with function calls, consequently demonstrating competitive results without requiring fine-tuning. However, creating sufficiently effective prompts remains a challenge. Without fine-tuning, all necessary priors must be incorporated directly into the initial prompt, making prompt design even more critical. Motivated by the recent advancements in the ''textual gradient'' space, we introduce TableTextGrad, a novel framework that enables automatic prompt optimization by leveraging the ``differentiation'' of prompting pipelines through textual gradients. Concretely, according to the feedback of LLMs, TableTextGrad iteratively refines each function within the Chain-of-Thought steps and function calls, resulting in more accurate and reliable table reasoning outcomes. Experiments on table question-answering datasets demonstrate that our integrated approach achieves significant improvements, setting new state-of-the-art results on the WikiTableQA benchmark. Our TableTextGrad not only enhances the reasoning capabilities of LLMs in the table reasoning task but also lays a groundwork for more robust and generalizable prompting pipelines due to its simplicity and effectiveness.
[ "Tabular Understanding", "Table QA", "Prompting" ]
https://openreview.net/pdf?id=9mO9CNgNrh
https://openreview.net/forum?id=9mO9CNgNrh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vCwFEOypEV", "uideCRDq8t", "h5ec1eZwZ9", "WCCyXwPxZM", "TlKXzPbEOA", "SISzURrwrW", "MYdlGBsuN4", "MErkkSvFES", "JEDBadLpVU", "HLgfzdbFmd", "GA7D3IQ4Oa", "FQha6GDr6i", "8WEU2VnWJO", "0cY57vla9y" ], "note_type": [ "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733860565555, 1730611976619, 1732983084265, 1732689175846, 1732620622400, 1731041312970, 1730690174054, 1732620708861, 1732984546845, 1730229426964, 1732620754116, 1732621079409, 1732620562689, 1733212580131 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13850/Authors" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_nLgd" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_mGii" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_nLgd" ], [ "ICLR.cc/2025/Conference/Submission13850/Authors" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_mGii" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_aaBn" ], [ "ICLR.cc/2025/Conference/Submission13850/Authors" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_aaBn" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_ieKk" ], [ "ICLR.cc/2025/Conference/Submission13850/Authors" ], [ "ICLR.cc/2025/Conference/Submission13850/Authors" ], [ "ICLR.cc/2025/Conference/Submission13850/Authors" ], [ "ICLR.cc/2025/Conference/Submission13850/Reviewer_ieKk" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a prompt optimisation technique, named TableTextGrad, for the table QA understanding task. 
Specifically, TableTextGrad uses the *Chain of Table* method as the basis and applies the TextGrad idea to it to automatically optimises the chain of prompts used.\", \"evaluation_is_done_on_two_benchmark_datasets\": \"WikiTQ and TabFact, and TableTextGrad achieves the best performance, compared against a number of fine-tuning and inference-only methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Training-free methods are a useful approach to improving LLM performance without having to access their parameters.\", \"Table understanding is an interesting and practical task.\", \"The proposed technique makes intuitive sense, and shows strong performance.\"], \"weaknesses\": [\"The proposed technique is incremental, and the novelty is a bit limited. It essentially applies TextGrad to the *Chain of Table* backbone. While effective, this is not exactly surprising that it works nor groundbreaking.\"], \"questions\": [\"In Figure 1, what is the right small rectangle in the \\\"Gradient Update Example\\\" part of the figure? What is its relationship with the left rectangle?\", \"In Sec. 4.2, you mention \\\"the results\\\" in line 396-397. However, I don't see it being referred to in the paper. Thus, in which table/figure are the results shown?\", \"You discussed training efficiency in Sec. 4.5. However, the discussion is abstract without empirical evidence. I'd like to see a comparison of running time & token consumption/cost against the baselines.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply. I am willing to keep my score.\"}", "{\"comment\": \"Thanks for providing your response and additional empirical results.\\n\\nSome of my questions have been addressed. However, my concern on novelty (shared by reviewers mGii and aaBn) still stands. 
Since my original score is already high, I will keep my original evaluation.\"}", "{\"comment\": \"Thank you for your feedback; we have addressed some points as follows.\\n\\n- Novelty\", \"the_novelty_of_tabletextgrad_lies_in_extending_textgrad_principles_to_a_new_and_significantly_more_complex_paradigm\": \"the optimization of conditional branching prompt pipelines. Unlike standard applications of TextGrad, which focus on one-shot text responses, our work adapts and extends this methodology to function-based Chain-of-Thought reasoning in table understanding tasks. This required novel contributions, including the definition of differentiable multi-step reasoning and its integration into an iterative optimization framework capable of refining hierarchical and branching reasoning paths. These challenges, which are unique to the structured and conditional nature of table QA tasks, have not been addressed in prior work.\\n\\n- Additional Evaluations\\n\\nFor further evaluation of our method, we have added a suite of additional evaluations highlighted in red in the updated PDF, including experiments beating baselines on FeTaQA (Appendix A.3), poor prompt initialization to show that our framework can recover and optimize reasoning performance even under suboptimal conditions (4.4), noisy questions to show that TableTextGrad can decipher intent (4.5), and relevant row/columns identification (Appendix A.4). We hope that this offers a more comprehensive evaluation of our effectiveness! \\n\\nThank you again for your comments--we hope that our responses offer a reconsideration of your scoring.\"}", "{\"summary\": \"The paper presents TableTextGrad, a framework for table understanding. Motivated by the recent advancements in the \\u201ctextual gradient\\u201d space, this paper introduces TableTextGrad, a novel framework that enables automatic prompt optimization by leveraging the\\n\\u201cdifferentiation\\u201d of prompting pipelines through textual gradients. 
In the process, an initial LLM (Agent 1) generates table operations iteratively for table understanding. After each step, the table is updated based on the generated function calls and arguments. Then, in the validation phase, a second LLM agent (Agent 2) evaluates the predicted answers. If the answers are incorrect, natural language feedback on how to improve the prompt is backpropagated as textual gradients. These gradients are backpropagated to every prompting step used in generating the answer, including those for function selection, argument generation, and the final table query. The framework is tested on datasets like WikiTableQA and TabFact and compared with various baseline methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper adapts the TextGrad technique to the table understanding field, which leverages the Automatic Prompt Updating pipeline, which refines prompts through natural language feedback and gradient updates on training data. The contribution of this paper is simple and easy to understand.\\n\\nFrom the experimental results, we observe that this paper demonstrates its ability to significantly improve performance in table question-answering tasks. Leveraging the \\u201cdifferentiation\\u201d of prompting pipelines refines each function within the Chain-of-Thought steps and function calls, leading to more accurate and reliable table reasoning outcomes. The results show that it achieves new state-of-the-art performance on benchmarks like WikiTableQA and TabFact, outperforming many existing methods.\", \"weaknesses\": \"Although this paper shows its merits in Dynamic Prompt Optimization and Soft Selection of Table Elements, it has some weaknesses that need improvement.\\n\\n1. 
The novelty of borrowing the idea of TextGrad for the table understanding domain is limited, since the contribution lies in the domain adaptation of an existing methodology rather than an original contribution built from scratch.\\n\\n2. The performance of TableTextGrad decreases significantly when dealing with large tables. The increased complexity and token context required for large tables seem to lead to issues with memory and attention span within the model. Have the authors considered techniques like table chunking or hierarchical attention mechanisms to address the limitations with large tables?\\n\\n3. The core reasoning capabilities of TableTextGrad rely on large language models like GPT-4o and LLaMA 3.1, which are resource-intensive. Also, the improvements over these large models are slight and have not been evaluated with significance tests.\", \"questions\": \"Are there any plans to address the limitations of TableTextGrad in future work? For example, how to deal with large tables?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces TableTextGrad, a framework that enhances large language models' ability to reason over tables by automatically optimizing prompts through textual gradients. It combines the flexibility of inference-only techniques with data-driven learning, achieving state-of-the-art results on WikiTableQA and TabFact benchmarks. The approach refines prompts iteratively based on LLM feedback, improving accuracy in table reasoning tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing is relatively clear.\\n2. The model has achieved good performance on two datasets.\", \"weaknesses\": \"1. The evaluation datasets are relatively small, with tests conducted on only two datasets, and the evaluation tasks are quite limited.\\n\\n2. 
The authors claim to utilize the TextGrad method. It appears that TableTextGrad is merely an application of TextGrad.\", \"questions\": \"1. Can this method be extended to more table tasks, such as text-to-SQL hybrid table question answering?\\n2. Can they explain the significant innovations that this method introduces based on TextGrad?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for reviewing our paper--we share the sentiment that this is interesting and practical! Our responses are below:\\n\\n- Novelty:\", \"the_novelty_of_tabletextgrad_lies_in_extending_textgrad_principles_to_a_new_and_significantly_more_complex_paradigm\": \"the optimization of conditional branching prompt pipelines. Unlike standard applications of TextGrad, which focus on static text sequences, our work adapts and extends this methodology to function-based Chain-of-Thought reasoning in table understanding tasks. This required novel contributions, including the definition of differentiable \\\"feedback spaces\\\" for multi-step reasoning and their integration into an iterative optimization framework capable of refining hierarchical and branching reasoning paths. These challenges, which are unique to the structured and conditional nature of table QA tasks, have not been addressed in prior work.\\n\\nFor further evaluation of our method, we have added a suite of additional evaluations highlighted in red in the updated PDF, including experiments beating baselines on FeTaQA (Appendix A.3), poor prompt initialization to show that our framework can recover and optimize reasoning performance even under suboptimal conditions (4.4), noisy questions to show that TableTextGrad can decipher intent (4.5), and relevant row/columns identification (Appendix A.4). We hope that this offers a more comprehensive evaluation of our effectiveness as well! 
\\n\\n- \\\"Gradient Update Example\\\" \\n\\nThat Rectangle indicates the further backpropagation of textual gradients to previous prompts in the pipeline before the final query! We have clarified this in the new figure.\\n\\n- Missing table reference\\n\\nUpdated to correctly refer to table's ablations!\\n\\n- Training efficiency\\n\\nThis is a good point, and it is indeed difficult to measure our efficiency gains given that most baselines do not include the efforts of manual prompt running. We have added the relevant table run-time results in Appendix A.2, where we make it clear that TableTextGrad operates in a space that is essentially separate from traditional inference time-costs (which is the same as Chain-of-Table). We ran experiments with up to 100 validation samples and 128 training samples in total, over 32 iterations (validation samples are also reran every training iteration). That would make our training cost around 3328 * (<= 25) * 32*10 prompts.\\n\\nThank you again for your comments-- we hope that our responses offer a reconsideration of your scoring.\"}", "{\"comment\": \"Thank you very much for your explanation. Based on my concerns about the novelty, I still maintained my rating.\"}", "{\"summary\": \"This paper introduces *TableTextGrad*, a framework for enhancing table reasoning in LLMs through automated prompt optimization. It refines prompts iteratively using \\\"textual gradients\\\" based on model feedback, improving multi-step reasoning without extensive manual prompt engineering. Key innovations include extending Chain-of-Thought prompting with adaptive prompt adjustments and employing soft selection to retain broader table context, enabling more accurate comprehension in complex table tasks with minimal computational overhead.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
**Good Motivation for Automated Prompt Optimization**: The paper is well-motivated, highlighting the challenges of manual prompt engineering in table reasoning tasks and effectively positioning automated prompt optimization as a practical solution. The use of \\\"textual gradients\\\" for iterative refinement addresses the need for adaptive prompt tuning in large language models, making the approach both relevant and impactful.\\n\\n2. **Comprehensive Ablation Study and Analysis**: The paper includes a thorough ablation study, providing valuable insights into the contributions of different components, such as soft versus hard selection and tuning all prompts versus only the final prompt. This detailed analysis strengthens the understanding of TableTextGrad\\u2019s performance and demonstrates the robustness and versatility of the proposed method across various model configurations and datasets.\", \"weaknesses\": \"1. **Questionable SOTA Claim**: The paper asserts achieving state-of-the-art performance; however, this claim is **not entirely accurate**. According to results in the E5 paper (Table 3) [1], E5 with GPT-4 achieves a score of 88.77 on TabFact, surpassing the 88.75 reported here. This discrepancy **raises concerns about the rigor of this paper's experimental claims** and the reliability of its benchmarking methodology.\\n\\n2. **Limited Dataset Evaluation**: The evaluation is restricted to only two Table QA datasets (WikiTableQA and TabFact), which may not sufficiently demonstrate the generalization of the approach. 
Including a more complex and realistic dataset, such as HiTab [2], would provide a more robust assessment of the framework's applicability to diverse, real-world tabular data.\\n\\n[1] E5: Zero-shot Hierarchical Table Analysis using Augmented LLMs via Explain, Extract, Execute, Exhibit and Extrapolate, NAACL 2024\\n[2] HiTab: A Hierarchical Table Dataset for Question Answering and Natural Language Generation, ACL 2022\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for reviewing our paper and finding it well motivated!\\n\\n- SOTA claim\\n\\nWe appreciate the reviewer bringing the E5 paper to our attention. We were not aware of this paper during the preparation of our submission, and we acknowledge their reported result of 88.77 on TabFact, which slightly surpasses our score of 88.75 by 0.02 points. While our claim of state-of-the-art performance was made based on the studies we were aware of at the time, we recognize that this result challenges that claim.\\n\\nThat said, we emphasize that the difference of 0.02 is negligible in practical terms and within the margin of variability often observed in such evaluations. Our primary contribution is not merely achieving strong performance but introducing a novel and generalizable framework, TableTextGrad, which extends the TextGrad methodology to optimize hierarchical and branching reasoning paths in table QA. \\nWe will update our manuscript to address this discrepancy!\\n\\n\\n- Limited Dataset Evaluation\\n\\nThank you for this valuable suggestion. 
For further evaluation of our method, we have added a suite of additional evaluations highlighted in red in the updated PDF, including experiments beating baselines on FeTaQA (Appendix A.3), poor prompt initialization to show that our framework can recover and optimize reasoning performance even under suboptimal conditions (Section 4.4), noisy questions to demonstrate that TableTextGrad can decipher intent (Section 4.5), and relevant row/column identification experiments (Appendix A.4). We hope these additions provide a more comprehensive evaluation of our approach's effectiveness.\\n\\nRegarding the inclusion of HiTab, we agree that it represents an important and challenging dataset for table QA. However, incorporating HiTab into our evaluation was not feasible within the time constraints of this submission. Moreover, HiTab is hierarchical in nature, which differs significantly from the flat table structure of WikiTableQA, TabFact, and FeTaQA. While TableTextGrad is designed to address flat and semi-structured tables, extending it to hierarchical tables like those in HiTab would require additional adaptations, which we identify as an exciting direction for future work in the new limitations section.\\n\\nWe hope these clarifications and new results offer a reconsideration of your scoring. Thank you again for your thoughtful feedback!\"}", "{\"comment\": \"We sincerely apologize for the delay in our response, as we needed additional time to conduct new experiments, particularly on FeTaQA and noisy questions, to address your valuable feedback thoroughly. 
We have updated the PDF, with all changes and additions highlighted in red for your convenience.\", \"below_are_the_key_updates_we_have_made_to_enhance_the_comprehensiveness_of_our_evaluation_and_address_the_concerns_raised\": [\"FeTaQA Evaluation: Added experiments demonstrating that our framework beats baselines on FeTaQA, a free-form table QA dataset (Appendix A.3).\", \"Poor Prompt Initialization: Included tests showing that TableTextGrad can recover and optimize reasoning performance even under suboptimal initial prompts (Section 4.4).\", \"Noisy Questions: Conducted experiments illustrating that TableTextGrad is capable of deciphering intent and reasoning effectively despite noisy input questions (Section 4.5).\", \"Relevant Row/Column Identification: Added evaluations to demonstrate our framework\\u2019s ability to accurately identify relevant rows and columns (Appendix A.4).\", \"We hope these additions address your concerns. Thank you for your patience and for your thoughtful feedback, which has significantly strengthened this work!\"]}", "{\"comment\": \"Thank you for reviewing our paper and for finding our work easy to understand!\\n\\n- Novelty\", \"the_novelty_of_tabletextgrad_lies_in_extending_textgrad_principles_to_a_new_and_significantly_more_complex_paradigm\": \"the optimization of conditional branching prompt pipelines. Unlike standard applications of TextGrad, which focus on static text sequences, our work adapts and extends this methodology to function-based Chain-of-Thought reasoning in table understanding tasks. This required novel contributions, including the definition of differentiable \\\"feedback spaces\\\" for multi-step reasoning and their integration into an iterative optimization framework capable of refining hierarchical and branching reasoning paths. 
These challenges, which are unique to the structured and conditional nature of table QA tasks, have not been addressed in prior work.\\n\\n- Large Tables\\n\\nWe thank the reviewer for bringing attention to the challenge of handling large tables and for suggesting techniques like table chunking and hierarchical attention mechanisms. While these techniques are indeed promising and worth exploring in future work, we would like to clarify several key points about the performance of TableTextGrad in this context.\\n\\nDespite the inherent challenges posed by large tables, we achieved some performance improvements over its baseline, Chain-of-Table across a fair comparison. The new results are seen in Appendix A.5, where we beat chain-of-table by around 8 points in small tables, 3 points in medium tables, and 6 points on large tables.\\n\\nWhile TableTextGrad achieves gains over existing methods, we recognize that token context and memory constraints remain a bottleneck when working with very large tables. Techniques like table chunking and hierarchical attention mechanisms represent promising directions for addressing these issues in future work and have been added to the limitations section.\\n\\n- Reliance on LLMs\\n\\nWe thank the reviewer for raising concerns regarding the resource-intensive nature of the underlying large language models (LLMs) and the magnitude of the performance improvements achieved by TableTextGrad. We acknowledge that TableTextGrad depends on LLMs like GPT-4o and LLaMA 3.1, which are indeed resource-intensive. However, it is important to note that our framework has the same requirements as other baselines for inference, and the number of training steps is up to the user. TableTextGrad would replace the otherwise required costs of manually tuning prompts. 
\\n\\nThe benchmarks we target, such as WikiTableQA and TabFact, are already saturated with high-performing models, making incremental improvements particularly challenging.\\nFor example, TableTextGrad achieves statistically significant improvements over baseline methods, with gains of +2.3\\\\% on WikiTableQA, setting a new state-of-the-art. Even slight improvements in these competitive benchmarks represent substantial progress due to their complexity.\\n\\nFor further evaluation of our method, we have added a suite of additional evaluations highlighted in red in the updated PDF, including experiments beating baselines on FeTaQA (Appendix A.3), poor prompt initialization to show that our framework can recover and optimize reasoning performance even under suboptimal conditions (4.4), noisy questions to show that TableTextGrad can decipher intent (4.5), and relevant row/columns identification (Appendix A.4). We hope that this offers a more comprehensive evaluation of our effectiveness! \\n\\nThank you again for your comments--we hope that our responses offer a reconsideration of your scoring.\"}", "{\"title\": \"Response to the rebuttal\", \"comment\": \"Thanks for your clarification. However, I do not think it directly resolves my concerns. As a result, I will keep my rating.\"}" ] }
9mBodivRIo
LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality
[ "Kojiro Takeyama", "Yimeng Liu", "Misha Sra" ]
Understanding human locomotion is crucial for AI agents such as robots, particularly in complex indoor home environments. Modeling human trajectories in these spaces requires insight into how individuals maneuver around physical obstacles and manage social navigation dynamics. These dynamics include subtle behaviors influenced by proxemics - the social use of space, such as stepping aside to allow others to pass or choosing longer routes to avoid collisions. Previous research has developed datasets of human motion in indoor scenes, but these are often limited in scale and lack the nuanced social navigation dynamics common in home environments. To address this, we present LocoVR, a dataset of 7000+ two-person trajectories captured in virtual reality from over 130 different indoor home environments. LocoVR provides accurate trajectory and precise spatial information, along with rich examples of socially-motivated movement behaviors. For example, the dataset captures instances of individuals navigating around each other in narrow spaces, adjusting paths to respect personal boundaries in living areas, and coordinating movements in high-traffic zones like entryways and kitchens. Our evaluation shows that LocoVR significantly enhances model performance in three practical indoor tasks utilizing human trajectories, and demonstrates predicting socially-aware navigation patterns in home environments.
[ "Dataset", "Human trajectory", "Indoor locomotion", "Virtual reality", "Social motion behavior" ]
Accept (Poster)
https://openreview.net/pdf?id=9mBodivRIo
https://openreview.net/forum?id=9mBodivRIo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxbeov6kW4", "ywIxvEWvPd", "xKpi4fwzI5", "xE57dp8pmU", "scF8kbxNVx", "rQwQTRK54s", "rPTKRglk7B", "pjAEJcsQFn", "mgVeeIvtah", "le4MG7iPel", "jmUVayvDIw", "jS3OkD9FjA", "iuVpmN08QK", "if4O33Brb5", "g873NVrSkb", "enULmdlmua", "ZJqtCnW93T", "YPJbRwvGsD", "XXVzP2b2pQ", "V1pBtC3tSi", "RP9GrRLgmL", "PROCEjpSwP", "O0mTQPCCBd", "HzjHtflmit", "GOrRBa21JG", "GMULFeyR42", "F5wvF4uwTX", "F2WZzy64Su", "EHzZQyvLec", "ED9Knp0Umn", "Bcz8dm93or", "A3wXncbwmh", "9Pxyo33Zlp", "9Hu4oRjDhC", "98BW7QX8Ox", "1Dr3GNw9Le", "0l7NE3VPIw" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734738072889, 1732344386781, 1733193936322, 1730697729324, 1732548066883, 1732691714895, 1732687551371, 1732774359893, 1732344228854, 1732547866068, 1732782294793, 1732271164673, 1730693840589, 1732242078180, 1732242835343, 1732241887933, 1732242492952, 1732344300232, 1732691830506, 1730696816932, 1732643393200, 1732692090171, 1730886135411, 1732548006607, 1732274950653, 1732242381767, 1732242772872, 1732242999047, 1737524227077, 1732243094126, 1732548123416, 1732242683619, 1732242527766, 1730406872690, 1732548248399, 1732691952467, 1733193860211 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12967/Area_Chair_xEea" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_6iZX" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_1K21" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_PhCB" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_Fpz9" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_1K21" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_6iZX" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_AaG4" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Reviewer_PhCB" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ], [ "ICLR.cc/2025/Conference/Submission12967/Authors" ] ], "structured_content_str": [ 
"{\"metareview\": \"The submission is about a new dataset of two-person trajectories captured in VR. Reviewers acknowledged the usefulness of the dataset; they also raised some concerns, primarily about the validity of data collected in VR. Post rebuttal, most reviewers were convinced and supported acceptance. Reviewer AaG4 remained negative but did not engage in discussions. The AC agreed with the majority and recommended acceptance. The authors should revise the submission in the camera ready to address the remaining concerns.\", \"additional_comments_on_reviewer_discussion\": \"The discussion convinced most reviewers to support the submission's acceptance.\"}", "{\"title\": \"Common response to Reviewer and the Area Chair (1/3)\", \"comment\": \"## **Dear Reviewers and the Area Chair,**\\n\\nWe sincerely appreciate the time and effort you dedicated to reviewing our work. We have made our best effort to address the reviewers' concerns and improve the quality of the paper. To facilitate a clearer understanding of our work, we provide a shared Q&A section for the reviewers and the area chair, along with additional experimental results.\\n\\n---\\n\\n## **General questions:**\\n\\n---\\n\\n- **What is the novelty and contribution of our work?:**\\n \\nOur contribution is the creation of the first dataset that records **a wide variety of human trajectories reflecting social motion dynamics within diverse indoor environments**. To achieve this diversity efficiently, we utilized a VR-based data collection system. LocoVR is designed to facilitate research on indoor human trajectories and **serves as a foundational resource for exploring relationships between motion patterns, goal positions, and indoor scene geometries**. Our experiments highlight LocoVR's utility in key tasks, such as socially aware navigation and goal prediction. 
Additionally, the dataset holds potential for extended applications, such as inferring indoor layouts from trajectory data or studying space utilization in shared environments. By focusing on indoor social motion behaviors, LocoVR provides a unique resource for advancing research on human-centered motion modeling, particularly in confined, interaction-driven settings.\\n\\n---\\n\\n- **What is the advantage in collecting locomotion data in VR?:**\\n\\n**Challenge in indoor scenes:**\\nCollecting data in physical home environments is **inherently time-intensive, requiring experimenters and participants to travel to the designated location, capture the room layout, set up cameras**, and conduct the experiment. Additionally, overhead cameras mounted on ceilings with limited heights are prone to blind spots caused by obstacles, making it **challenging to accurately track participants' positions**. **These challenges have contributed to the lack of diverse indoor locomotion datasets across various scenes.**\\n\\n**Advantage in VR:**\\nIn contrast, our VR system enables **seamless scene switching with a single button click, eliminating the need for physical layout measurements and ensuring precise capture of participants' positions**. Thus, collecting two-person trajectory data in VR offers significant advantages, in terms of **efficiency and diversity**, allowing for the collection of trajectory data across various scenarios in a controlled and repeatable manner. \\n\\n---\\n\\n\\n- **Is there any influence of the gap between VR/Real on the dataset?:**\\n\\nWe think that the impact of the gap between VR and real-world environments varies depending on the task type. For locomotion tasks, we argue that this impact is minimal and does not affect the overall contribution of our dataset for the following reasons:\\n\\n(1) In our experiment, **participants were aware that the virtual avatars were synchronized with real humans sharing the same physical space**. 
Also, **the avatar enables participants to perceive the relative position between their body and surrounding objects**. This awareness **discouraged socially or physically inappropriate behavior, mitigating the potential impact of the VR/real gap**, as demonstrated in a recent study on VR locomotion [1][2]. In addition, we have introduced a filter to detect instances of users passing through virtual objects to remove such data from the dataset.\\n\\n(2) Our evaluation used locomotion data collected in physical spaces as test data. Models trained on the LocoVR dataset outperformed those trained on other physically collected datasets (GIMO/THOR-MAGNI), demonstrating that **VR-collected data is effective when applied to real-world scenarios.**\\n\\n[1] H. Yun, Y. Watanabe, A. Yamada, \\\"Exploring the Role of Expected Collision Feedback in Crowded Virtual Environments,\\\" Proc. IEEE Conf. Virtual Reality and 3D User Interfaces, 2024.\\n\\n[2] A. L. Simeone, I. Mavridou, and W. Powell, \\\"Altering user movement behaviour in virtual environments,\\\" IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 4, pp. 1312\\u20131321, 2017.\"}", "{\"title\": \"Summary of the review (1/2)\", \"comment\": [\"# **Summary of the review: Key strengths highlighted by the reviewers**\", \"**1. Significance of the dataset:**\", \"Introduction of motion proxemics in a large-scale dataset which is **useful for downstream tasks** such as studying human-human interactions and potentially human-robot interactions. (Reviewer 1K21)\", \"Data of two-person trajectories is **useful to study motion proxemics** (Reviewer 1K21)\", \"The extensive collection of two-person human trajectory (with motion capture) data in indoor scenes, is a **valuable resource for the community**. (Reviewer 6iZX)\", \"This open-source dataset could **benefit the research community focused on social navigation**. 
(Reviewer AaG4)\", \"I can see implications of this work not just for virtual agents, games etc, but also **as we move to further robotic presence in our homes this type of dataset can help train their trajectories**. (Reviewer Fpz9)\", \"This paper captures two-person goal-reaching motions which include the social navigation behaviors such as adjusting the path to respect personal boundaries or side steps to give way to another person. Such social navigation behavior is **not covered in most previous datasets** and is **important for understanding multi-human social navigation and potential human-robot interactions.** (Reviewer PhCB)\", \"**2. Significance of the VR-based data collection approach:**\", \"Utilizing virtual reality for data collection is a **promising approach**, allowing more diverse 3D scenes when resources are limited. (Reviewer 6iZX)\", \"The dataset is based on real human subject studies, and the use of VR environments **facilitates data collection efforts**. (Reviewer AaG4)\", \"The proposed VR capture solution **eliminates the high cost of physically setting up indoor scenes** and capturing human movements, which facilitates scaling up locomotion capture to many more scenes compared to previous datasets. (Reviewer PhCB)\", \"**3. Validity of the evaluation:**\", \"**Rigorously quantitatively evaluated against strong baselines**; **baseline configurations and settings are fair and well-documented** (Reviewer 1K21)\", \"**The dataset is well evaluated.** The paper evaluates the dataset on three trajectory-based tasks\\u2014global path prediction, trajectory prediction, and goal prediction. The results show that models trained on LocoVR outperform those trained on other datasets, particularly in predicting realistic, socially aware navigation paths in complex environments. (Reviewer Fpz9)\", \"**4. Other comments:**\", \"The authors introduce an **exciting/ relevant problem**. 
The problem is **well-motivated while the execution and evaluations are strong**. I see **many potential downstream applications** and I believe that this will make a **huge impact in the robotics field**. (Reviewer 1K21)\"]}", "{\"summary\": \"The paper introduces LocoVR, a virtual reality dataset that captures approximately 7,000 two-person trajectories across 130 indoor home scenes. The authors demonstrate the utility of LocoVR through three applications: global path prediction, trajectory prediction, and goal area prediction. The model trained on this dataset exhibits socially and geometrically aware navigation patterns within indoor scenes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The extensive collection of two-person human trajectory (with motion capture) data in indoor scenes, is a valuable resource for the community.\\n2. Utilizing virtual reality for data collection is a promising approach, allowing more diverse 3D scenes when resources are limited.\\n3. The data and code will be released.\", \"weaknesses\": \"1. While using VR to collect human trajectory data is helpful, this paper would benefit from a discussion in the related works section about VR and human motion. For instance, referencing works like \\\"QuestEnvSim: Environment-aware Simulated Motion Tracking from Sparse Data\\\" in SIGGRAPH 2023 which uses VR for motion tracking and \\\"Strategy and Skill Learning for Physics-based Table Tennis Animation\\\" in SIGGRAPH 2024 which involves interaction between human and humanoid agents.\\n2. I notice authors utilize motion capture to provide whole body motion, and I wonder the reason to consider only experiments of path, trajectory and goal prediction. The occasionally unnatural motion observed in the video could be explained.\\n3. The use of A* baselines seems inappropriate for two-person interaction scenarios. I notice this dataset mainly focuses on obstacle avoidance. 
There appears to be a lack of interactive behaviors between the two persons. It may not be enough if two persons just operate independently and avoid the other person within the same space. I think it doesn't reflect scenarios often seen in real life. Can the authors provide more information on the distribution of action types within the dataset? Given this is a dataset paper, more statistics and descriptions would be beneficial.\", \"questions\": \"The paper seems to focus on locomotion. Without interactions like sitting on a sofa or standing up from a chair, does the goal prediction remain compelling?\\nWith many figures presented in 2D planes, would a bird's eye view semantic map provide enough information for the prediction tasks? What's the importance of 3D geometry?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion period approaches its conclusion in the coming days\", \"comment\": \"**We have done our utmost to address the concerns raised and improve our work based on your valuable comments.**\\n\\n**If you have any additional questions or points of clarification, we would be delighted to engage in further discussion to ensure that all your concerns are thoroughly resolved.**\\n\\n**Thank you once again for your invaluable contributions, and we look forward to hearing from you.**\"}", "{\"title\": \"24 hours remaining before the paper revision deadline\", \"comment\": \"Dear Reviewer AaG4,\\n\\nWe hope this message finds you well. With 24 hours remaining before the revision deadline, we kindly request your feedback on the remaining concerns based on our responses. We understand the review process is time-consuming, but your feedback is invaluable in shaping the final outcome. Thank you again for your time and effort in reviewing our work.\"}", "{\"comment\": \"Thank you for your thoughtful review and for recognizing our improvements. 
We greatly appreciate your feedback and the revised score reflecting your positive evaluation.\"}", "{\"comment\": \"Thank you for the response! A1 and A3 was well addressed. Additionally, thanks for the insight on relationships within the user study! A2 doesn't quite explain the phenomenon of the probability distribution narrowing down.\\n\\nI will maintain my score as I believe that this dataset will be useful to the community!\"}", "{\"title\": \"Common response to Reviewers and the Area Chair (3/3)\", \"comment\": \"---\\n\\n## **Additional experiment results:**\\n\\n---\\n\\n- **Influence of scene information types - Difference in performance with Binary obstacle map / Semantic map / Height map:**\\n\\nWhile our main claim is not on the geometry with 3D and semantic information, we expect these features to enhance the utility of our dataset. To explore this, we conducted a small experiment to evaluate how replacing binary obstacle maps with 3D height maps and semantic maps affects performance.\\n\\nTable.1 presents the results of the global path prediction task using the UNet+A* model. Each model was trained and tested on LocoVR with binary maps, height maps, and semantic maps, over three trials. **The results indicate that models trained with height and semantic maps clearly outperformed those trained with binary maps.**\\n\\nAlthough we do not yet have a detailed analysis of these findings, they **potentially suggest that human trajectories could be influenced by object attributes inferred from height and semantic information**. For instance, participants might unconsciously maintain a distance from movable objects, such as chairs or doors, or adjust their trajectories based on the visual clearance provided by different object types. For example, walls, kitchen counters, and low tables offer varying degrees of vision clearance, with lower clearance potentially exerting subtle psychological pressure on trajectory planning. 
A detailed analysis of the influence of varying scene information on human trajectories could provide valuable insights from the perspectives of the cognitive and behavioral sciences. We have included this result and the discussion in the revised manuscript to highlight the further potential of our dataset. (Appendix.D.2, highlighted in blue)\\n\\nTable.1: Accuracy of global path prediction in different ranges of traveled distance (mean value \\u00b1 std over 3 trials)\\n| | 0m < d \\u2264 3m | 3m < d \\u2264 6m | 6m < d | \\n|--------------|--------------|--------------|--------------|\\n| **binary map** | 0.138\\u00b10.0006 | 0.183\\u00b10.0024 | 0.286\\u00b10.0113 | \\n| **semantic map** | 0.137\\u00b10.0004 | 0.170\\u00b10.0046 | 0.216\\u00b10.0278 | \\n| **height map** | **0.136\\u00b10.0011** | **0.165\\u00b10.0068** | **0.201\\u00b10.0219** |\\n\\n---\\n\\n- **Demographics of participants and variations in locomotion pairs:**\\n\\nThere were 32 participants in total, comprising 21 males and 11 females, with ages ranging from 18 to 42. From this pool, pairs were formed to conduct 25 experiments, each involving a unique pair (Table.2). The experiments included various combinations of male-male, female-female, and male-female pairs, as well as pairs of friends and non-friends, as shown in Table.3.\\n\\nAs the reactions between pairs in close proximity are influenced by attributes and interpersonal relationships, **further data analysis may provide new insights into the relationship between these attributes, relationships, and behavioral patterns**. It could be an intriguing study from the perspective of the cognitive and behavioral sciences. We have added the information shown in this reply to the revised manuscript. 
(Appendix.I.2, highlighted in blue)\\n\\nTable.2: User demographics (categorized by gender and age)\\n| Age | Male | Female | \\n|------------|----------|----------|\\n| **under 20** | 5 | 3 |\\n| **20 to 29** | 15 | 8 |\\n| **over 30** | 1 | 0 |\\n\\n\\nTable.3: Diversity of pairs (categorized by gender and relationship)\\n| | Male-Male | Female-Female | Male-Female | \\n|--------------|-------------|---------------|-------------|\\n| **Friends** | 2 | 2 | 5 | \\n| **Non-friends** | 9 | 1 | 6 |\\n\\n\\n---\\n\\n- **Additional data statistics in LocoVR:**\\n\\nIn the revision, we have included additional statistics, following the prior work on the social navigation dataset [1], which serves as one of the benchmarks in our evaluation. We have included **Path efficiency (trajectory complexity)**, **Motion speed**, and **Minimal distance between individuals**, as outlined in the referenced paper. Additionally, we have introduced **Relative speed between individuals** and **Number of speed changes along the trajectory** to further quantify the characteristics of our dataset. The updates are included in Appendix I.1 of the revised manuscript, highlighted in blue.\\n\\n[1] Schreiter, Tim, et al. \\\"TH\\u00d6R-MAGNI: A large-scale indoor motion capture recording of human movement and robot interaction.\\\" The International Journal of Robotics Research (2024).\"}
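As an illustrative aside (not code from the LocoVR release), the trajectory statistics named above — path efficiency, motion speed, and minimal distance between individuals — can be computed from synchronized 2D trajectories roughly as follows. The function names and the list-of-`(x, y)` data layout are assumptions for the sketch:

```python
import math

def _step_lengths(traj):
    """Distances between consecutive 2D points of one trajectory."""
    return [math.dist(p, q) for p, q in zip(traj, traj[1:])]

def path_efficiency(traj):
    """Straight-line distance divided by traveled distance (1.0 = perfectly direct)."""
    traveled = sum(_step_lengths(traj))
    direct = math.dist(traj[0], traj[-1])
    return direct / traveled if traveled > 0 else 1.0

def min_interpersonal_distance(traj_a, traj_b):
    """Minimum distance between two time-synchronized trajectories."""
    return min(math.dist(p, q) for p, q in zip(traj_a, traj_b))

def mean_speed(traj, dt):
    """Mean speed assuming a fixed sampling interval dt (seconds)."""
    steps = _step_lengths(traj)
    return (sum(steps) / len(steps)) / dt if steps else 0.0
```

A straight path gives an efficiency of 1.0, while detours (e.g., yielding a path to the other person) push it below 1.0, which is one way such complexity measures can be read.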
We greatly appreciate your thoughtful and supportive comments in the review process.\\n\\nRegarding A2, we aim to provide a more comprehensive explanation to address the reviewer's question as follows. We hope this not only clarifies the reviewer's concerns but also benefits others who may have similar questions in the review.\\n\\n---\\n- **Additional explanation for A2:** \\n\\nTo effectively predict goals based on human trajectories and scene layouts, it is crucial to accurately model the interdependent relationships between human trajectories, goal positions, and scene layouts. This necessitates datasets containing a large number of diverse combinations of human trajectories, goal positions, and scene layouts, collected across a wide variety of scenes to ensure robust generalization performance.\\n\\nHowever, existing datasets face challenges such as limited scene variation [1,2], lack of scene complexity [1], or insufficient numbers of trajectories [2], which often result in degraded performance when applied to unseen scenes. These limitations stem from the inherent difficulties of collecting data in real-world environments, where the process is time-consuming and constrained by accuracy issues due to blind spots and other observational limitations.\\n\\nIn contrast, the LocoVR dataset contains a large number of trajectories spanning over 130 scenes, enabling improved generalization performance by effectively modeling the general relationships between human trajectories, goal positions, and scene layouts in indoor home environments.\\n\\nOur experiments demonstrate that models trained on LocoVR outperform those trained on other datasets[1,2] in goal prediction tasks, highlighting its advantages for achieving superior performance in diverse and unseen scenarios. (4.5.3 Table.4, Appendix D.1 Table.10)\\n\\n**Reference:**\\n\\n[1]Schreiter, Tim, et al. 
\\\"TH\\u00d6R-MAGNI: A large-scale indoor motion capture recording of human movement and robot interaction.\\\" The International Journal of Robotics Research (2024).\\n\\n[2]Zheng, Yang, et al. \\\"Gimo: Gaze-informed human motion prediction in context.\\\" European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.\"}", "{\"comment\": \"Thanks for the response and discussions. Given the quality of the provided full-body pose data, I still recommend positioning the dataset as a trajectory dataset or clearly stating that the full-body poses are auxiliary, inaccurate estimations from sparse trackers.\"}", "{\"summary\": \"LocoVR is a virtual reality-based dataset aimed at improving the modeling of human locomotion in complex indoor environments. This dataset specifically focuses on multi-user indoor navigation, capturing over 7,000 two-person trajectories within more than 130 different home-like scenes. The main goal of the LocoVR dataset is to enhance the ability of AI systems, like home robots, to understand and predict human movement patterns that incorporate both spatial and social navigation dynamics.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"By using VR, the dataset captures detailed spatial data and full-body motion in diverse home environments. The VR setup enables controlled capture of social navigation behaviors, such as maintaining personal space and avoiding collisions in shared spaces like entryways.\\n\\nThe dataset is well evaluated. The paper evaluates the dataset on three trajectory-based tasks\\u2014global path prediction, trajectory prediction, and goal prediction. 
The results show that models trained on LocoVR outperform those trained on other datasets, particularly in predicting realistic, socially aware navigation paths in complex environments.\\n\\nI can see implications of this work not just for virtual agents, games etc, but also as we move to further robotic presence in our homes this type of dataset can help train their trajectories.\", \"weaknesses\": \"I m not sure about the Non-Verbal Social Cues: such as gaze direction or facial expressions\\u2014that influence social navigation.\\n\\nThe two agent approach perhaps limits also its scope in multi-user indoor scenarios common in real homes with multiple occupants.\", \"questions\": \"I would like future work to discuss how this type of work can be merged with other approaches like weighted interpolations to define trajectories of avatars indoors: https://www.microsoft.com/en-us/research/publication/avatarpilot-decoupling-one-to-one-motions-from-their-semantics-with-weighted-interpolations/\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer PhCB (1/2)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback, which has greatly enhanced the clarity and quality of our paper.\\nIn response to the reviewer's concerns, we have carefully addressed them as outlined below.\\n\\n---\\n\\n> **R1:\\nAlthough this paper claims to provide full body pose data (L20), the human motion capture is far from realistic according to the supplementary video (0:00-0:30). If aiming for capturing full body poses, it may be necessary to change from the HTC VIVE tracking to marker-based motion capture as in CIRCLE (Araujo et al., 2023). 
With the current presented results, I recommend removing the claims of full body pose data since all experiments only use trajectory data.**\\n\\n**A1:**\\n\\n**Full-body pose is auxiliary information but not our main focus:**\\nWe incorporated the motion capture system primarily to visualize avatar motions, allowing participants to recognize the movements of others. Consequently, including full-body motion data in the dataset is not our primary focus. In the main paper, we mentioned full-body motion as auxiliary information included in the dataset, noting that head pose (yaw direction) from the raw motion tracking data was incorporated to maximize performance in our evaluation tasks. The head pose offers valuable insights for inferring human intentions, which facilitates more accurate future predictions, as demonstrated in the ablation study (Appendix C).\\n\\n**Inaccuracies in avatar motions:**\\nAs the reviewer noted, avatar motions may occasionally display unnatural joint movements. This issue arises from the performance of the inverse kinematics (IK) software (FINAL-IK), which reconstructs avatar motion using sparse motion trackers placed on the body (head, waist, hands, and feet).\\nHowever, these slight inaccuracies in body motion do not impact the contribution of our experiment, as our focus is on room-scale human dynamics rather than fine-grained body movements.\\nOur dataset currently includes raw data from sparse motion trackers, which is highly accurate (within a few millimeters). For users requiring precise avatar motion, applying state-of-the-art IK algorithms to the raw tracker data would reconstruct more accurate avatar movements than those displayed in our video.\\n\\n---\\n\\n> **R2:\\nThe VR capture system is limited to simple behaviors assuming a flat floor scene and no contact-based close object interactions. It can only capture locomotion or reaching behaviors as in CIRCLE (Araujo et al., 2023). 
The VR capture system cannot work for behaviors like lying on a sofa or walking up stairs. When humans try to do such interactions in VR, they are actually interacting with air in the real world and will fall. This virtual-real inconsistency can also cause the subjects to slightly walk into obstacles, as discussed in the paper.**\\n\\n**A2:**\\nWe agree with the reviewer that there is a gap between VR and reality, and it could influence human behavior. However, we believe this gap has minimal influence on our locomotion experiment and does not impact the overall contribution of our dataset, for the following reasons:\\n\\n(1) In our experiment, participants were aware that the virtual avatars were synchronized with real humans sharing the same physical space. Also, the avatar enables participants to perceive the relative position between their bodies and surrounding objects. This awareness discouraged socially or physically inappropriate behavior, mitigating the potential impact of the VR/real gap, as demonstrated in recent studies on VR locomotion [1][2]. In addition, we have introduced a filter to detect instances of users passing through virtual objects to remove such data from the dataset.\\n\\n(2) Our evaluation used locomotion data collected in physical spaces as test data. Models trained on the LocoVR dataset outperformed those trained on other physically collected datasets (GIMO/THOR-MAGNI), demonstrating that VR-collected data is effective when applied to real-world scenarios. (Tested on LocoReal: Main paper Section.4, tested on GIMO: Appendix D.1)\\n\\nWe have included the above discussion in the revised manuscript to clarify the influence of the VR/real gap issue. (Appendix.J, highlighted in blue)\\n\\n[1] H. Yun, Y. Watanabe, A. Yamada, \\\"Exploring the Role of Expected Collision Feedback in Crowded Virtual Environments,\\\" Proc. IEEE Conf. Virtual Reality and 3D User Interfaces, 2024.\\n\\n[2] A. L. Simeone, I. Mavridou, and W. 
Powell, \\\"Altering user movement behaviour in virtual environments,\\\" IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 4, pp. 1312\\u20131321, 2017.\"}", "{\"title\": \"Response to Reviewer 6iZX (3/3)\", \"comment\": \"---\\n\\n> **R5:\\nWith many figures presented in 2D planes, would a bird's eye view semantic map provide enough information for the prediction tasks? What's the importance of 3D geometry?**\\n\\n**A5:**\\nWhile our main claim is not on the geometry with 3D and semantic information, we expect these features to enhance the utility of our dataset. To explore this, we conducted a small experiment to evaluate how replacing binary obstacle maps with 3D height maps and semantic maps affects performance.\\n\\nTable.1 presents the results of the global path prediction task using the UNet+A* model. Each model was trained and tested on LocoVR with binary maps, height maps, and semantic maps, over three trials. The results indicate that models trained with height and semantic maps clearly outperformed those trained with binary maps.\\n\\nAlthough we do not yet have a detailed analysis of these findings, they potentially suggest that human trajectories could be influenced by object attributes inferred from height and semantic information. For instance, participants might unconsciously maintain a distance from movable objects, such as chairs or doors, or adjust their trajectories based on the visual clearance provided by different object types. For example, walls, kitchen counters, and low tables offer varying degrees of vision clearance, with lower clearance potentially exerting subtle psychological pressure on trajectory planning.\\nA detailed analysis on influence of variational scene information on the human trajectories could provide valuable insights from the perspectives of cognitive and behavioral sciences.\\nWe have included this result and the discussion in the revised manuscript to mention further potential of our dataset. 
(Appendix.D.2, highlighted in blue)\\n\\nTable.1: Accuracy of global path prediction in different range of traveled distance (mean value \\u00b1 std over 3 trials)\\n| | 0m < d \\u2264 3m | 3m < d \\u2264 6m | 6m < d | \\n|--------------|--------------|--------------|--------------|\\n| **binary map** | 0.138\\u00b10.0006 | 0.183\\u00b10.0024 | 0.286\\u00b10.0113 | \\n| **semantic map** | 0.137\\u00b10.0004 | 0.170\\u00b10.0046 | 0.216\\u00b10.0278 | \\n| **height map** | **0.136\\u00b10.0011** | **0.165\\u00b10.0068** | **0.201\\u00b10.0219** |\\n\\n---\\n\\n**[Final Note]** Thank you once again for the insightful review. We believe the revision has strengthened the quality of our paper. If there is anything else we can clarify or elaborate on, please do not hesitate to let us know.\"}", "{\"title\": \"Response to Reviewer Fpz9\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback, which has greatly enhanced the clarity and quality of our paper.\\nIn response to the reviewer's concerns, we have carefully addressed them as outlined below.\\n\\n---\\n\\n> **R1:\\nI'm not sure about the Non-Verbal Social Cues: such as gaze direction or facial expressions\\u2014that influence social navigation.**\\n\\n**A1:**\\nWe consider the lower fidelity of the SMPL avatar as a potential factor contributing to the gap between VR and real-world scenarios, although its impact appears to be minor. For example, we often rely on observing others' gaze to predict their heading direction or use facial expressions to communicate when yielding a path. 
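As a side note on the UNet+A* baseline referenced above: the planning stage can be sketched as A* search over a grid cost map, where a binary obstacle map is simply the special case of uniform cell costs. This is an illustrative sketch under stated assumptions, not the authors' implementation; the `cost_map` layout (`None` marks an obstacle, and costs of at least 1 keep the Manhattan heuristic admissible) is an assumption:

```python
import heapq
import itertools

def astar(cost_map, start, goal):
    """A* on a 4-connected 2D grid. cost_map[r][c] is the cost of entering a
    cell; None marks an obstacle. Returns a list of (row, col) or None."""
    rows, cols = len(cost_map), len(cost_map[0])

    def h(p):  # Manhattan distance: admissible when all cell costs are >= 1
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = itertools.count()  # tiebreaker so heap entries never compare nodes
    open_set = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}
    g_best = {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:  # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and cost_map[nr][nc] is not None:
                ng = g + cost_map[nr][nc]
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None  # goal unreachable
```

In the binary case every free cell costs 1; a learned cost map would instead assign higher costs to cells the network predicts humans avoid, steering the planned path toward human-like routes.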
Incorporating more realistic and expressive avatars to enhance the integrity of the VR-based data collection framework remains an avenue for future work.\\n\\n---\\n\\n> **R2:\\nThe two agent approach perhaps limits also its scope in multi-user indoor scenarios common in real homes with multiple occupants.**\\n\\nWe agree that variations in locomotion patterns involving more than two individuals may occur in real-world scenarios, particularly in open public spaces. However, our research focuses on private indoor settings where the number of pedestrians is typically very limited, and individuals generally move independently. We believe this focus does not diminish the contribution of our dataset, as scenarios in private indoor settings are both common and essential in real-world contexts, yet they have been largely overlooked by most existing datasets. Please see the following for a detailed discussion.\\n\\n**A2:**\\n - **Our goal:**\\nIn contrast to conventional studies emphasizing crowd dynamics in open public spaces, our research primarily focuses on social motion behaviors in room-scale private settings. In such environments, individuals exhibit more individualized social behaviors constrained by narrow geometries, such as taking longer detours to avoid others, yielding paths, or maintaining social distance while passing.\\nGiven the lack of existing datasets addressing this specific problem setting, our goal is to provide a fundamental resource that is both targeted and extensible, serving as a stepping stone for future datasets that could scale up to include more individuals or different interaction settings.\\n\\n - **Two-person navigation scenarios in real-world relevance:**\\nIn home environments, interactions involving more than two people are relatively uncommon. Recent census data indicates that 60% of households in the U.S. consist of two people or fewer. 
Even in households with three or more members, scenarios where more than two individuals navigate a space simultaneously are rare, as residents generally move independently rather than engaging in collaborative or coordinated movements within the private setting. Similarly, other private spaces such as small offices, clinics, or hotel rooms are common examples of everyday environments where two-person navigation is prevalent.\\nThis relevance underscores the utility of our dataset for studying interaction dynamics that are directly applicable to these contexts.\\n\\n - **Potential utility of two-person navigation dataset:**\\nTwo-person interactions are foundational to understanding more complex multi-person dynamics, as they allow us to study detailed interpersonal behaviors such as proxemics, trajectory negotiation, and mutual space adaptation without the confounding variables introduced by larger groups.\\nIn the future, researchers can build upon our dataset to study two-person interactions in isolation or as a basis for modeling interactions in more complex, multi-person environments. \\n\\n\\nWe thank the reviewer for highlighting this point, and we have ensured that our intentions and the real-world relevance of two-person interactions are reflected in the paper.\\n(Appendix.I.3, highlighted in blue)\\n\\n---\\n\\n> **R3:\\nI would like future work to discuss how this type of work can be merged with other approaches like weighted interpolations to define trajectories of avatars indoors: https://www.microsoft.com/en-us/research/publication/avatarpilot-decoupling-one-to-one-motions-from-their-semantics-with-weighted-interpolations/**\\n\\n**A3:**\\nWe thank the reviewer for providing insightful suggestions to enhance the clarity of our contribution. We have added the discussion of future applications, referring to the paper the reviewer suggested. 
(Section.5, highlighted in blue)\\n\\n\\n---\\n\\n**[Final Note]** Thank you once again for the insightful review. We believe the revision has strengthened the quality of our paper. If there is anything else we can clarify or elaborate on, please do not hesitate to let us know.\"}", "{\"title\": \"Response to Reviewer 1K21 (1/2)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback, which has greatly enhanced the clarity and quality of our paper. In response to the reviewer's concerns, we have carefully addressed them as outlined below.\\n\\n---\\n\\n> **R1:\\nPotential Overfitting to VR-Specific Biases:\\nI am curious what the authors have done to further minimize the gap between real-world scenes vs VR scenes. Are there any obstacle perception features e.g., vibrational feedback when participants bump into objects in the scene?**\\n\\n**A1:**\\nWe consider the virtual avatar an effective tool to mitigate the VR/real gap issue. In addition, we introduced a filter to remove data with inappropriate behaviors. In the following, we would like to discuss the influence of the gap issue.\\n\\n(1) In our experiment, participants were aware that the virtual avatars were synchronized with real humans sharing the same physical space. Also, the avatar enables participants to perceive the relative position between their bodies and surrounding objects. This awareness discouraged socially or physically inappropriate behavior, mitigating the potential impact of the VR/real gap, as demonstrated in recent studies on VR locomotion [1][2]. In addition, we have introduced a filter to detect instances of users passing through virtual objects to remove such data from the dataset.\\n\\n(2) Our evaluation used locomotion data collected in physical spaces as test data. 
Models trained on the LocoVR dataset outperformed those trained on other physically collected datasets (GIMO/THOR-MAGNI), demonstrating that VR-collected data is effective when applied to real-world scenarios.\\n\\nWe also agree with the reviewer that user interaction cues such as haptic devices could reduce the gap between VR and reality. We anticipate that future advancements in VR user interfaces (VR-UI) will further contribute to bridging this gap.\\n\\nWe have included the above discussion in the revised manuscript to clarify the influence of the VR/real gap issue. (Appendix.J, highlighted in blue) \\n\\n[1] H. Yun, Y. Watanabe, A. Yamada, \\\"Exploring the Role of Expected Collision Feedback in Crowded Virtual Environments,\\\" Proc. IEEE Conf. Virtual Reality and 3D User Interfaces, 2024.\\n\\n[2] A. L. Simeone, I. Mavridou, and W. Powell, \\\"Altering user movement behaviour in virtual environments,\\\" IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 4, pp. 1312\\u20131321, 2017.\\n\\n---\\n\\n> **R2:\\nQualitative results: \\u2018as the trajectory progresses, the probability distribution of the goal area narrows down near the true goal object\\u2019\\nI think that it is reasonable to assume that humans narrow the probability distribution in LocoVR closer to the true goal object. However, I would like to understand exactly why this observation arises from LocoVR\\u2019s dataset. Is it merely the size of the dataset or is it the difference in data collected in LocoVR vs the other datasets? Are the authors claiming that this is a strength as it mimics the probability distribution of human trajectories better? If so, I would like to see analysis of this phenomenon in the different datasets.**\\n\\n**A2:**\\nThe improved performance in goal prediction using LocoVR compared to other datasets can be attributed to two key factors: (1) the dataset\\u2019s large size and diverse range of scenes, and (2) the complexity of those scenes. 
The extensive variety of human trajectories captured in diverse and complex environments helps the model learn the relationship between trajectories, scene layouts, and goal positions. This enables robust performance even in unseen home environments.\\n\\nTo validate this, we evaluated the model trained on LocoVR using GIMO as the test dataset (Appendix.D.1, Table.10). The results consistently show that the model trained on LocoVR outperforms models trained on other datasets even in unseen scenes. It highlights LocoVR\\u2019s ability to enhance the model\\u2019s generalization by providing rich and diverse scene and trajectory data.\"}", "{\"title\": \"Common response to Reviewer and the Area Chair (2/3)\", \"comment\": \"---\\n\\n- **What is the motivation of the problem setting? Why two-person?:**\\nWe agree that variations in locomotion patterns involving more than two individuals may occur in real-world scenarios, particularly in open public spaces. However, our research focuses on private indoor settings where the number of pedestrians is typically very limited, and individuals generally move independently. We believe this focus does not diminish the contribution of our dataset, as scenarios in private indoor settings are both common and essential in real-world contexts, yet they have been largely overlooked by most existing datasets. Please see the following for a detailed discussion.\\n\\n - **Our goal:** In contrast to conventional studies emphasizing crowd dynamics in open public spaces, our research primarily focuses on social motion behaviors in room-scale private settings. In such environments, individuals exhibit more individualized social behaviors constrained by narrow geometries, such as taking longer detours to avoid others, yielding paths, or maintaining social distance while passing. 
Given the lack of existing datasets addressing this specific problem setting, our goal is to provide a fundamental resource that is both targeted and extensible, serving as a stepping stone for future datasets that could scale up to include more individuals or different interaction settings.\\n\\n - **Two-person navigation scenarios in real-world relevance:** In home environments, interactions involving more than two people are relatively uncommon. Recent census data indicates that 60% of households in the U.S. consist of two people or fewer. Even in households with three or more members, scenarios where more than two individuals navigate a space simultaneously are rare, as residents generally move independently rather than engaging in collaborative or coordinated movements within the private setting. Similarly, other private spaces such as small offices, clinics, or hotel rooms are common examples of everyday environments where two-person navigation is prevalent. This relevance underscores the utility of our dataset for studying interaction dynamics that are directly applicable to these contexts.\\n\\n - **Potential utility of two-person navigation dataset:** Two-person interactions are foundational to understanding more complex multi-person dynamics, as they allow us to study detailed interpersonal behaviors such as proxemics, trajectory negotiation, and mutual space adaptation without the confounding variables introduced by larger groups. In the future, researchers can build upon our dataset to study two-person interactions in isolation or as a basis for modeling interactions in more complex, multi-person environments. \\n\\n---\\n\\n- **What is the role of full-body motion in our work?:**\\n\\nWe incorporated the motion capture system primarily to visualize avatar motions, allowing participants to recognize the movements of others. Consequently, **including full-body motion data in the dataset is not our primary focus**. 
In the main paper, we mentioned full-body motion as **auxiliary information included in the dataset**, noting that head pose (yaw direction) from the raw motion tracking data was incorporated to maximize performance in our evaluation tasks. The head pose offers valuable insights for inferring human intentions, which facilitates more accurate future predictions, as demonstrated in the ablation study (Appendix C).\n\n---\n\n- **Why do inaccuracies in avatar motion occur?:**\n\nAs seen in the video, avatar motions may occasionally display unnatural joint movements. **This issue arises from the performance of the inverse kinematics (IK) software (FINAL-IK)**, which reconstructs avatar motion using sparse motion trackers placed on the body (head, waist, hands, and feet). However, these slight inaccuracies in body motion **do not impact the contribution of our experiment**, as our focus is on room-scale human dynamics rather than fine-grained body movements.\n\nOur dataset currently includes raw data from sparse motion trackers, which is highly accurate (within a few millimeters). For users requiring precise avatar motion, applying state-of-the-art IK algorithms to the raw tracker data would reconstruct more accurate avatar movements than those displayed in our video.\"}", "{\"title\": \"24 hours remaining before the paper revision deadline\", \"comment\": \"We hope this message finds you well. With 24 hours remaining before the revision deadline, we kindly request your feedback on the remaining concerns based on our responses. We understand the review process is time-consuming, but your feedback is invaluable in shaping the final outcome. 
Thank you again for your time and effort in reviewing our work.\"}", "{\"summary\": \"1) LocoVR introduces a large dataset with 7000 two-person interactions across 130 diverse indoor environments in VR -- includes full body pose data and spatial information.\n2) LocoVR improves model performance in 3 indoor tasks including human trajectories and predicting socially aware navigation patterns in home environments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1) LocoVR\\u2019s large, indoor locomotion dataset records two-person interactions across 130 diverse indoor environments\n2) Introduction of motion proxemics in a large-scale dataset which is useful for downstream tasks such as studying human-human interactions and potentially human-robot interactions\n3) Rigorously quantitatively evaluated against strong baselines; baseline configurations and settings are fair and well-documented \n4) Data of two-person trajectories is useful to study motion proxemics \n\nThe authors introduce an exciting/ relevant problem. The problem is well-motivated while the execution and evaluations are strong. I see many potential downstream applications and I believe that this will make a huge impact in the robotics field.\", \"weaknesses\": \"1) Potential Overfitting to VR-Specific Biases:\nI am curious what the authors have done to further minimize the gap between real-world scenes vs VR scenes. Are there any obstacle perception features e.g., vibrational feedback when participants bump into objects in the scene? \n\n2) *Qualitative results: \\u2018as the trajectory progresses, the probability distribution of the goal area narrows down near the true goal object\\u2019*\nI think that it is reasonable to assume that humans narrow the probability distribution in LocoVR closer to the true goal object. However, I would like to understand exactly why this observation arises from LocoVR\\u2019s dataset. 
Is it merely the size of the dataset or is it the difference in data collected in LocoVR vs the other datasets? Are the authors claiming that this is a strength as it mimics the probability distribution of human trajectories better? If so, I would like to see analysis of this phenomenon in the different datasets. \\n\\n3) Cultural/ societal biases of motion proxemics:\\nSince motion proxemics is influenced by cultural norms, authors should show aggregated user demographics. It would also be interesting to see motion proxemics based on the aggregated clusters of people.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the response.\", \"comment\": \"After reading the rebuttal and other reviews, I have raised my rate to 6.\"}", "{\"title\": \"24 hours remaining before the paper revision deadline\", \"comment\": \"Dear Reviewer PhCB,\\n\\nWe hope this message finds you well. With 24 hours remaining before the revision deadline, we kindly request your feedback on the remaining concerns based on our responses. We understand the review process is time-consuming, but your feedback is invaluable in shaping the final outcome. Thank you again for your time and effort in reviewing our work.\"}", "{\"summary\": \"This paper introduces a dataset for multi-user indoor navigation collected in a Virtual Reality (VR) environment. The dataset includes 7,071 trajectories with 2.5 million frames across 131 scenes. 
Baseline methods, such as A* and U-Net, are used to demonstrate and analyze the proposed dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The dataset is based on real human subject studies, and the use of VR environments facilitates data collection efforts.\", \"This open-source dataset could benefit the research community focused on social navigation.\"], \"weaknesses\": [\"Novelty and contributions are key concerns. Although the human subject studies require significant time and the dataset a reasonable sample size, the dataset has not been demonstrated, for example, to train state-of-the-art neural network models. The demonstrated methods are relatively simple. The dataset is also limited to two-person navigation scenarios.\", \"Given that the objective of the dataset is to estimate user trajectories and goal positions, without addressing the estimation of human body motions, why is VR more advantageous than an overhead camera with a bird-eye view?\", \"How are real human full-body motions in the physical world synchronized with the virtual environment?\", \"Research (e.g., in human-robot interaction) has shown that humans respond differently to virtual agents compared to physical agents. 
The authors are encouraged to provide a study and analysis on whether this difference exists and, if so, how significant it is.\", \"The work mentions robotics as a motivation and application scenario; however, related research on social robot navigation is not well reviewed.\"], \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion period approaches its conclusion in the coming days\", \"comment\": \"**We have done our utmost to address the concerns raised and improve our work based on your valuable comments.**\\n\\n**If you have any additional questions or points of clarification, we would be delighted to engage in further discussion to ensure that all your concerns are thoroughly resolved.**\\n\\n**Thank you once again for your invaluable contributions, and we look forward to hearing from you.**\"}", "{\"title\": \"Thank you for the response!\", \"comment\": [\"We appreciate the reviewer\\u2019s constructive suggestion. After careful consideration, we concluded that the revision better highlights the primary contribution of our dataset. 
The revised sentences in the manuscript are as follows:\", \"Abstract (L20): Revised \\\"full-body motion\\\" to **\\\"accurate trajectory data\\\"**.\", \"Introduction (L43): Revised \\\"full-body motions\\\" to **\\\"trajectories\\\"**.\", \"Figure 1 caption (L89): Revised \\\"full-body motion\\\" to **\\\"trajectory\\\"**.\", \"Section 3.1 Overview (L154): Revised \\\"it includes full-body human poses, along with head orientation data in addition to trajectories\\\" to **\\\"it includes body tracker data on head/waist/hands/feet as auxiliary information\\\"**.\", \"Conclusion (L531): Revised \\\"full-body motions\\\" to **\\\"accurate trajectory\\\"**.\", \"All revisions have been highlighted in blue in the manuscript.\", \"---\"]}", "{\"title\": \"Response to Reviewer PhCB (2/2)\", \"comment\": \"---\\n\\n> **R3:\\nThis paper only focus on a very simple social navigation scenario of two persons avoiding each other. However, the social navigation behaviors can be much more complex. For example, humans do not only avoid each other but also collaborates and coordinates, consider the cases where one person is leading the way and other persons follow the leading one, and two persons walk to each other to talk. It is also necessary to include social scenarios with more than two persons.**\\n\\n**A3:**\\nWe agree that variations in locomotion patterns involving more than two individuals may occur in real-world scenarios, particularly in open public spaces. However, our research focuses on private indoor settings where the number of pedestrians is typically very limited, and individuals generally move independently. We believe this focus does not diminish the contribution of our dataset, as scenarios in private indoor settings are both common and essential in real-world contexts, yet they have been largely overlooked by most existing datasets. 
Please see the following for a detailed discussion.\\n\\n - **Our goal:**\\nIn contrast to conventional studies emphasizing crowd dynamics in open public spaces, our research primarily focuses on social motion behaviors in room-scale private settings. In such environments, individuals exhibit more individualized social behaviors constrained by narrow geometries, such as taking longer detours to avoid others, yielding paths, or maintaining social distance while passing. Given the lack of existing datasets addressing this specific problem setting, our goal is to provide a fundamental resource that is both targeted and extensible, serving as a stepping stone for future datasets that could scale up to include more individuals or different interaction settings.\\n\\n - **Two-person navigation scenarios in real-world relevance:**\\nIn home environments, interactions involving more than two people are relatively uncommon. Recent census data indicates that 60% of households in the U.S. consist of two people or fewer. Even in households with three or more members, scenarios where more than two individuals navigate a space simultaneously are rare, as residents generally move independently rather than engaging in collaborative or coordinated movements within the private setting. 
Similarly, other private spaces such as small offices, clinics, or hotel rooms are common examples of everyday environments where two-person navigation is prevalent.\\nThis relevance underscores the utility of our dataset for studying interaction dynamics that are directly applicable to these contexts.\\n\\n - **Potential utility of two-person navigation dataset:**\\nTwo-person interactions are foundational to understanding more complex multi-person dynamics, as they allow us to study detailed interpersonal behaviors such as proxemics, trajectory negotiation, and mutual space adaptation without the confounding variables introduced by larger groups.\\nIn the future, researchers can build upon our dataset to study two-person interactions in isolation or as a basis for modeling interactions in more complex, multi-person environments. \\n\\nWe thank the reviewer for highlighting this point, and we included the discussion in the paper.\\n(Appendix.I.3, highlighted in blue)\\n\\n---\\n\\n> **R4:\\nIn appendix H, why are time windows and intervals set as the presented numbers? Are there any motivation or empirical study?**\\n\\n**A4:**\\nWe thank the reviewer for highlighting the motivation of the parameter settings, which allowed us to improve the clarity of the paper.\\nThe parameter settings are primarily motivated by the characteristics of each task and the statistics of the dataset. For detailed information, please refer to Appendix H in the revised main paper (Appendix.H, highlighted in blue).\\n\\n---\\n\\n> **R5:\\nL860, figures should be tables?**\\n\\n**A5:**\\nWe appreciate the reviewer for pointing out the mistake.\\nWe have modified the word, from \\\"Figures\\\" to \\\"Tables\\\". (Appendix.D, highlighted in blue)\"}", "{\"title\": \"Response to Reviewer 6iZX (2/3)\", \"comment\": \"---\\n\\n> **R3:\\nThe use of A\\\\* baselines seems inappropriate for two-person interaction scenarios. I notice this dataset mainly focuses on obstacle avoidance. 
There appears to be a lack of interactive behaviors between the two persons. It may not be enough if two persons just operate independently and avoid the other person within the same space. I think it doesn't reflect scenarios often seen in real life. Can the authors provide more information on the distribution of action types within the dataset? Given this is a dataset paper, more statistics and descriptions would be beneficial.**\n\n**A3:**\n\n**- On the Use of A\\* Baselines**\nThe original A* algorithm is not designed to handle two-person interaction scenarios; however, we employed it as a foundational baseline to benchmark basic trajectory prediction and obstacle avoidance capabilities. In contrast, we used A*+Unet baselines to account for social motion behaviors, enabling dataset comparisons. Specifically, A* serves as a deterministic trajectory generator, guided by the probabilistic trajectory distributions produced by Unet models trained on the benchmark datasets. For this reason, we consider A* as a valuable baseline algorithm for our evaluation.\n\n**- Perceived Focus on Obstacle Avoidance** \nWhile obstacle avoidance is one component, our dataset is not limited to this focus. The trajectories reflect behaviors that go beyond mere avoidance, capturing the nuanced adjustments people make in response to shared space constraints. These include: \n- Dynamic negotiation of personal space in motion. \n- Proximity-based trajectory adjustments that align with real-world social norms.\n\nThis behavior is called **social navigation**, which has been a hot research topic in the robotics field. We offered a new dataset that contains **two-person trajectories across diverse indoor scenes**, which could make an impact on the community. To clarify our contribution to social navigation, we have modified the related works in the manuscript. 
Please refer to Section 2.1 in the manuscript, highlighted in blue.\n\nFor more information on social motion behavior in home environments, please visit [our anonymized website](https://sites.google.com/view/locovr?usp=sharing) to explore typical examples of social motion behaviors featured in LocoVR (Figure 2) and those generated by our trained models (Figure 3). Furthermore, our manuscript (Figure 6 in Appendix E) compares the performance of socially-aware trajectory prediction using single-person and two-person data. The results show that models trained on two-person data successfully predict socially-aware trajectories, while those trained on single-person data do not.\n\n**- Distribution of Action Types and Additional Statistics** \nWe appreciate the suggestion to provide a more detailed breakdown of the behaviors captured in the dataset. In the revision, we have included additional statistics, following the prior work on the social navigation dataset [1], which serves as one of the benchmarks in our evaluation.\nWe have included **Path efficiency (trajectory complexity)**, **Motion speed**, and **Minimal distance between individuals**, as outlined in the referenced paper. Additionally, we have introduced **Relative speed between individuals** and **Number of speed changes along the trajectory** to further quantify the characteristics of our dataset.\nThe updates are included in Appendix I.1 of the revised manuscript, highlighted in blue.\n\n[1] Schreiter, Tim, et al. \"TH\\u00d6R-MAGNI: A large-scale indoor motion capture recording of human movement and robot interaction.\" The International Journal of Robotics Research (2024).\n\n---\n\n> **R4:\nThe paper seems to focus on locomotion. 
Without interactions like sitting on a sofa or standing up from a chair, does the goal prediction remain compelling?**\n\n**A4:**\nWe consider goal prediction to be one of the crucial tasks in home environments, particularly in its application to human action prediction. For instance, if a person is walking while holding a glass, several possible subsequent actions could be inferred based on the scene context: pouring milk at the fridge, placing the glass on the dining table to set up breakfast, bringing it to the couch to hand it to someone, and so on. Our trajectory-based goal prediction approach helps narrow down these candidate actions by predicting the target object based on the past trajectory, thereby improving the accuracy of action predictions.\n\nIt is important to note that LocoVR is designed to facilitate research on indoor human trajectories and serves as a foundational resource for exploring relationships between motion patterns, goal positions, and indoor scene geometries. While our paper highlights LocoVR's contribution in social navigation, the capability of LocoVR is not limited to that task. We believe LocoVR holds potential for extended applications, such as inferring indoor layouts from trajectory data or studying space utilization in shared environments.\"}", "{\"title\": \"Response to Reviewer AaG4 (1/2)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback, which has greatly enhanced the clarity and quality of our paper. In response to the reviewer's concerns, we have carefully addressed them as outlined below.\n\n---\n\n> **R1: Novelty and contributions are key concerns. Although the human subject studies require significant time and the dataset a reasonable sample size, the dataset has not been demonstrated, for example, to train state-of-the-art neural network models. 
The demonstrated methods are relatively simple.**\n\n**A1:**\n\n**Evaluation with state-of-the-art models:**\nAlthough many studies have proposed methods to learn human trajectories, most are designed to capture the dynamics of multi-person trajectories in open-space environments and struggle to handle the complexities of indoor navigation due to the limited capability of considering complex room layouts. We employed Ynet [2] as a state-of-the-art benchmark since, to the best of our knowledge, it is the most recent method capable of predicting human trajectories while accounting for complex indoor geometries. Ynet has also been used in recent robotics research [3] as a state-of-the-art benchmark for evaluating trajectory prediction models in indoor scenes. Additionally, we implemented UNet-based models tailored to our specific tasks to compare the performance of the models trained on relevant datasets.\n\n[2] K. Mangalam, et al., \"From goals, waypoints and paths to long term human trajectory forecasting.\" Proceedings of ICCV. 2021.\n\n[3] G. Nicolas, et al., \\u201cLong-Term Human Trajectory Prediction Using 3D Dynamic Scene Graphs\\u201d, IEEE RA Letters, 2024, 9(12), pp.10978-10985\n\n**Novelty and contribution of our work:**\nOur contribution is the creation of the first dataset that records a wide variety of human trajectories reflecting social motion dynamics within diverse indoor environments. To achieve this diversity efficiently, we utilized a VR-based data collection system. LocoVR is designed to facilitate research on indoor human trajectories and serves as a foundational resource for exploring relationships between motion patterns, goal positions, and indoor scene geometries. Our experiments highlight LocoVR's utility in key tasks, such as socially aware navigation and goal prediction. 
Additionally, the dataset holds potential for extended applications, such as inferring indoor layouts from trajectory data or studying space utilization in shared environments. By focusing on indoor social motion behaviors, LocoVR provides a unique resource for advancing research on human-centered motion modeling, particularly in confined, interaction-driven settings.\\n\\n---\\n\\n> **R2:\\nThe dataset is also limited to two-person navigation scenarios.**\\n\\n**A2:**\\n - **Our goal:** In contrast to conventional studies emphasizing crowd dynamics in open public spaces, our research primarily focuses on social motion behaviors in room-scale private settings. In such environments, individuals exhibit more individualized social behaviors constrained by narrow geometries, such as taking longer detours to avoid others, yielding paths, or maintaining social distance while passing. Given the lack of existing datasets addressing this specific problem setting, our goal is to provide a fundamental resource that is both targeted and extensible, serving as a stepping stone for future datasets that could scale up to include more individuals or different interaction settings.\\n\\n - **Two-person navigation scenarios in real-world:** In home environments, interactions involving more than two people are relatively uncommon. Recent census data indicates that 60% of households in the U.S. consist of two people or fewer. Even in households with three or more members, scenarios where more than two individuals navigate a space simultaneously are rare, as residents generally move independently rather than engaging in collaborative or coordinated movements within the private setting. Similarly, other private spaces such as small offices, clinics, or hotel rooms are common examples of everyday environments where two-person navigation is prevalent. 
This relevance underscores the utility of our dataset for studying interaction dynamics that are directly applicable to these contexts.\\n\\n - **Potential utility of two-person navigation dataset:** Two-person interactions are foundational to understanding more complex multi-person dynamics, as they allow us to study detailed interpersonal behaviors such as proxemics, trajectory negotiation, and mutual space adaptation without the confounding variables introduced by larger groups. In the future, researchers can build upon our dataset to study two-person interactions in isolation or as a basis for modeling interactions in more complex, multi-person environments. \\n\\nWe thank the reviewer for highlighting this point, and we included the motivation of two-person setting in the paper.\\n(Appendix.I.3, highlighted in blue)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer AaG4 (2/2)\", \"comment\": \"---\\n\\n> **R3:\\nGiven that the objective of the dataset is to estimate user trajectories and goal positions, without addressing the estimation of human body motions, why is VR more advantageous than an overhead camera with a bird-eye view?**\\n\\n**A3:**\\nCollecting data in physical home environments is inherently time-intensive, requiring experimenters and participants to travel to the designated location, capture the room layout, set up cameras, and conduct the experiment. Additionally, overhead cameras mounted on ceilings with limited heights are prone to blind spots caused by obstacles, making it challenging to accurately track participants' positions. These challenges have contributed to the lack of diverse indoor locomotion datasets across various scenes. \\n\\nIn contrast, our VR system enables seamless scene switching with a single button click, eliminating the need for physical layout measurements and ensuring precise capture of participants' positions. 
Thus, collecting two-person trajectory data in VR offers significant advantages in terms of efficiency and diversity, allowing for the collection of trajectory data across various scenarios in a controlled and repeatable manner. \n\n---\n\n> **R4:\nHow are real human full-body motions in the physical world synchronized with the virtual environment?**\n\n**A4:**\nThe 6-point positions/poses of the body (head/waist/hands/feet) are tracked using the HTC VIVE motion capture system. These tracked points are then translated into avatar motions in the VR space through IK (Inverse Kinematics) software (FINAL-IK).\n\nWhile the avatar motion is primarily used for visualization purposes in our study, we have included the raw motion tracking data (6 points on the body) in the dataset. This data is highly accurate, with deviations within a few millimeters, and is available for users to reconstruct the avatar motions through state-of-the-art IK algorithms.\n\n---\n\n> **R5:\nResearch (e.g., in human-robot interaction) has shown that humans respond differently to virtual agents compared to physical agents. The authors are encouraged to provide a study and analysis on whether this difference exists and, if so, how significant it is.**\n\n**A5:**\nWe think that the impact of the gap between VR and real-world environments varies depending on the task type. For locomotion tasks, we argue that this impact is minimal and does not affect the overall contribution of our dataset for the following reasons:\n\n(1) In our experiment, participants were aware that the virtual avatars were synchronized with real humans sharing the same physical space. This awareness discouraged socially or physically inappropriate behavior, mitigating the potential impact of the VR/real gap, as shown in the studies on VR locomotion [1][2]. \n\n(2) Our evaluation used locomotion data collected in physical spaces as test data. 
Models trained on the LocoVR dataset outperformed those trained on other physically collected datasets (GIMO/THOR-MAGNI), demonstrating that VR-collected data is effective when applied to real-world scenarios. (Tested on LocoReal: Main paper Section.4, tested on GIMO: Appendix D.1)\n\nWe have included the above discussion in the revised manuscript to clarify the influence of the VR/real gap issue. (Appendix.J, highlighted in blue)\n\n[1] H. Yun, Y. Watanabe, A. Yamada, \"Exploring the Role of Expected Collision Feedback in Crowded Virtual Environments,\" Proc. IEEE Conf. Virtual Reality and 3D User Interfaces, 2024.\n\n[2] A. L. Simeone, I. Mavridou, and W. Powell, \"Altering user movement behaviour in virtual environments,\" IEEE Transactions on Visualization and Computer Graphics, vol. 23, no. 4, pp. 1312\\u20131321, 2017.\n\n---\n\n> **R6:\nThe work mentions robotics as a motivation and application scenario; however, related research on social robot navigation is not well reviewed.**\n\n**A6:**\nWe appreciate the reviewer for this insightful suggestion. We agree with the reviewer that incorporating the domain of social robot navigation into the related work provides a more comprehensive context for our contributions. We have added works relevant to social robot navigation in the related works section. (Section 2.1, highlighted in blue)\n\n---\n\n**[Final Note]** Thank you once again for the insightful review. We believe the revision has definitely strengthened the quality of our paper. 
If there is anything else we can clarify or elaborate on, please do not hesitate to let us know.\"}", "{\"title\": \"Discussion period approaches its conclusion in the coming days\", \"comment\": \"**We have done our utmost to address the concerns raised and improve our work based on your valuable comments.**\n\n**If you have any additional questions or points of clarification, we would be delighted to engage in further discussion to ensure that all your concerns are thoroughly resolved.**\n\n**Thank you once again for your invaluable contributions, and we look forward to hearing from you.**\"}", "{\"title\": \"Response to Reviewer 6iZX (1/3)\", \"comment\": \"We sincerely thank the reviewer for their valuable feedback, which has greatly enhanced the clarity and quality of our paper. In response to the reviewer's concerns, we have carefully addressed them as outlined below.\n\n---\n\n> **R1:\n> While using VR to collect human trajectory data is helpful, this paper would benefit from a discussion in the related works section about VR and human motion. For instance, referencing works like \"QuestEnvSim: Environment-aware Simulated Motion Tracking from Sparse Data\" in SIGGRAPH 2023 which uses VR for motion tracking and \"Strategy and Skill Learning for Physics-based Table Tennis Animation\" in SIGGRAPH 2024 which involves interaction between human and humanoid agents.**\n\n**A1:**\nThank you for your insightful comment. We agree with the reviewer that incorporating the domain of VR and human motion into the related work provides a more comprehensive context for our contributions.\nWe have created a new section and, after careful consideration, included relevant works that analyze human behavior using VR, including the two papers suggested by the reviewer. 
Please see Section.2.3 in our revised manuscript (highlighted in blue).\\n\\n---\\n\\n> **R2:\\nI notice authors utilize motion capture to provide whole body motion, and I wonder the reason to consider only experiments of path, trajectory and goal prediction. The occasionally unnatural motion observed in the video could be explained.**\\n\\n**A2:**\\n**Full-body pose is auxiliary information but not our main focus:**\\nWe incorporated the motion capture system primarily to visualize avatar motions, allowing participants to recognize the movements of others. Consequently, including full-body motion data in the dataset is not our primary focus. In the main paper, we mentioned full-body motion as auxiliary information included in the dataset, noting that head pose (yaw direction) from the raw motion tracking data was incorporated to maximize performance in our evaluation tasks. The head pose offers valuable insights for inferring human intentions, which facilitates more accurate future predictions, as demonstrated in the ablation study (Appendix C).\\n\\nAdditionally, after careful consideration, we have revised the manuscript by replacing 'full-body motion' with alternative terms or explicitly clarifying it as auxiliary data. We believe these revisions better emphasize our primary contribution. 
The updated sentences in the manuscript are as follows:\\n\\n- Abstract (L20): Revised \\\"full-body motion\\\" to \\\"accurate trajectory data\\\".\\n\\n- Introduction (L43): Revised \\\"full-body motions\\\" to \\\"trajectories\\\".\\n\\n- Figure 1 caption (L89): Revised \\\"full-body motion\\\" to \\\"trajectory\\\".\\n\\n- Section 3.1 Overview (L154): Revised \\\"it includes full-body human poses, along with head orientation data in addition to trajectories\\\" to \\\"it includes body tracker data on head/waist/hands/feet as auxiliary information\\\".\\n\\n- Conclusion (L531): Revised \\\"full-body motions\\\" to \\\"accurate trajectory\\\".\\n\\nAll revisions have been highlighted in blue in the manuscript.\\n\\n**Inaccuracies in avatar motions:**\\nAs the reviewer noted, avatar motions may occasionally display unnatural joint movements. This issue arises from the performance of the inverse kinematics (IK) software (FINAL-IK), which reconstructs avatar motion using sparse motion trackers placed on the body (head, waist, hands, and feet).\\nHowever, these slight inaccuracies in body motion do not impact the contribution of our experiment, as our focus is on room-scale human dynamics rather than fine-grained body movements.\\n\\nOur dataset currently includes raw data from sparse motion trackers, which is highly accurate (within a few millimeters). For users requiring precise avatar motion, applying state-of-the-art IK algorithms to the raw tracker data would reconstruct more accurate avatar movements than those displayed in our video.\"}", "{\"title\": \"Response to Reviewer 1K21(2/2)\", \"comment\": \"---\\n\\n> **R3:\\nCultural/ societal biases of motion proxemics: Since motion proxemics is influenced by cultural norms, authors should show aggregated user demographics. 
It would also be interesting to see motion proxemics based on the aggregated clusters of people.**\\n\\n**A3:**\\nThere were 32 participants in total, comprising 21 males and 11 females, with ages ranging from 18 to 42. From this pool, pairs were formed to conduct 25 experiments, each involving a unique pair (Table 1). \\nThe experiments included various combinations of male-male, female-female, and male-female pairs, as well as pairs of friends and non-friends, as shown in Table 2.\\n\\nAs the reviewer pointed out, reactions between pairs in close proximity are influenced by attributes and interpersonal relationships. Further data analysis may provide new insights into the relationship between these attributes, relationships, and behavioral patterns. It could be an intriguing study from the perspective of cognitive and behavioral sciences. We thank the reviewer for this valuable insight, and have added the information shown in this reply to the revised manuscript. (Appendix I.2, highlighted in blue)\\n\\nTable 1: User demographics (categorized by gender and age)\\n| Age | Male | Female | \\n|------------|----------|----------|\\n| under 20 | 5 | 3 |\\n| 20 to 29 | 15 | 8 |\\n| over 30 | 1 | 0 |\\n\\n\\nTable 2: Diversity of pairs (categorized by gender and relationship)\\n| | Male-Male | Female-Female | Male-Female | \\n|--------------|-------------|---------------|-------------|\\n| Friends | 2 | 2 | 5 | \\n| Non-friends | 9 | 1 | 6 |\\n\\n\\n\\n\\n---\\n\\n**[Final Note]** Thank you once again for the insightful review. We believe the revision has strengthened the quality of our paper. If there is anything else we can clarify or elaborate on, please do not hesitate to let us know.\"}", "{\"summary\": \"This paper focuses on understanding human locomotion behaviors in indoor environments, with the specific scenario of two persons walking to two separate goals in an indoor room. 
The main contribution is a new dataset LocoVR capturing sequences of two persons walking to their goal locations in 3D indoor rooms. To overcome the high cost of capturing locomotion in physical scenes, this paper proposes a VR-based solution where the subjects wear a VR device and navigate through a virtual 3D room displayed in VR. The human locomotion trajectories are tracked using VR devices. This VR capture solution scales up locomotion capture to 130+ scenes. The authors then conducted experiments validating that the collected LocoVR dataset outperforms existing datasets in three tasks: global path prediction, trajectory prediction, and goal prediction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed VR capture solution eliminates the high cost of physically setting up indoor scenes and capturing human movements, which facilitates scaling up locomotion capture to many more scenes compared to previous datasets.\\n\\n2. This paper captures two-person goal-reaching motions which include the social navigation behaviors such as adjusting the path to respect personal boundaries or side steps to give way to another person. Such social navigation behavior is not covered in most previous datasets and is important for understanding multi-human social navigation and potential human-robot interactions.\\n\\n3. Experiments on three navigation-related tasks show that models trained on the LocoVR dataset consistently outperform models trained on existing datasets when tested on a real-world two-person locomotion test set.\", \"weaknesses\": \"1. Although this paper claims to provide full body pose data (L20), the human motion capture is far from realistic according to the supplementary video (0:00-0:30). If aiming for capturing full body poses, it may be necessary to change from the HTC VIVE tracking to marker-based motion capture as in CIRCLE (Araujo et al., 2023). 
With the currently presented results, I recommend removing the claims of full body pose data since all experiments only use trajectory data.\\n\\n2. The VR capture system is limited to simple behaviors assuming a flat floor scene and no contact-based close object interactions. It can only capture locomotion or reaching behaviors as in CIRCLE (Araujo et al., 2023). The VR capture system cannot work for behaviors like lying on a sofa or walking up stairs. When the humans try to do such interactions in VR, they are actually interacting with air in the real world and will fall. This virtual-real inconsistency can also cause the subjects to slightly walk into obstacles, as discussed in the paper.\\n\\n3. This paper only focuses on a very simple social navigation scenario of two persons avoiding each other. However, social navigation behaviors can be much more complex. For example, humans do not only avoid each other but also collaborate and coordinate; consider the cases where one person is leading the way and other persons follow the leading one, or where two persons walk toward each other to talk. It is also necessary to include social scenarios with more than two persons.
With 24 hours remaining before the revision deadline, we kindly request your feedback on the remaining concerns based on our responses. We understand the review process is time-consuming, but your feedback is invaluable in shaping the final outcome. Thank you again for your time and effort in reviewing our work.\"}", "{\"title\": \"Summary of the review (2/2)\", \"comment\": [\"# **Summary of the review: Common concerns raised by the reviewers**\", \"**1. The gap between VR and real-world settings could influence the motion behavior of participants (Reviewer AaG4, Reviewer 1K21, Reviewer PhCB):**\", \"**Short answer:** We think that the impact of the gap between VR and real-world environments varies depending on the task type. For locomotion tasks, we argue that this impact is minimal and does not affect the overall contribution of our dataset for the following reasons: (1) the use of a real human-based experimental setting, and (2) the demonstrated performance on real-world data.\", \"**Detailed discussions are described in the response to each reviewer.**\", \"**2. Focus on two-person navigation may limit its applicability in real-world scenarios. (Reviewer AaG4, Reviewer Fpz9, Reviewer PhCB):**\", \"**Short answer:** We agree that variations in locomotion patterns involving more than two individuals may occur in real-world scenarios, particularly in open public spaces. However, our research focuses on private indoor settings where the number of pedestrians is typically very limited, and individuals generally move independently. We believe this focus does not diminish the contribution of our dataset, as scenarios in private indoor settings are both common and essential in real-world contexts, yet they have been largely overlooked by most existing datasets. **Detailed discussions are described in the response to each reviewer.**\", \"**3. 
Role of the full-body motion is not clear (Reviewer 6iZX, Reviewer PhCB):**\", \"**Short answer:** We incorporated the motion capture system primarily to visualize avatar motions, allowing participants to recognize the movements of others. Consequently, full-body motion data is auxiliary information and not the primary focus of our dataset. **Detailed discussions are described in the response to each reviewer.**\", \"**4. Avatar motions in the video partly look unnatural (Reviewer 6iZX, Reviewer PhCB):**\", \"**Short answer:** Unnatural joint movements in avatar motions occasionally appear due to the limitations of the inverse kinematics (IK) software (FINAL-IK) performance; however, these minor inaccuracies do not impact the contribution of our experiment, as our focus is on room-scale human dynamics rather than fine-grained body movements. **Detailed discussions are described in the response to each reviewer.**\", \"---\", \"## **Paper revisions:**\", \"---\", \"**Newly included discussions:**\", \"Discussion on the influence of the gap between VR/Real: Appendix J (Reviewer AaG4, Reviewer 1K21, Reviewer PhCB)\", \"Discussion on the motivation of our problem setting (two-person locomotion): Appendix I.3 (Reviewer AaG4, Reviewer Fpz9, Reviewer PhCB)\", \"Discussion on the occasional inaccuracy in avatar motion: Appendix F.5 (Reviewer 6iZX, Reviewer PhCB)\", \"Description on the prior works on the \\\"robot social navigation\\\" in the related works: Section 2.1 (Reviewer AaG4)\", \"Description on the prior works on the \\\"VR-based human behavior analysis\\\" in the related works: Section 2.3 (Reviewer 6iZX)\", \"Discussion on potential future applications in the future work: Section 5 (Reviewer Fpz9)\", \"Description on the motivation of parameter settings in data augmentation: Appendix H (Reviewer PhCB)\", \"**Newly included data:**\", \"Influence of scene information types - Difference in performance with Binary obstacle map / Semantic map / Height map: Appendix 
D.2 (Reviewer 6iZX)\", \"Demographics in participants and variations in locomotion pairs: Appendix I.2 (Reviewer 1K21)\", \"Additional data statistics in LocoVR: Appendix I.1 (Reviewer 6iZX)\", \"**Modifications:**\", \"Modified the statements on the \\\"full-body motion\\\" to clarify it is auxiliary information in the dataset, not our prior focus: (Reviewer 6iZX, Reviewer PhCB)\", \"Abstract (L20): Revised \\\"full-body motion\\\" to \\\"accurate trajectory data\\\".\", \"Introduction (L43): Revised \\\"full-body motions\\\" to \\\"trajectories\\\".\", \"Figure 1 caption (L89): Revised \\\"full-body motion\\\" to \\\"trajectory\\\".\", \"Section 3.1 Overview (L154): Revised \\\"it includes full-body human poses, along with head orientation data in addition to trajectories\\\" to \\\"it includes body tracker data on head/waist/hands/feet as auxiliary information\\\".\", \"Conclusion (L531): Revised \\\"full-body motions\\\" to \\\"accurate trajectory\\\".\", \"Modified of a typo: Appendix D (Reviewer PhCB)\", \"---\"]}" ] }
9ljHiYuRHl
Failure Modes of LLMs for Causal Reasoning on Narratives
[ "Khurram Yamin", "Shantanu Gupta", "Gaurav Rohit Ghosal", "Zachary Chase Lipton", "Bryan Wilder" ]
In this work, we investigate the causal reasoning abilities of large language models (LLMs) through the representative problem of inferring causal relationships from narratives. We find that even state-of-the-art language models rely heavily on unreliable shortcuts, both in terms of the narrative presentation and their parametric knowledge. For example, LLMs tend to determine causal relationships based on the temporal ordering of events (i.e., earlier events cause later ones), resulting in lower performance whenever events are not narrated in their exact causal order. Similarly, we demonstrate that LLMs struggle with long-term causal reasoning — they often fail when the narratives are longer and contain many events. As an additional failure mode, we show LLMs appear to heavily rely on their parametric knowledge at the expense of reasoning over the provided narrative. This degrades their abilities whenever the narrative opposes parametric knowledge. We extensively validate these failure modes through carefully controlled synthetic experiments, as well as evaluations on real-world narratives. Finally, we observe that explicitly generating a causal graph generally improves performance while naive chain-of-thought is ineffective. Collectively, our results distill precise failure modes of current state-of-the-art models and can pave the way for future techniques to enhance causal reasoning in LLMs.
[ "Causal Inference", "Large Language Models", "Reasoning", "Narratives" ]
https://openreview.net/pdf?id=9ljHiYuRHl
https://openreview.net/forum?id=9ljHiYuRHl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gjgCIcayXI", "PlAZftfinr", "Mgiv2q3sQi", "9fh4qMp3y5", "8Gg0LYFtXj", "12tBPCFGNi" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1730487921992, 1730655847787, 1730659926434, 1730550443129, 1732534555912, 1732534578326 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12689/Reviewer_Ah2k" ], [ "ICLR.cc/2025/Conference/Submission12689/Reviewer_M8kG" ], [ "ICLR.cc/2025/Conference/Submission12689/Reviewer_PNEG" ], [ "ICLR.cc/2025/Conference/Submission12689/Reviewer_NqdV" ], [ "ICLR.cc/2025/Conference/Submission12689/Authors" ], [ "ICLR.cc/2025/Conference/Submission12689/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In their work, the authors aim to inspect the causal reasoning capabilities of LLMs in natural language texts. The authors study effects on predictive performance in long-term reasoning and investigate performance degradation when reasoning over contextual information that is counterfactual to their learned 'parametric' knowledge.\\n\\nThe authors utilize LLMs to generate (semi-synthetic) narratives about events resembling causal chains from CauseNet. In that, 'anti-causal' relations are purposefully injected, which stand in contrast to factual observations of the real world. Finally, reverse topological narratives are created in which items of the causal chain appear in reverse order.\\n\\nThe authors measure LLM performance by tasking the LLMs to restore the causal graph G from the narrative, using CoT and In-Context learning. In summary, the authors find that performance degrades when presenting narratives in reverse causal order, for 'anti-causal' relations and with longer chains of events. 
CoT and in-context learning do not boost performance, while providing graph information does seem to help even for longer chains.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors provide a series of evaluations on generated, semi-synthetic and real-world causal relations that sufficiently support the claims of the paper. The paper provides a more detailed analysis of causal reasoning capabilities than previous works, and experiments are generally well described.\\n\\nBy reasoning over counterfactual observations, the authors successfully show that LLMs have trouble incorporating contextual information of the prompt that stands in contrast to learned knowledge. While previous work already identified general shortcomings in causal reasoning capabilities of LLMs, the authors provide a more detailed breakdown of particular factors. The found confounding between predictive performance and order of narrative might be particularly important to overcome.\\n\\nWhile the authors test LLMs in a rather restricted setting, the presented results provide clear evidence towards the claimed effects. Limitations on the causal graph structure and types of inspected effects are discussed sufficiently, and one would expect results to only worsen when applied to more complex graphs.\", \"weaknesses\": \"While I agree on many interpretations of the presented results, I have several questions regarding the soundness of some experimental setups and the provision of graph information:\\n\\n1) In section 3.2, the authors give an example of a \\\"Film festival \\u2192 Food truck rally \\u2192 Trampoline park party\\\" narrative. While the goal of generating a reversed narrative was pursued, the term 'Trampoline park party' was again mentioned after the 'Film festival' in the last sentence, establishing a relation in the correct relation order. 
Enforcing items of the causal chain to appear in reversed order might pose a challenging task, as item positions can get switched due to the syntax of natural language. It would be interesting to know how often such generation 'errors' occur within the data. The paper could be improved by analyzing to what extent LLM generations obey the desired ordering of events.\\n2) Throughout the paper, the authors refer to 'anti-causal' effects to indicate inversion of the causal effect *strength*, e.g. \\\"cancer->longer life\\\". The term 'anti-causal' is commonly reserved to indicate a reversed causal *direction*, e.g. \\\"cancer<-longer life\\\". This is quite confusing, and given that it appears in the prompt presented in A.1.3, might have an impact on narrative generation. I would like to suggest considering an alternative wording, e.g. counterfactual effects.\\n3) The authors provide causal graph information to the LLM during several experiments. While this seems to improve LLM performance, I could not find any details on the exact format of the provided graph information. It could be that graph information is provided in the correct causal order, which would again improve LLM performance due to ordering bias and not due to reasoning capabilities of the LLM. The paper could be improved by further specifying and, if needed, randomizing the order of the provided causal effects in the experiments.\\n4) The reordering of relations in the experimental setup of section 3.3 might conceal the true influence of the anti-causal edges on the predictive performance of LLMs. As reordering was previously shown to reduce performance, it becomes unclear whether anti-causality or reordering of relations is the true cause of the performance degradation. 
I believe that effects of anti-causal relations might be better demonstrated when keeping the ordering intact and simply varying the amount of anti-causal relations within the chains.\\n5) To the best of my understanding, the problem of identifying and reasoning about causal relations in texts is commonly known as event causality identification (ECI; in the field of NL) or simply event causality (when moving closer to classical Pearlian causality). While the authors are able to produce new insights on the failure modes of LLM, several prior works derived similar results and are not discussed. This regards general reasoning with counterfactual contexts (e.g. [1,2,3]), as well as general causal text understanding (e.g. [4,5]). While these works certainly form a niche within the bigger field of causality and LLMs, I would recommend the authors to consider these relations.\\n\\n\\n**Minor**\\n\\n* Unnecessary brackets are added for citations in Sec. 3.1 (l156/157) and in Sec. A.1.9.\\n* The resolution of figure 3 is quite low, such that the legend is unpleasant to read. I would like to suggest to either embed the figure as a vector graphic/PDF or increase its resolution.\\n* Some words seem to be missing in line 248.\\n\\n\\n[1] Li, Jiaxuan, Lang Yu, and Allyson Ettinger. \\\"Counterfactual reasoning: Do language models need world knowledge for causal inference?.\\\" NeurIPS 2022 Workshop on Neuro Causal and Symbolic AI (nCSI). 2022. \\n[2] Li, Jiaxuan, Lang Yu, and Allyson Ettinger. \\\"Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios.\\\" arXiv preprint arXiv:2305.16572 (2023). \\n[3] Frohberg, J\\u00f6rg, and Frank Binder. \\\"Crass: A novel data set and benchmark to test counterfactual reasoning of large language models.\\\" arXiv preprint arXiv:2112.11941 (2021). \\n[4] Gao, Jinglong, et al. \\\"Is chatgpt a good causal reasoner? a comprehensive evaluation.\\\" arXiv preprint arXiv:2305.07375 (2023). 
\\n[5] Ashwani, Swagata, et al. \\\"Cause and Effect: Can Large Language Models Truly Understand Causality?.\\\" arXiv preprint arXiv:2402.18139 (2024).\", \"questions\": \"My questions mainly regard the points mentioned in the weaknesses above. In particular, I would like to ask the authors to comment on the following questions:\\n\\n1. I would like to ask the authors to comment on point 1 of the above weaknesses. In particular, I would like to know whether the generated narratives adhere to the underlying causal structure and ordering. Are there differences in predictive performance of the LLM for samples that deviate from the desired ordering?\\n2. Regarding point 3 of the above weaknesses: Does the order of the provided graph information cohere to the ordering of the causal chain and would this be suited to leak information to the model, possibly improving its performance? Could the authors please elaborate and provide evidence for or against this scenario?\\n3. I would like to ask the authors elaborate on the necessity of reordering events in the experiments of section 3.3. How can the effects of reordering and anti-causal edges be better differentiated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors study the effectiveness of using LLMs to learn the cause effect relationships in a chain graph in both synthetic and real-world settings. The authors report the various modes through which the LLMs fails in this task. These failure modes can be summarized as follows:\\n1. LLMs rely heavily on the order in which the causal relationships are verbalized in the narrative.\\n2. LLMs use their parametric knowledge as a shortcut to answer causal questions.\\n3. 
LLMs fail more often when the narrative becomes larger.\\nThe authors also investigate various prompting strategies that help avoid some of these pitfalls.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper provides a formal characterization of the failure modes of an LLM towards causal reasoning tasks.\", \"Some of these failure modes are quite interesting, for example, the one where introducing the causal variables in reverse order leads to poor performance.\", \"The paper is well presented and written, making it easy to follow.\"], \"weaknesses\": [\"In all the experiments, the narratives provided to the LLM convey significant insight on the causal structure of the chain graph under consideration. Using phrases like \\u201cresults in\\u201d, \\u201cleads to\\u201d, \\u201ccauses\\u201d in the narrative directly places a causal link between the variables. This approach would fail in scenarios where such information is not already available. 
This makes the role of the LLM quite straightforward (at least in the forward narrative setting).\", \"There was no explicit characterization of the error in the chain graph G\\u2019 that is produced by the LLM; without this, it is unclear where the error in this setting stems from.\", \"It would help to provide more insight into why there are inconsistencies between the answers to the causal reasoning task and the chain graph G\\u2019 predicted by the LLM\", \"In light of existing works on the use of LLMs for causal discovery, some of these failures have been observed by past works, which hence advocated hybrid approaches combining LLMs with data-driven methods.\"], \"questions\": [\"In Figure 4, Claude showcases a more significant divide between the forward and reverse settings for the narratives compared to ChatGPT; any insight as to why that is the case?\", \"What would happen if the narrative doesn\\u2019t explicitly state the causal relations using phrases like \\u201cresults-in\\u201d, \\u201ccauses\\u201d?\", \"What was the accuracy of LLM recovering the correct chain graph in the settings \\u201cForward/Reverse-Graph\\u201d, \\u201cCausal/Anti-Causal Graph\\u201d?\", \"Why not combine both chain of thought reasoning and G\\u2019 to see if there is any improvement in performance?\", \"Clearly, prompt-based solutions fail across many reasoning tasks, as reported heavily by past works, and causal discovery is no exception. Several past works suggest combining data-driven methods with LLMs for causal discovery. 
Relying entirely on the inherent knowledge of LLMs has proven to be problematic in reasoning tasks, and it is not clear that this paper sheds significant light on how we can devise better methods for causal reasoning.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics review needed.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the causal reasoning capabilities of recent LLMs through various settings, via synthetic, semi-synthetic and real-world narratives, comparing their consistency on forward and reverse causal narratives against their parametric knowledge (pre-trained model knowledge) via direct prompting, CoT prompting, and graph extraction, with varying lengths of causal (anti-causal) chains. Results suggest that LLMs show considerable bias towards their parametric knowledge, and are more successful in queries that align with it. In general, they perform quite weakly on causal reasoning tasks. The extracted graph mostly leverages their strength since it helps them pay more attention to the whole of the narrative text.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"- Very relevant line of research, with high impact (spotting the failure modes of LLMs on causal reasoning).\\n- Good writing and exposition in general, and easy to follow. \\n- The proposed setting and approach are interesting, and the depth and multitude of the analysis are sufficient.\\n- Overall, results are interesting, intuitive and worth considering. I find the results expected but not trivial. 
I am particularly pleased to see the positive effect of the causal graph, which can help the CoT practice as well, as they put it in the limitations section.\", \"weaknesses\": \"My main concern is that the experiments/results can be misinterpreted and therefore may not be strongly conclusive for the intended tasks, due to the following:\\n\\n1) Vagueness in expressions: This is mainly due to vagueness in expression formulation, especially in the reverse part. Take the example given in the paper: \\\"The film festival set the stage for the rest of the events, creating a desire for cultural experiences and community gatherings that ultimately led to the trampoline park party.\\\" Here, \\\"The film festival set the stage for the rest of the events\\\" sounds as if it took place afterwards due to \\\"rest\\\" of the events, and the part (desire and community gatherings) of the sentence mentioned before the \\\"trampoline park party\\\" races against the \\\"film festival\\\" and distracts the attention, hence the overall degradation in the causal reasoning estimation is likely. \\n\\n2) Conformity to human communication: Another aspect is that LLMs can mis-read your intention, since they can falsely \\\"assume\\\" or \\\"expect\\\" that you make typos or miswrite/miscommunicate. This is a strategic choice in their design (and perhaps even their nature due to their training corpora) to not make them brittle, and to work with real human conversations. \\n\\n\\n3) Referral mismatch. 
You say: \\u201cAlthough this relationship is displayed in the chain of events in the narrative, it is not logical and counter-intuitive\\u201d and answer incorrectly.\\\" Here, although you told the LLMs to ignore the parametric knowledge (or as you put it \\\"explicit instruction to ignore outside information, allow for ill-logical relationships, and answer solely based on the hypothetical narrative\\\"), it may still refer to the \\\"meta-explanation\\\" when it says it is not logical or counter-intuitive since it \\\"informs\\\" the user. So there might be a spurious mismatch between the task you expect vs. the setting the LLM assumes itself to be in. \\n\\n4) On conflicts (unknown vs. opposite): You say: \\\"streambank erosion to higher prices, but this contradicts the LLM\\u2019s parametric knowledge since this causal effect may not typically exist\\\" Here again, I find this a rather vague conflict, and the LLM is likely to conform to the given example when it cannot find counterexamples, and can simply hallucinate. I would rather expect a hard conflict, e.g., inverse causation, something like 'the rooster causes the sunrise'. \\n\\nI appreciate the paper's work, but the above concerns are due to the inherent complexity of dealing with natural language, and its discrepancy against the intended task (hence we should ourselves be causally careful.)\", \"typos\": \"- would would\\n-the our dataset\", \"questions\": \"1) I wonder about your take on the issues I point out above.\\n\\n2) Moreover, you use GPT-4o only in real-world experiments; why is that? \\n\\n3) Why do the overall accuracy scores decrease in Figure 5, in real-world narratives compared to semi-synthetic narratives?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the causal reasoning abilities of Large Language Models (LLMs) in narratives. 
It identifies several limitations, including reliance on event topological order, struggles with long context, and over-reliance on parametric knowledge. Furthermore, the authors demonstrate that explicitly prompting LLMs to generate causal graphs can improve performance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The paper tackles the crucial issue of LLM's causal reasoning ability, a topic of significant current interest.\\n2. The experiment includes both synthetic and real-world data, which provides a more comprehensive assessment.\", \"weaknesses\": \"1. The core contributions and findings of this paper appear to heavily overlap with the previously published work *LLMs Are Prone to Fallacies in Causal Inference*, which **appeared on arXiv in June 2024** and was subsequently **published at EMNLP 2024**. However, this highly relevant and similar prior work is not cited or discussed. Moreover, the technical details and experimental validation are less comprehensive than in the aforementioned paper. Ensuring proper citation of relevant prior work is crucial for advancing academic discourse and providing a transparent foundation for new contributions.\\n2. The evaluation uses yes/no questions, meaning the random guess baseline is 0.5. It is concerning that many results for GPT-4, GPT-4o, and Claude-3.5-Sonnet are below this baseline. While this might be theoretically possible under certain circumstances, the paper does not provide sufficient explanation or analysis of this phenomenon. This lack of explanation weakens the validity of the experimental findings and necessitates further investigation and a detailed error analysis.\\n3. The paper lacks essential details about the evaluation dataset, making it difficult to assess the validity of the results. Crucially, the number of samples tested, the number of nodes in each graph, and the specific types of events present in the narratives are not clearly specified. 
While line 149 mentions that \\\"events are real-world phenomena\\\", this description is too broad and lacks necessary citations. Furthermore, the inconsistency between Figure 5(b) (maximum chain length 9) and Figure 6(b) (maximum chain length 10) raises questions about whether different datasets were employed for the synthetic and real-world narratives, or is this an oversight? Clarification on these points is essential.\\n4. The paper does not adequately address how the quality of the generated narratives is ensured. How are the narratives verified to ensure they conform to the specified chain structures? What mechanisms are in place to check and assess the data quality? Furthermore, given the identified issue of parametric knowledge conflicts within the models, how does the generation process prevent the introduction of factual errors or inconsistencies into the narratives? This lack of detail raises concerns about the reliability of the experimental data.\\n5. While the paper's focus on chain causal relationships provides a starting point, the scope remains relatively narrow. Exploring more complex scenarios is crucial for a more comprehensive understanding of LLM's causal reasoning ability.\\n6. The model selection is quite limited, using only GPT-4, GPT-4o, and Claude-3.5-Sonnet for both data generation and evaluation. This raises concerns about the generalizability of the findings. Including leading open-access models, such as Llama 3.1, would strengthen the analysis and provide insights into whether the observed phenomena are specific to limited-access models or hold across a wider range of LLMs.\\n7. The paper's logical structure could be improved to enhance clarity and readability. Additionally, the writing also requires significant revision to ensure the work is accessible and understandable to the reader.\\n8. The poor quality of the figures hinders understanding of the presented work. For example, Figure 3 is blurry and difficult to interpret. \\n9. 
The referencing of both articles and figures needs substantial improvement. Several errors were noted, including incorrect article citations on lines 156-157 and incorrect figure references on lines 248, 304, and 320.\", \"questions\": \"Please refer to the 'Weaknesses' section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Withdrawal\", \"comment\": \"Hi reviewers, we would like to thank you for your commentary on our paper. We are appreciative of the time taken to provide constructive criticism, and review the contributions and potential shortcomings of the paper. In order to take the time to properly update our paper and model, we will withdraw our submission and work on addressing the concerns that have been brought up as well as work on clarifying how our paper provides a different contribution from the papers you have mentioned. Thanks so much.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Hi reviewers, we would like to thank you for your commentary on our paper. We are appreciative of the time taken to provide constructive criticism, and review the contributions and potential shortcomings of the paper. In order to take the time to properly update our paper and model, we will withdraw our submission and work on addressing the concerns that have been brought up as well as work on clarifying how our paper provides a different contribution from the papers you have mentioned. Thanks so much.\"}" ] }
9klRFLY2TT
DNABERT-S: Pioneering Species Differentiation with Species-Aware DNA Embeddings
[ "Zhihan Zhou", "Weimin Wu", "Harrison Ho", "Jiayi Wang", "Lizhen Shi", "Ramana V Davuluri", "Zhong Wang", "Han Liu" ]
We introduce DNABERT-S, a tailored genome model that develops species-aware embeddings to naturally cluster and segregate DNA sequences of different species in the embedding space. Differentiating species from genomic sequences (i.e., DNA and RNA) is vital yet challenging, since many real-world species remain uncharacterized, lacking known genomes for reference. Embedding-based methods are therefore used to differentiate species in an unsupervised manner. DNABERT-S builds upon a pre-trained genome foundation model named DNABERT-2. To encourage effective embeddings for error-prone long-read DNA sequences, we introduce Manifold Instance Mixup (MI-Mix), a contrastive objective that mixes the hidden representations of DNA sequences at randomly selected layers and trains the model to recognize and differentiate these mixed proportions at the output layer. We further enhance it with the proposed Curriculum Contrastive Learning (C2LR) strategy. Empirical results on 23 diverse datasets show DNABERT-S's effectiveness, especially in realistic label-scarce scenarios. For example, it identifies twice as many species from a mixture of unlabeled genomic sequences, doubles the Adjusted Rand Index (ARI) in species clustering, and outperforms the top baseline's 10-shot species classification performance with just 2-shot training.
[ "DNA embedding", "Species Differentiation", "Metagenomics Binning" ]
Reject
https://openreview.net/pdf?id=9klRFLY2TT
https://openreview.net/forum?id=9klRFLY2TT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zXs1O9Pa8T", "y4ejtxU8Xc", "u6roeykUNf", "pWWIXbUeqk", "gj5xkuQ2TE", "bXAuffr2Zy", "XpjJIXtTJ3", "RbYhZvEyZd", "QxTk7DIqhb", "MK6XfhgqZv", "FktZj8xuJu", "DdqcO2wEJU", "AkwJTQtKwc", "7I3KkveCqM", "4zt04tlwGP" ], "note_type": [ "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1731930243281, 1731930395684, 1737523904661, 1730253044167, 1731930736360, 1730702554507, 1731930662654, 1731930148736, 1731930795268, 1731930357626, 1731930706030, 1730686144354, 1732039289846, 1734885063618, 1731930293687 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8379/Reviewer_n3kh" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Reviewer_UzBN" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Reviewer_U4Wt" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ], [ "ICLR.cc/2025/Conference/Submission8379/Area_Chair_9Kc2" ], [ "ICLR.cc/2025/Conference/Submission8379/Authors" ] ], "structured_content_str": [ "{\"title\": \"Authors' reply (Part 2)\", \"comment\": \"## W5: Ablation study discussion\\n\\n\\nThanks for highlighting this! Based on our motivation behind the method design (discussed in reply to W1), we conduct a case study to empirically validate our intuition. 
Specifically, we collect 50 5000-bp genome sequences from 3 species: human, monkey, and a randomly selected bacteria named Salmonella enterica. We compute the embedding of each genome sequence, and achieve the species embedding by averaging the embedding of all its 50 sequences. We then compute the cosine distance between human-monkey (H-M), human-bacteria (H-B), and monkey-bacteria (M-B). We then compute the relative distance between humans and bacteria (H-B/H-M) and monkeys and bacteria (M-B/H-M). As shown in the table below, models trained with MI-Mix loss naturally segregate very dissimilar species like humans and bacteria further while keeping similar species like humans and monkeys closer. We observe the same pattern in several different bacteria. These case studies can illustrate why MI-Mix is more suitable than SimCLR for metagenomics data, besides the scores in the ablation study. \\n\\n\\n\\n\\n| | H-M | H-B | M-B | H-B/H-M | M-B/H-M |\\n| ------------------------------ | :----: | :-----: | :-----: | :------------------: | :-------------------: |\\n| W. SimCLR only | $0.0929$ | $0.7310$ | $0.7722$ | $7.87$ | $8.31$ |\\n| MI-Mix only | $0.0807$ | $0.8308$ | $0.8907$ | $\\\\bf{10.29}$ | $\\\\bf{11.04}$ |\\n| DNABERT-S (W. SimCLR + MI-Mix) | $0.0761$ | $0.7376$ | $0.7649$ | $\\\\underline{9.70}$ | $\\\\underline{10.06}$ |\\n\\n\\n\\n\\n\\n## W6: Unclear parameter justification and redundancy reduction\\n\\n\\nThanks for indicating this! We agree that supplementing the parameter selection is helpful and have included them in the revised version (`line 263-272`).\\n\\n1. **Datasets for metagenomics binning.** To mimic real-world applications, we use the raw datasets without any data filtering for the metagenomics binning experiments. \\n2. **Datasets for species classification and clustering.** We apply data filtering only to the datasets used for species classification and clustering. 
Clustering algorithms and few-shot classification are often sensitive to unbalanced data. Therefore, filtering allows us to rule out the data balance factor and fairly compare embedding quality. We chose the threshold of 100 sequences per species based on dataset statistics, which allows us to maintain a sufficient number of species while ensuring enough sequences in each species for reliable analysis. The selection of 100 sequences from each species is purely random, and we will share the code used for this process to enhance transparency.\\n\\n\\n\\n## W7: Potential data leakage\\n\\n\\nThank you for expressing this concern. In species differentiation tasks, we consider data leakage to occur when the same species are present in both training and evaluation datasets. We have performed careful validation to prevent this and provided the details in the revised version (Appendix F).\", \"our_experiments_use_two_categories_of_data\": \"CAMI2 and synthetic datasets. We constructed the synthetic data to ensure they do not include any species present in the training data. For the CAMI2 datasets, due to discrepancies in species annotations between CAMI2 and GenBank, direct validation was challenging. Therefore, we performed an alignment-based estimation using **minimap2**. We aligned each evaluation dataset to the training data and considered sequences with over $90\\\\%$ alignment to the training sequences as present in the training data.\\n\\nWe computed the presence rate of each evaluation dataset as **number of presented sequences / total number of sequences**. It's important to note that different species can share common or highly similar genome sequences, so a non-zero presence rate is expected in real-world scenarios. As a reference, the two synthetic datasets with non-overlapping species have presence rates of $6.88\\\\%$ and $8.92\\\\%$. 
For the CAMI2 datasets, the plant-associated ones have presence rates between $3.51\\\\%$ and $4.98\\\\%$, which are even lower than the synthetic datasets. The marine datasets have presence rates between $7.99\\\\%$ and $9.45\\\\%$, comparable to the synthetic ones. Based on these statistics, there is negligible species leakage between our training and evaluation data.\"}", "{\"title\": \"Authors' Reply (Part 2)\", \"comment\": \"## Q2: Performance of MI-Mix on the original I-Mix tasks\\n\\n\\n\\nThank you for your question. We conduct experiments on the CIFAR-10 dataset in the I-Mix tasks with two separate models: ResNet-18 and ResNet-50.\\n\\nFollowing a similar setting as outlined in the I-Mix tasks (Sec. 4 in [Lee23]), we undertake contrastive representation learning on a pretext dataset and assess the quality of representations through supervised classification on a downstream dataset. In all experiments, we train the model for varying epochs for contrastive learning, specifically 50 or 100. For supervised learning, we train the model for 50 epochs across all settings. We employ the N-pair [Sohn16] as the base contrastive learning method. We integrate I-Mix or MI-Mix with N-pair to compare their performances. The results are presented as follows.\", \"table_1\": \"Results for ResNet-18 on the CIFAR-10 Dataset\\n\\n| Training Epochs | N-pair | +I-Mix | +MI-Mix |\\n|----------|:----------:|:----------:|:----------:|\\n| 50 | 77.18 | 78.30 | 78.30 |\\n| 100 | 81.92 | 83.12 | 82.41 |\", \"table_2\": \"Results for ResNet-50 on the CIFAR-10 Dataset\\n\\n| Training Epochs | N-pair | +I-Mix | +MI-Mix |\\n|----------|:----------:|:----------:|:----------:|\\n| 50 | 80.63 | 81.21 | 81.21 |\\n| 100 | 85.59 | 85.78 | 85.89 |\\n\\n\\nAs shown in the table, I-Mix and MI-Mix achieve very similar performance on images. This is expected as MI-Mix is specifically motivated by the genomics context. 
I-Mix is less suitable for genomic sequences since sequences from different species may share common segments. If the embedding mixup happens at the beginning of the model, where no contextual information is involved, it becomes very challenging for the model to distinguish the source of a sequence (whether the common segment comes from species A or B). Consequently, the model can become confused when species share common segments. By mixing at an intermediate layer, the common segments incorporate contextual information, allowing for better differentiation of closely related species during model training.\\n\\nYet in vision tasks, it is very unlikely that two images will share common segments. Thus, mixing at intermediate layers (MI-Mix) does not have much benefit over mixing at the beginning (I-Mix).\\n\\n[Lee23] i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning, ICLR 2021.\\n[Sohn16] Improved Deep Metric Learning with Multi-class N-pair Loss Objective, NeurIPS 2016.\\n\\n\\nThanks a lot for your comments and suggestions. We hope our response can help address your concerns with this work. 
Please don't hesitate to share any other thoughts!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces DNABERT-S, a modified version of its precursor, DNABERT-2, that is applied to the task of differentiating DNA sequences between different species.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors perform comprehensive testing across a variety of scenarios, as well as a thorough set of ablations.\", \"While each of the separate components - Manifold Mixup, weighted SimCLR, curriculum learning - have been introduced and utilized previously, this paper appears to be a new application of these methods to genome sequence data.\", \"The model strongly outperforms baselines on metagenomics binning, species clustering, and species classification.\"], \"weaknesses\": [\"In general, there is a lack of meta-level detail regarding the models that are being compared. It would be helpful to include tables that compare the number of parameters and embedding dimensions for each model/technique used.\", \"The use of Curriculum Contrastive Learning (C$^2$LR) strategy with the Manifold Instance Mixup (MI-Mix) loss could be an impactful contribution, however not enough work is done to show the utility of these approaches, and whether they are even helpful enough in light of computational tradeoffs. 
Indeed, removing C$^2$LR appears to change the performance by a small amount (only a -1.13 to -1.17 decrease).\", \"The paper is missing comparisons to similar models that employ contrastive learning for metagenomic binning tasks.\", \"It is strange that the authors would not use the benchmarking set up in CAMI II to assess their model performance, given the built-in genome binning benchmark and comparison to SOTA tools.\"], \"questions\": [\"While the authors claim to compare DNABERT-S to the strongest existing methods, there appear to be a number of comparable approaches that they overlook, especially in metagenomics binning. Some examples include COMEBin, a SOTA binning method based on contrastive multi-view representational learning (Wang et. al, 2024) and CLMB (Zhang et. al, 2022). How does DNABERT-S compare to these baselines?\", \"In the classification tasks, what is the performance of each baseline? While it is helpful to highlight the difference between DNABERT-S and the best-baseline, please include the full table of performance on each of the datasets for each of the models tested (in the appendix), as this information is still a helpful contribution.\", \"How does Manifold Instance Mixup differ from Manifold Mixup, introduced in Verma et a. (2019)? Please clarify this in the paper.\", \"Dataset complexity, meaning the number of genomes present, and the relative abundances of those genomes, can often influence the performance of a model in metagenomics binning. How does DNABERT-S perform at metagenomics binning when these two factors are changed? For example, what happens when the number of relative abundances of different species are highly imbalanced?\", \"Given the well established, recent CAMI II challenge for metagenomic binning (that the authors reference in the paper), how does DNABERT-2 compare to the tools benchmarked during this challenge (see Meyer et. al, 2022)?\", \"Meyer, F., Fritz, A., Deng, Z. L., Koslicki, D., Lesker, T. 
R., Gurevich, A., ... & McHardy, A. C. (2022). Critical assessment of metagenome interpretation: the second round of challenges. Nature methods, 19(4), 429-440.\", \"Wang, Z., You, R., Han, H., Liu, W., Sun, F., & Zhu, S. (2024). Effective binning of metagenomic contigs using contrastive multi-view representation learning. Nature Communications, 15(1), 585.\", \"Zhang, P., Jiang, Z., Wang, Y., & Li, Y. (2022). CLMB: Deep contrastive learning for robust metagenomic binning. In International Conference on Research in Computational Molecular Biology (pp. 326-348). Cham: Springer International Publishing.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Reply (Part 3)\", \"comment\": \"## Q2: Performance of other baselines in classification.\\n\\nThanks for asking this. We agree it is also beneficial to include the results of other models in the appendix. We have organized the data and integrated the statistics in Table 14 (`page 23`) in the revised version. 
Please see below for the statistics on the first datasets of synthetic, marine, and plant-associated datasets.\\n| Plant-0 | 1 | 2 | 5 | 10 | 20 |\\n| ------------------- | :-----: | :-----: | :-----: | :-----: | :-----: |\\n| TNF | 24.01 | 32.69 | 43.39 | 48.99 | 53.29 |\\n| TNF-K | 22.83 | 30.58 | 40.55 | 45.57 | 49.58 |\\n| TNF-VAE | 20.63 | 28.8 | 39.38 | 45.96 | 51.1 |\\n| DNA2Vec | 23.98 | 31.35 | 41.46 | 47.41 | 51.96 |\\n| HyenaDNA | 28.15 | 36.97 | 48.2 | 55.24 | 60.04 |\\n| HyenaDNA w/ simclr | 43.46 | 52.12 | 59.4 | 62.8 | 66.22 |\\n| DNABERT-2 | 21.04 | 28.16 | 38.5 | 45.46 | 51.99 |\\n| DNA-Dropout | 19.05 | 24.78 | 33.12 | 38.99 | 44.3 |\\n| DNA-Double | 24.56 | 33.09 | 45.06 | 52.91 | 59.57 |\\n| DNA-Mutate | 18.16 | 24.4 | 33.58 | 40.23 | 46.09 |\\n| DNABERT-2 w/ simclr | 44.97 | 52.35 | 60.35 | 64.63 | 68.18 |\\n| DNABERT-S | 47.83 | 55.83 | 63.01 | 67.12 | 69.82 |\\n\\n\\n\\n\\n\\n| Marine-0 | 1 | 2 | 5 | 10 | 20 |\\n| ------------------- | :-----: | :-----: | :-----: | :-----: | :-----: |\\n| TNF | 27.65 | 38.81 | 52.4 | 58.86 | 62.59 |\\n| TNF-K | 25.97 | 36.47 | 49.15 | 55.44 | 59.26 |\\n| TNF-VAE | 23.72 | 34.02 | 47 | 53.88 | 58.59 |\\n| DNA2Vec | 24.56 | 34.04 | 47.79 | 55.36 | 60.11 |\\n| HyenaDNA | 23.92 | 33.94 | 47.47 | 55.5 | 61.42 |\\n| HyenaDNA w/ simclr | 43.6 | 53.7 | 62 | 65.55 | 68.19 |\\n| DNABERT-2 | 19.5 | 28.45 | 40.64 | 48.98 | 55.67 |\\n| DNA-Dropout | 15.42 | 21.47 | 30.99 | 38.05 | 44.06 |\\n| DNA-Double | 26.76 | 36.84 | 49.98 | 57.68 | 63.29 |\\n| DNA-Mutate | 15.7 | 21.74 | 31.86 | 39.32 | 45.95 |\\n| DNABERT-2 w/ simclr | 48.23 | 57.9 | 64.8 | 67.94 | 70.23 |\\n| DNABERT-S | 50.25 | 59.41 | 66.07 | 68.92 | 70.75 |\\n\\n\\n\\n| Synthetic-0 | 1 | 2 | 5 | 10 | 20 |\\n| ------------------- | :-----: | :-----: | :-----: | :-----: | :-----: |\\n| TNF | 44.07 | 56.11 | 68.69 | 75.34 | 79.54 |\\n| TNF-K | 39.06 | 50.22 | 62.52 | 68.55 | 72.82 |\\n| TNF-VAE | 34.23 | 47.06 | 61.44 | 69.31 | 75.02 |\\n| DNA2Vec | 35.85 | 46.98 | 
61.54 | 69.64 | 75.26 |\\n| HyenaDNA | 30.13 | 41.18 | 54.86 | 64.03 | 70.69 |\\n| HyenaDNA w/ simclr | 59.58 | 67.79 | 74.62 | 78.53 | 81.42 |\\n| DNABERT-2 | 24.43 | 34.81 | 48.93 | 58.58 | 65.98 |\\n| DNA-Dropout | 21.09 | 29.39 | 40.8 | 48.38 | 54.44 |\\n| DNA-Double | 34.54 | 46.54 | 59.86 | 67.44 | 73.6 |\\n| DNA-Mutate | 21.27 | 29.92 | 41.78 | 50.31 | 57.2 |\\n| DNABERT-2 w/ simclr | 72.02 | 78.63 | 84.55 | 86.99 | 88.93 |\\n| DNABERT-S | 71.36 | 77.93 | 83.37 | 85.81 | 87.77 |\\n\\n\\n\\n## Q3: How does Manifold Instance Mixup differ from Manifold Mixup\\n\\n\\nThanks for your comment. Here are some clarifications.\\n\\n1. Manifold Mixup [Verma19] is a regularization method that linearly interpolates between hidden states and labels of different data samples at a randomly selected layer. It improves deep neural network representations by training on these interpolations. This technique smoothens decision boundaries, flattens class-specific representations, and promotes less confident predictions on unseen data. \\n2. Manifold I-Mix adapts Manifold Mixup for contrastive learning by applying it to the anchor set. It assigns continuous values between 0 and 1 to indicate the similarity between sequences (for both positive and negative pairs). This approach is particularly effective for species differentiation, teaching the model nuanced similarities rather than binary classifications. For example, humans are more similar to monkeys than to viruses. We aim for the DNABERT-S embeddings not only to segregate different species but also to reflect the relative similarities among them (placing humans closer to monkeys and further from viruses). Therefore, the regression-like nature of Manifold I-Mix method makes it ideal for our problem. 
\\n\\n\\n[Verma19] Manifold Mixup: Better Representations by Interpolating Hidden States, ICML 2019\"}", "{\"summary\": \"The paper introduces DNABERT-S, a genome model focused on species-aware DNA embeddings to differentiate and cluster DNA sequences by species effectively. Building upon DNABERT-2, it incorporates two key innovations: Manifold Instance Mixup (MI-Mix) and Curriculum Contrastive Learning (C2LR). MI-Mix mixes hidden representations at random layers to enhance embedding robustness, whereas C2LR gradually presents increasingly challenging training samples to improve model generalization. Experiments across 23 datasets demonstrate DNABERT-S\\u2019s effectiveness, especially in species clustering, metagenomics binning, and few-shot classification tasks, showing that it significantly outperforms baselines, including by doubling clustering performance in Adjusted Rand Index (ARI). The model provides a robust and scalable solution to biodiversity studies and microbiome research in label-scarce environments, addressing limitations of previous genome foundation models in species differentiation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tInnovative enhancement of embedding representation for species differentiation: The paper improves the embedding representation of DNA sequences by introducing techniques such as DNA-Dropout and DNA-Double, which enable the model to better distinguish DNA sequences of different species. 
This improvement enhances the robustness of the embedding and the ability to capture the similarity of DNA structures, significantly improving the accuracy of species clustering and classification.\\n2.\\tImproving model generalization using contrastive learning: the paper\\u2019s Manifold Instance Mixup (MI-Mix) and Curriculum Contrastive Learning (C2LR) techniques gradually introduce training samples of increasing difficulty during fine-tuning, allowing the model to adapt more efficiently to species-rich metagenomic data. This approach improves the model\\u2019s generalization ability in environments with scarce labels and high data diversity and is suitable for tasks such as metagenomic binning and species classification.\\n3.\\tPracticality for metagenomic data: The model is particularly suitable for metagenomic data and biodiversity research. Through targeted fine-tuning and optimization, DNABERT-S significantly improves its performance on tasks such as species clustering and few-shot classification, providing a powerful tool for microbiomics and biodiversity research.\", \"weaknesses\": \"1.\\tLimited Novelty of Methodology: The paper employs Manifold Instance Mixup (MI-Mix) and Curriculum Contrastive Learning (C2LR), which are widely recognized in deep learning, limiting the originality of the methodology. The primary innovation lies in adapting these techniques specifically to metagenomic tasks rather than introducing novel technical advancements (see Sections 3.3 and 5.2). 
To strengthen this aspect, further evidence could clarify why these specific strategies are especially suited to metagenomics, particularly in addressing the shortcomings of traditional Mixup or contrastive learning for the specific challenges within this domain.\\n2.\\tLimited Scope of Comparative Experiments: The paper\\u2019s experimental validation is confined mainly to select metagenomic datasets, lacking a broader comparison with other current genomic models and widely used bioinformatics tools, such as database search techniques (refer to the experimental setup). Including these common baselines would provide a more comprehensive assessment of DNABERT-S\\u2019s effectiveness, highlighting the model\\u2019s practical applicability across diverse tasks and data types.\\n3.\\tInsufficient Visual Detail in Figures: In Figure 1, the current marker size obscures certain details, making it difficult to interpret the clustering and classification patterns. Adjusting the marker size could improve visibility, enhancing the visualization of data distribution across different methods.\\n4.\\tFigure Layout Issues: Figure 4\\u2019s layout partially overlaps with the text, which detracts from the paper\\u2019s readability and professionalism. Adjusting the figure\\u2019s placement could ensure proper spacing and clear separation between text and visuals.\\n5.\\tAblation Study Lacks Detailed Discussion: While the ablation study indicates a substantial improvement when combining W. SimCLR and MI-Mix, the analysis does not sufficiently explore the mechanisms behind this synergy (see Section 5.3). 
A more detailed discussion, possibly with illustrative examples, would elucidate why the combined approach enhances data representation, providing stronger support for the method\\u2019s efficacy.\\n6.\\tUnclear Parameter Justification and Redundancy Reduction: The paper references data filtering criteria, such as selecting only species with at least 100 sequences for classification, but lacks a detailed rationale for these choices. Additionally, the redundancy reduction steps are not thoroughly explained, which could influence the results\\u2019 transparency and reliability. Supplementing the parameter selection with explicit reasoning in the data preprocessing steps would enhance the methodological rigor.\\n7.\\tPotential Data Leakage in Pre-Training: Given that GenBank serves as a substantial training source, there is a potential risk of overlap between the training and testing datasets. The study does not confirm whether this overlap was checked, which raises concerns about possible data leakage (see Section 5.1). Verifying this aspect and addressing any overlapping data would strengthen the reliability of the results.\", \"questions\": \"1.\\tThe paper lacks detailed information on hyperparameter selection and the tuning process, particularly regarding how these choices impact overall performance. Could the authors provide further details to clarify the influence of these selections on model stability and performance?\\n2.\\tWhat are the specific advantages of DNABERT-S over existing DNA classification models? A more detailed explanation would help elucidate the model\\u2019s unique contributions to the field.\\n3.\\tCould this method be extended to other biological sequences (e.g., RNA or protein sequences)? 
If so, what adjustments would be necessary to adapt to these cases?\\n4.\\tIt would be helpful if the authors could further explain the measures taken to ensure fair comparisons in their experiments, including steps to prevent data leakage and whether model scales were controlled. While the downstream tasks performed well compared to multiple baselines, training/validation/testing based on sequence identity was not conducted, which could pose a risk of data leakage in this setup.\\n5.\\tOne of the paper\\u2019s focuses is on exploring different embedding methods for DNA sequence classification, using a variety of pre-trained models to enhance classification performance. However, a detailed comparison of time and memory consumption across different embedding methods is missing, especially regarding:\\n(a)\\tTime and Memory Consumption of Embedding Methods: Could the authors clarify the computational time and memory usage differences between embedding methods during the training and inference stages?\\n(b)\\tResource Analysis of Different Pre-Trained Models: How do time and memory consumption vary across different pre-trained models, and which models are most advantageous for specific tasks? A more detailed analysis could aid in model selection and optimization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' Reply (Part 1)\", \"comment\": \"Thank you very much for your insightful review and suggestions!\\n\\n## W1: Number of parameters and embedding dimensions for each model/technique used in the comparison\\n\\n\\n\\nThanks for pointing this out. We acknowledge that the number of parameters and embedding dimensions are important parts of the comparison. \\n\\nWe have included a table in the latest revised version that compares the number of parameters (million), embedding dimension, inference time (seconds), and inference memory (MB) in Appendix E. 
The symbol \\\"-\\\" denotes that the inference time or memory is not comparable with the LLM-based models.\\n\\n| Model | Num. Params (M) | Embedding Dim. | Inference Time (Sec.) | Inference Memory (MB) |\\n|----------|:----------:|:----------:|:----------:|:----------:|\\n| TNF | 0 | 256 | \\u2014 | \\u2014 |\\n| TNF-K | 0.026 | 103 | \\u2014 | \\u2014 |\\n| TNF-VAE | 3 | 768 | \\u2014 | \\u2014 |\\n| DNA2Vec | 0.026 | 100 | \\u2014 | \\u2014 |\\n| HyenaDNA | 28.2 | 256 | 11.16 | 995 |\\n| NT-v2 | 97.9 | 512 | 19.16 | 1273 |\\n| DNABERT-2 | 117 | 768 | 14.27 | 3991 |\\n| DNA-Dropout | 117 | 768 | 14.27 | 3991 |\\n| DNA-Double | 117 | 768 | 14.27 | 3991 |\\n| DNA-Mutate | 117 | 768 | 14.27 | 3991 |\\n| DNABERT-S | 117 | 768 | 14.27 | 3991 |\\n\\n## W2: Utility of $\\\\text{C}^{2}$LR and MI-Mix & Computational tradeoffs\\n\\nThanks for the suggestions. \\n\\n\\n(i) There is no extra computational cost when incorporating C$^2$LR since the computation costs of MI-Mix and Weighted SimCLR are nearly the same. For DNABERT-S, we train it with SimCLR for 1 epoch and MI-Mix for 2 epochs. For variants without curriculum learning, we train them for 3 epochs with the same loss function.\\n\\n(ii) Intuition for our proposed curriculum contrastive learning ($\\\\text{C}^{2}$LR) and MI-Mix.\\n> Weighted SimCLR teaches the model whether two sequences are similar or not (a binary outcome of 0 or 1), whereas MI-Mix teaches the model how similar two sequences are, assigning a continuous value between 0 and 1. \\n\\n> MI-Mix method is especially suitable for species differentiation because species similarities are more nuanced than a binary classification. For example, humans are more similar to monkeys than to viruses. We aim for the DNABERT-S embeddings not only to segregate different species but also to reflect the relative similarities among them (placing humans closer to monkeys and further from viruses). 
Therefore, the regression-like nature of MI-Mix method makes it ideal for our problem. \\n\\n> However, predicting these finer-grained similarities is challenging, which is why we introduce **Curriculum Contrastive Learning**. We view Weighted SimCLR as a warm-up phase in model training, where the model first learns to segregate different species, and then we fine-tune it to adjust the distances in a more fine-grained manner.\\n\\n(iii) Further cases study on the benefit of MI-Mix in genomics data.\\n\\n\\n> Based on our motivation behind the method design, we conduct a case study to empirically validate our intuition. Specifically, we collect 50 5000-bp genome sequences from 3 species: human, monkey, and a randomly selected bacteria named Salmonella enterica. We compute the embedding of each genome sequence, and achieve the species embedding by averaging the embedding of all its 50 sequences. We then compute the cosine distance between human-monkey (H-M), human-bacteria (H-B), and monkey-bacteria (M-B). We then compute the relative distance between humans and bacteria (H-B/H-M) and monkeys and bacteria (M-B/H-M). As shown in the table below, models trained with MI-Mix loss naturally segregate very dissimilar species like humans and bacteria further while keeping similar species like humans and monkeys closer. We observe the same pattern in several different bacteria. These case studies can be an illustration of why MI-Mix is more suitable than SimCLR for metagenomics data, besides the scores in the ablation study. We will conduct more comprehensive case studies and discuss them in the revised version. Thanks for this suggestion.\\n\\n\\n\\n\\n| | H-M | H-B | M-B | H-B/H-M | M-B/H-M |\\n| ------------------------------ | :----: | :-----: | :-----: | :------------------: | :-------------------: |\\n| W. 
SimCLR only | $0.0929$ | $0.7310$ | $0.7722$ | $7.87$ | $8.31$ |\\n| MI-Mix only | $0.0807$ | $0.8308$ | $0.8907$ | $\\\\bf{10.29}$ | $\\\\bf{11.04}$ |\\n| DNABERT-S (W. SimCLR + MI-Mix) | $0.0761$ | $0.7376$ | $0.7649$ | $\\\\underline{9.70}$ | $\\\\underline{10.06}$ |\"}", "{\"title\": \"Authors' reply (Part 1)\", \"comment\": \"Thank you very much for your detailed and insightful review! Your comments and suggestions are very helpful in improving the quality of our manuscript.\\n\\n## W1: Novelty\\n\\n\\n\\nThank you for sharing your perspective on this matter. We agree that providing further illustration behind our method design is important.\\n\\n1. **SimCLR or I-Mix?** In summary, SimCLR teaches the model whether two sequences are similar or not (a binary outcome of 0 or 1), whereas I-Mix teaches the model how similar two sequences are, assigning a continuous value between 0 and 1. I-Mix methods are especially suitable for species differentiation because species similarities are more nuanced than a binary classification. For example, humans are more similar to monkeys than to bacteria. We aim for the DNABERT-S embeddings not only to segregate different species but also to reflect the relative similarities among them (placing humans closer to monkeys and further from bacteria). Therefore, the regression-like nature of the I-Mix method makes it ideal for our problem. However, predicting these finer-grained similarities is challenging, which is why we introduce **Curriculum Contrastive Learning**. We view SimCLR as a warm-up phase in model training, where the model first learns to segregate different species, and then we fine-tune it to adjust the distances in a more fine-grained manner. In our reply to your W5, we provide a case study to empirically validate this.\\n2. **MI-Mix vs. I-Mix.** I-Mix is less suitable for genomic sequences since sequences from different species may share common segments. 
If the embedding mixup happens at the beginning of the model, where no contextual information is involved, it becomes very challenging for the model to distinguish the source of a sequence (whether the common segment comes from species A or B). Consequently, the model can become confused when species share common segments. By mixing at an intermediate layer, the common segments incorporate contextual information, allowing for better differentiation of closely related species during model training.\\n\\nBesides the above methodology, our novelty also lies in data construction and benchmark design.\\n\\n- **Data Construction.** Unlike well-explored areas like NLP and CV, data construction for DNA representation is non-standard with respect to data source, data augmentation, sequence length, and preprocessing. With inappropriate data construction, the trained model is likely to underperform textual features like TNF, as illustrated in Table 1 and Figures 7\\u20139. We demonstrate the viability of learning species-awareness by treating non-overlapping segments of the same species as positive pairs and analyze the effectiveness of different positive pair construction strategies for DNA sequences.\\n- **Benchmark Availability.** There is a lack of standard datasets and evaluation strategies for this problem. DNA datasets are diverse in many aspects, such as being balanced or raw, containing seen or unknown species, being data-scarce or abundant, and consisting of reference or long-read sequences. We have therefore compiled and published a benchmark and evaluation pipeline after iterative refinement to address these challenges.\\n\\n\\n\\n## W2: Scope of experiments\\n\\n\\nThank you for highlighting this issue.\\n\\n\\nWe have conducted the experiments, but due to space limitations, we did not include all of them in the main text. We have highlighted them in the revised version (`line 273-282`). Specifically:\\n\\n1. 
We have compared our model with most of the widely-used and state-of-the-art genomics models, including Nucleotide Transformer v2, DNABERT-2, and HyenaDNA (as shown in Table 1, Figures 3 and 4).\\n2. In Appendix C.4, we compare DNABERT-S with MMSeqs2, one of the state-of-the-art database search methods. We show that DNABERT-S achieves slightly better performance than MMSeqs2 with fewer labeled data, indicating the potential of embedding-based methods in taxonomy classification.\\n3. In Appendix C.8, we also compare DNABERT-S with DNABERT-2 on several genome function prediction tasks and show that species-aware training does not significantly improve genome function predictions.\\n\\n\\n\\n## W3: Insufficient visual detail\\n\\n\\nThank you for your suggestion. Our aim with this figure is to provide a global view showing that DNABERT-S is able to segregate species into separate clusters. We agree that the marker size was not optimal, and we have adjusted the figure accordingly to improve visual clarity and enhance the visualization of data distribution in the revised version.\\n\\n\\n\\n\\n\\n## W4: Figure layout\\n\\n\\n\\n\\nThank you for bringing this to our attention. We apologize for the oversight. We have adjusted the figure's placement in the revised version.\"}", "{\"title\": \"Authors' Reply (Part 4)\", \"comment\": \"## Q4: Performance in unbalanced data\\n\\n\\n\\nWe agree that the relative abundances of those genomes are very important to the model's performance. So we choose the raw samples from CAMI2 as our evaluation data, which already contain largely unbalanced data. 
We use the 0/25/50/75/100 percentiles of the number of sequences in each species as the data balance statistics and present them in the table below.\\n\\n\\n\\n| | Plant-5 | Plant-6 | Marine-5 | Marine-6 |\\n| ---- | :-------: | :-------: | :--------: | :--------: |\\n| 0 | 10 | 10 | 10 | 10 |\\n| 25 | 66 | 30 | 115 | 114 |\\n| 50 | 190 | 116 | 201 | 223 |\\n| 75 | 450 | 413 | 345 | 357 |\\n| 100 | 4293 | 4599 | 841 | 915 |\\n\\n\\nAs the data is already very unbalanced, and the robustness of clustering results largely depends on the clustering algorithm, we validate the model's robustness to data balancing with K-means clustering and consider 10 datasets used in our clustering & classification evaluation.\\n\\nFor each dataset, we keep 100 species and evaluate species clustering with 3 cases.\\n- Case 1: Balanced. We keep 100 sequences in each species.\\n- Case 2: Less balanced. We keep 100 sequences in the first 10 species, 90 sequences in the next 10 species, 80 sequences in the next 10, ...., 10 sequences in the last 10.\\n- Case 3: Very unbalanced. We keep 100 in the first species, 99 in the second species, ... , 1 in the last species\\n\\n\\nWe then set K=100 for K-means and use the Adjusted Rand Index (ARI) as the clustering metric. We ran each experiment with 5 random seeds and report the mean and std of the runs. 
As shown in the Table, DNABERT-S demonstrates relatively robust performance as the data goes from purely balanced to largely unbalanced.\\n\\n\\n\\n\\n| | Case 1 | Case 2 | Case 3 |\\n| -------- | :----------: | :----------: | :----------: |\\n| Plant-0 | 52.05\\u00b11.02 | 53.44\\u00b10.99 | 53.58\\u00b11.68 |\\n| Plant-1 | 51.49\\u00b10.82 | 49.38\\u00b11.01 | 49.75\\u00b11.15 |\\n| Plant-2 | 50.94\\u00b10.98 | 53.50\\u00b11.98 | 53.13\\u00b12.16 |\\n| Plant-3 | 55.74\\u00b11.04 | 52.29\\u00b11.11 | 51.19\\u00b11.45 |\\n| Plant-4 | 55.21\\u00b11.49 | 55.39\\u00b11.40 | 55.84\\u00b11.04 |\\n| Marine-0 | 46.71\\u00b10.56 | 39.87\\u00b10.45 | 39.57\\u00b10.83 |\\n| Marine-1 | 44.95\\u00b11.92 | 37.77\\u00b10.56 | 36.93\\u00b10.70 |\\n| Marine-2 | 45.73\\u00b10.87 | 40.13\\u00b10.34 | 39.81\\u00b11.13 |\\n| Marine-3 | 37.90\\u00b11.29 | 30.39\\u00b10.93 | 28.91\\u00b11.20 |\\n| Marine-4 | 47.63\\u00b11.31 | 39.28\\u00b10.97 | 38.18\\u00b11.58 |\\n\\n\\n\\n\\n\\n\\nWe really appreciate your reviews and suggestions. We hope our reply can solve your concerns. Please don't hesitate to share any other thoughts.\"}", "{\"title\": \"Authors' Reply (Part 1)\", \"comment\": \"Thank you very much for your detailed and insightful review!\\n\\n## W1: Intuition for Curriculum Contrastive Learning\\n\\n\\nThank you for pointing this out. MI-Mix indeed performs very well on its own. The reason we choose to use curriculum contrastive learning in this work is that $\\\\text{C}^{2}$LR does not involve any extra computational costs or engineering efforts, while leading to slightly better performance. As a foundation model for species differentiation, we aim to get the best possible results. Nevertheless, we agree that using MI-Mix alone is a good choice for conceptual simplicity in genome representation learning. We will further discuss this in the revised version.\\n\\nThe intuitions behind the proposed curriculum contrastive learning ($\\\\text{C}^{2}$LR) and MI-Mix are:\\n\\n1. 
Weighted SimCLR teaches the model whether two sequences are similar or not (a binary outcome of 0 or 1), whereas MI-Mix teaches the model how similar two sequences are, assigning a continuous value between 0 and 1. \\n2. The MI-Mix method is especially suitable for species differentiation because species similarities are more nuanced than a binary classification. For example, humans are more similar to monkeys than to viruses. We aim for the DNABERT-S embeddings not only to segregate different species but also to reflect the relative similarities among them (placing humans closer to monkeys and further from viruses). Therefore, the regression-like nature of the MI-Mix method makes it ideal for our problem. \\n3. However, predicting these finer-grained similarities is challenging, which is why we introduce **Curriculum Contrastive Learning**. We view Weighted SimCLR as a warm-up phase in model training, where the model first learns to segregate different species, and then we fine-tune it to adjust the distances in a more fine-grained manner.\\n\\n\\n## W2: Baselines with species differentiation training\\n\\n\\n\\nThanks for indicating this. Yes, only DNABERT-S has gone through species differentiation training among all the models in previous Table 1. We have results of other models with species differentiation training, such as HyenaDNA trained with our proposed data construction and Weighted SimCLR. We have included them in Table 1 in the revised version to better reflect this. \\n\\n\\n## W3: Discussion on original mixup\\n\\n\\n\\nThanks for your suggestions.\\n1. Manifold Mixup [Verma19] is a regularization method that linearly interpolates between hidden states and labels of different data samples at a randomly selected layer. It improves deep neural network representations by training on these interpolations. This technique smoothens decision boundaries, flattens class-specific representations, and promotes less confident predictions on unseen data. \\n2. 
Manifold I-Mix adapts Manifold Mixup for contrastive learning by applying it to the anchor set. It assigns continuous values between 0 and 1 to indicate the similarity between sequences (for both positive and negative pairs). This approach is particularly effective for species differentiation, teaching the model nuanced similarities rather than binary classifications. For example, humans are more similar to monkeys than to viruses. We aim for the DNABERT-S embeddings not only to segregate different species but also to reflect the relative similarities among them (placing humans closer to monkeys and further from viruses). Therefore, the regression-like nature of the Manifold I-Mix method makes it ideal for our problem. \\n\\n\\n[Verma19] Manifold Mixup: Better Representations by Interpolating Hidden States, ICML 2019\\n\\n## W4: Typo in Figure 4's layout\\n\\n\\n\\nThank you for bringing this to our attention. We apologize for the oversight. We have modified it in the revised version to optimize the layout.\\n\\n\\n\\n## Q1: Any baseline model see the same labels\\n\\n\\n\\nThanks for your question. \\n\\n1. In previous Table 1, none of the listed baselines utilize species differentiation labels, as we consider only existing DNA embedding methods for baselines. \\n2. Several of our models have undergone species differentiation training, including HyenaDNA with Weighted SimCLR (Sec. 5.6) and various DNABERT-S variants trained with different loss functions (Sec. 5.5). We primarily view these models as ablation studies focusing on the base model and training loss functions. 
As discussed in W2, we have also included HyenaDNA with Weighted SimCLR as a baseline in Table 1 in the revised version.\"}", "{\"title\": \"Authors' Reply (Part 2)\", \"comment\": \"## W3 & Q1: Missing comparison with similar models like COMEBin\\n\\n\\nWe appreciate this concern and would like to clarify that DNABERT-S serves a distinct purpose from complete metagenomics binning methods, making direct comparisons inappropriate.\\nModern metagenomics binning methods (e.g., MetaBat2 [Kang19] and COMEBin [Wang24]) typically follow a five-step pipeline utilizing three data types:\\n\\n1. Generate DNA embeddings from sequences (**contigs**)\\n2. Extract abundance information (**alignment files**)\\n3. Combine DNA embeddings and abundance information to generate final embeddings\\n4. Perform clustering\\n5. Refine results using external features (**e.g., length, single-copy gene markers**)\\n\\nDNABERT-S specifically targets step 1, rather than end-to-end metagenomics binning. To evaluate its effectiveness, we implemented step 4 using the straightforward approach detailed in Algorithm 1, comparing DNABERT-S against other methods for step 1 only. We deliberately omitted steps 2, 3, and 5 to isolate and assess the impact of DNA embeddings alone.\\n\\nConsequently, compared to complete binning methods like COMEBin, our approach uses only one-third of the available information (sequences only, without alignments or external features) and employs a simplified clustering algorithm. DNABERT-S is designed to enhance existing binning methods by replacing their step 1 (TNF), which explains our focus on TNF as the primary baseline. 
As discussed in our response to W4, despite using significantly less information and a simpler clustering approach, we achieve comparable binning performance to SOTA methods on CAMI2, demonstrating DNABERT-S's effectiveness.\\n\\n\\n\\n[Kang19] MetaBAT 2: an adaptive binning algorithm for robust and efficient genome reconstruction from metagenome assemblies, PeerJ 7, 2019\\n\\n[Wang24] Effective binning of metagenomic contigs using contrastive multi-view representation learning, Nature Communications, 2024\\n\\n\\n## W4 & Q5: Compare with tools directly on CAMI2 benchmark\\n\\n\\nWe appreciate this suggestion. As explained above, our implementation utilizes only a subset of available information and omits three steps from the standard pipeline, making direct comparisons with existing binners not entirely appropriate.\\n\\nNevertheless, our results are competitive with existing methods. We have compared our approach against official results from SOTA models on the CAMI2 benchmark, including a baseline implementation using TNF (the default DNA embedding method in most metagenomics binners).\\n\\n\\n\\nResults for the 'Plant 5' and 'Plant 6' datasets.\\n\\n| Plant-Associated | Completeness | Purity | F1 |\\n| ------------------------ | :------------: | :--------: | :--------: |\\n| MetaBat 2 | 14.3 | 89 | 24.6 |\\n| MetaBinner | 15.8 | 66.8 | 25.6 |\\n| CONCOCT | 16.2 | 69.3 | 26.3 |\\n| Vamb | 0.1 | 100 | 0.2 |\\n| MaxBin | 20.5 | 81.3 | 32.8 |\\n| **TNF(Plant-5)** | **17.6** | **33.5** | **16.4** |\\n| **DNABERT-S (Plant-5)** | **27.1** | **55.5** | **33.3** |\\n| **TNF(Plant-6)** | **16.2** | **29.8** | **17.1** |\\n| **DNABERT-S (Plant-6)** | **22.3** | **46.6** | **30.6** |\\n\\n\\n\\n\\n\\nResults for the 'Marine 5' and 'Marine 6' datasets.\\n\\n| Marine | Completeness | Purity | F1 |\\n| ------------------------ | :------------: | :--------: | :--------: |\\n| MetaBat 2 | 19 | 87.9 | 31.2 |\\n| MetaBinner | 23 | 69.4 | 34.5 |\\n| CONCOCT | 24.7 | 80 | 37.8 
|\\n| Vamb | 0.8 | 99.9 | 1.5 |\\n| MaxBin | 20.6 | 68.6 | 31.7 |\\n| **TNF (Marine-5)** | **17.9** | **42.4** | **21.0** |\\n| **DNABERT-S (Marine-5)** | **22.8** | **51.8** | **28.9** |\\n| **TNF (Marine-6)** | **17.9** | **41.9** | **21.0** |\\n| **DNABERT-S (Marine-6)** | **21.7** | **50.6** | **27.4** |\", \"our_comparisons_yield_three_key_insights\": \"1. DNABERT-S largely outperforms TNF, demonstrating its superiority as a DNA embedding method\\n2. The performance gap between TNF and complete binning solutions highlights the importance of integrating additional features (e.g., abundance information)\\n3. Despite our simplified implementation using limited information, we achieve comparable performance to comprehensive binning methods\\n\\nWhile integrating DNABERT-S embeddings into existing metagenomics binners (replacing TNF) would require substantial engineering effort, our results suggest this would be a promising direction for future work.\"}", "{\"summary\": \"This paper finetunes the DNABERT-2 model to generate species-aware embeddings from genomic sequences.\\nCurrent genome foundation models (such as DNABERT-2) are trained on language-modelling training tasks but do not develop discriminative embeddings.\\nThe authors leverage genome species datasets and contrastive methods to learn embeddings that perform better on both unsupervised and supervised downstream tasks in species differentiation.\\n\\nThey develop a training scheme they name C^2LR for Curriculum Contrastive Learning.\\nIn C^2LR the training of the model is in 2 phases:\", \"phase_1\": \"First, a weighted version of SimCLR is used to encourage embeddings from the same species to be near each other. Weighted SimCLR is SimCLR but with higher weights for negative samples closer to the anchor.\", \"phase_2\": \"Next, they introduce and use a contrastive loss called Manifold Instance Mixup. 
This is a more challenging task where they mix hidden states in a random layer and predict the proportion of the mix at the output.\\n\\nThey create and share an evaluation benchmark and perform extensive evaluations of the resulting embeddings.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This is a well written paper with a clear goal and solid choices in methods.\\n\\nThe main novel contribution is the Manifold Instance Mixup method (MI-Mix). They take previous work (i-Mix) which mixes inputs in the batch, such as images, to create examples. They realize that there are no good ways to mix (blend) DNA sequences at the input and instead apply the i-Mix methodology to the hidden states at a random layer of the network.\\n\\nThey perform extensive evaluations with baselines from VAE and transformer competitors and perform ablation studies.\\n\\nThe embeddings are clearly beneficial in downstream tasks and useful to the community.\\n\\nThey also create and share a benchmark dataset.\", \"weaknesses\": \"In table 2, it is clear that MI-Mix performs very well on its own. The paper would be much simpler and just as convincing re. performance if it focused on MI-Mix and dropped the curriculum and weighted SimCLR. Of course, getting the best result for a foundation model is also important.\\n\\nIn Table 1 there should also be at least one baseline for a model that has also gone through some kind of species differentiation training or finetuning. As noted in the text, baseline models are unlearnable or trained on generic language modelling objectives. 
As far as I can tell only DNABERT-S has had the luxury of using species labels in its training.\\n\\nThere could be more discussion about the original Manifold Mixup method (which partly inspired i-Mix), and how it relates to the new Manifold Instance Mixup method.\", \"typos\": \":\\n\\nline 435 text overlaps with fig 4\", \"questions\": \"Do any of the other baseline models see the same or similar labels to those used in the contrastive training?\\n\\nHow well does MI-Mix perform on other modalities - such as the original i-Mix task?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Included additional results in Table 1: the results for fine-tuning HyenaDNA and DNABERT-2 using the Weighted SimCLR loss for 3 epochs with the same training dataset used for DNABERT-S. [Reviewer `U4Wt`]\\n\\n**Minor revisions include:**\\n1. Adjusted Figure 4's placement. [Reviewers `UzBN` and `U4Wt`]\\n2. Improved the visual clarity of Figure 1. [Reviewer `UzBN`]\\n3. Included parameter justification and redundancy reduction (line `263-272`). [Reviewer `UzBN`]\"}", "{\"metareview\": \"The authors introduce techniques like DNA-Dropout and DNA-Double to improve embedding representations, enhancing the model's ability to distinguish between species. The paper utilizes contrastive learning techniques, such as MI-Mix and C2LR, to gradually introduce increasingly difficult training samples, improving the model's generalization ability, especially with limited labeled data. The paper was praised for its clear presentation and comprehensive testing across various scenarios.\\n\\nHowever, the reviewers also felt that the techniques employed, like MI-Mix and C2LR, are not entirely novel and are adapted from existing deep learning methods, limiting the originality of the methodological contribution. The experimental validation focuses mainly on metagenomic datasets, lacking a broader comparison with other current genomic models and widely used bioinformatics tools. \\u00a0 \\nThe ablation study, while showing improvement with combined methods, lacks a detailed exploration of the mechanisms behind the observed synergy. The paper lacked detailed explanations for parameter choices and data preprocessing steps, raising concerns about transparency and potential data leakage from the training data. 
\\u00a0 \\n\\nFor these reasons, overall, the reviewers felt the paper is slightly below the acceptance threshold in its current state.\", \"additional_comments_on_reviewer_discussion\": [\"The authors clarified that while they adapt existing deep learning techniques like MI-Mix and C2LR, their primary innovation lies in applying these methods to the specific challenges of metagenomic tasks and addressing the shortcomings of traditional approaches in this domain.\", \"They also emphasized their novelty in data construction and benchmark design, highlighting the non-standard nature of data preparation for DNA representation and the lack of standard datasets and evaluation strategies for this problem.\", \"The authors acknowledged the reviewers' concern and clarified that they conducted experiments with a wider range of models and methods, but did not include all of them in the main text due to space limitations.\", \"The authors included additional details in the revised version to clarify their parameter selection and data preprocessing steps.\"]}", "{\"title\": \"Authors' Reply (Part 3)\", \"comment\": \"## Q1: Hyperparameter selection\\n\\n\\nThank you for your question regarding hyperparameter selection. \\n\\n1. In our problem, we consider the most important hyperparameters to be sequence length and hidden dimension size, as they are directly related to real-world applications and have a significant impact on performance. In Appendices C.6 and C.7, we present the impact of these two parameters.\\n2. For batch size, we set it to the maximum value possible with BF16 precision on 80GB GPUs, as larger batch sizes generally benefit contrastive learning. Due to the high memory cost of handling long input sequences, we are limited in batch size. Preliminary experiments suggest that our base model, DNABERT-2 [Zhou23], is robust to most hyperparameters, including learning rate, weight decay, and dropout. 
Therefore, we use the same values suggested in the DNABERT-2 paper.\\n\\n[Zhou23] DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes, ICLR 2023\\n\\n\\n\\n## Q2: Advantage over classification model\\n\\n\\nThanks for your question. DNABERT-S's most significant advantage over existing DNA classification models is its generalizability to unseen species, which is one of the main motivations behind this work. Traditional classification models are only applicable to the species they are trained on, limiting their utility in real metagenomics research where a large portion of observed sequences belong to unknown species. DNABERT-S, on the other hand, generates embeddings that naturally cluster and segregate sequences from different species, making it applicable to any given DNA sequence regardless of prior knowledge about the species.\\n\\n\\n\\n\\n\\n## Q3: Extend to other biological sequences\\n\\n\\n\\nThanks for your question. Yes, our method can be readily extended to other biological sequences, such as RNA or protein sequences. The only requirements are:\\n\\n1. A deep learning model that generates an embedding for the input biological sequence\\n2. A dataset containing pairs of *similar* sequences. The similarity can be defined in any desired way (e.g., species, function, and structure). \\n\\n\\n\\n## Q4: Fair comparison\\n\\n\\n\\nThanks for your question. Please refer to our response to W7 for the measures we have taken to prevent data leakage and ensure fair comparisons in our experiments. Regarding model scales, we have tried to reduce their effect by selecting the appropriate version of baseline models with a similar number of parameters. DNABERT-S contains 117 million parameters. Among the deep learning baselines, variants of DNABERT-S (including DNA-Dropout, DNA-Double, DNA-Mutate, and DNABERT-2) have the same number of parameters. 
For the Nucleotide Transformer, we chose the 97.9 million parameter version to maintain consistency in model size across comparisons.\\n\\n\\n\\n## Q5: Time / Memory / Resources\\n\\n\\n\\nThank you for pointing out the importance of analyzing time and memory consumption. We agree that including this information is helpful, and we have added it to the revised version (Appendix E).\\n\\n1. **Time and memory usage of inference.**\\n\\nSince time and memory usage are primarily impacted by the base model, we compare DNABERT-2, HyenaDNA, and Nucleotide Transformer using sequences of 10,000 base pairs and BF16 precision. The memory usage is measured with a batch size of 1, and time is computed using the largest possible batch size when encoding 512 sequences. As shown below, the models demonstrate similar inference speeds, while DNABERT-2 uses more memory than the Nucleotide Transformer and HyenaDNA.\\n\\n| | DNABERT-2 | Nucleotide Transformer | HyenaDNA |\\n| ----------- | :---------: | :----------------------: | :--------: |\\n| Time (Secs) | 14.27 | 19.16 | 11.16 |\\n| Memory (MB) | 3991 | 1273 | 995 |\\n\\n\\n\\n2. **Resources for training different models**\\n\\nComparing training costs directly is challenging because DNABERT-S is trained upon the pre-trained DNABERT-2, and different genomics models are trained with different datasets. However, for reference, pre-training DNABERT-2 takes approximately 3 days on 8 A100 80GB GPUs, while training DNABERT-S takes around 2 days using the same resources.\\n\\n\\n\\n3. **Model selection**\\n\\nBased on our empirical study, DNABERT-S is most advantageous for tasks where species differentiation is important. For single-species tasks, such as promoter prediction on the human genome, genome foundation models like DNABERT-2 and Nucleotide Transformer are preferred choices. 
For tasks involving extra-long sequences (e.g., 1 million base pairs), HyenaDNA offers advantages due to its ability to handle longer sequences efficiently.\\n\\n\\nThanks again for all the suggestions. We hope our reply can solve your concerns. Please don't hesitate to share any other thoughts!\"}" ] }
9kR4MREN9E
Adversarial Attacks on Fine-tuned LLMs
[ "Jingwen Ye", "Xinchao Wang" ]
Large Language Models (LLMs) have greatly advanced the field of General Artificial Intelligence, yet their security vulnerabilities remain a pressing issue, particularly in fine-tuned models. Adversarial attacks in black-box settings—where model details and training data are obscured—are an emerging area of research, posing a substantial threat to private models' integrity. In this work, we uncover a new attack vector: adversaries can exploit the similarities between open-source LLMs and fine-tuned private models to transfer adversarial examples. We introduce a novel attack strategy that generates adversarial examples on open-source models and fine-tunes them to target private, black-box models. Our experiments show that these attacks achieve success rates comparable to white-box attacks, even when private models have been trained on proprietary data. Furthermore, our approach demonstrates strong transferability to other models, including LLaMA3 and ChatGPT. These findings highlight the urgent need for more robust defenses when fine-tuning open-source LLMs.
[ "Adversarial Attacks", "Large Language Models" ]
https://openreview.net/pdf?id=9kR4MREN9E
https://openreview.net/forum?id=9kR4MREN9E
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zTg8vIGq7S", "xMTxT9ca8w", "VRYCxBvIIs", "Fjj9q2Vajm", "Df7Nvvbr7T" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732509967172, 1730755455107, 1730614874367, 1730672622164, 1730510135100 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1587/Authors" ], [ "ICLR.cc/2025/Conference/Submission1587/Reviewer_4Drs" ], [ "ICLR.cc/2025/Conference/Submission1587/Reviewer_F7tU" ], [ "ICLR.cc/2025/Conference/Submission1587/Reviewer_6FHU" ], [ "ICLR.cc/2025/Conference/Submission1587/Reviewer_MiGQ" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes a method to attack private, black-box LLMs that have been finetuned using open-source models on proprietary data. Black-box attacks are a well studied problem in literature. The only additional relaxation in this case is that the base model is known completely, and can be exploited to improve attack success rates on finetuned black-box models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper addresses an important problem, with a new approach.\", \"weaknesses\": \"1. The threat model makes omissions regarding access to the target model; namely, the probability distributions of its outputs.\\n2. It is not clear what the \\\"local data pairs\\\" refer to in line 230. This makes it very hard to appreciate the optimization objective in eq (6). There is no justification given as to why aligning the model on arbitrary data pairs, would in any way be sufficient to find an adversarial suffix for the malicious query. \\n3. The justification for the approximation in eq.(5) does not hold for the chosen finetuned/base model pair. Vicuna is a finetuned version of Llama-2 that utilizes FSDP and flash attention to reduce memory overhead. 
It does not use a PEFT method that satisfies the assumption that most parameters of the base model are frozen (https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py). This cannot be used to explain the improvement in performance. This assumption also might not hold in instances where the finetuned model also gets extra alignment training, so it must be tested with more recent and robust finetuned/base model pairs. In this case, Vicuna is well known to be badly aligned (as can be seen from the results of the paper).\\n4. Vague comparisons in Table 2: In this comparison, the proposed method is optimizing over the original model, aside from their approximation of the target model. GCG only has access to the target model (its performance improves on this benchmark when optimized with other models: Pg. 13 https://arxiv.org/pdf/2307.15043) and it is not stated how many iterations PAIR runs compared to the other methods.\\n5. The authors haven't clearly explained the drop in target ASR of their attack for Vicuna->Llama as compared to the original model, which is not the case for the other settings in Table 1. \\n6. Limited set of models used in experiments. There are several recent and more robust finetuned/base model pairs available. \\n7. The judge used in the experiments is not appropriate. There can be many false positives, where the model is actually refusing to answer, but the refusal-string matching does not catch it. For example, Vicuna can be very easily overfitted to the target string and not have anything after it.\", \"questions\": \"1. Could the authors provide more justification and intuition behind their approach?\\n2. Could the authors justify the relaxations made in the setup?\\n3. Could the authors provide more comparisons with other grey-box methods?\\n4. 
Lastly, could the authors clarify the setup used for PAIR and GCG in Table 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a setting for jailbreaking attacks, which targets private LLMs with inaccessible parameters and unknown fine-tuning data derived from public, open-weight LLMs. Additionally, the authors propose a method for obtaining suffixes for jailbreaking the private LLM, by optimizing suffixes on the public LLM. The authors claim that performance is comparable to attacks that incorporate information about the target LLM's parameters.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The results are surprisingly strong for the target LLM in Table 1\", \"Table 2 suggests that transferability of the authors' method is significantly stronger than GCG\", \"The preliminaries are clear and the notation is rigorous\"], \"weaknesses\": [\"The justification for the setting the authors propose is unclear. If one has access to the public base LLM weights, an attacker can simply fine-tune the model on harmful data to achieve the desired outputs.\", \"Prior work [1] has pointed out that benign fine-tuning already causes harmful ASR to increase by default, which seems likely to account for some of the ASR increase in this threat model\", \"There is a significant ASR increase in the black-box models in Table 2, though it doesn't seem obvious that the method should be significantly more successful than GCG in this regime, and the authors provide very limited commentary on this.\", \"The authors don't include evaluations against highly relevant defenses, such as RPO [2]\", \"The presentation needs work overall, particularly Figures 2 and 3\", \"[1] Qi, X., Zeng, Y., Xie, T., Chen, P. Y., Jia, R., Mittal, P., & Henderson, P. (2023). 
Fine-tuning aligned language models compromises safety, even when users do not intend to!\", \"[2] Zhou, A., Li, B., & Wang, H. (2024). Robust prompt optimization for defending language models against jailbreaking attacks.\"], \"questions\": \"1) Can the authors justify why the ASR compared to GCG is much higher in Table 2?\\n2) Can the authors clarify the technical novelty of their method compared to GCG?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work develops an approach that generates adversarial examples using open-source models and fine-tunes them to target private, black-box models. The approach first searches for synthetic prompt suffices that align the open-source model generation with the private model generation. Then, standard attack methods are performed using the open-source model, and the synthetic prompt suffices. The open-source model combined with the suffices is considered as a proxy of the private fine-tuned model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Using prompt suffixes to convert open-source models to a proxy of a private model is novel.\", \"weaknesses\": \"1. The motivation for optimizing the prompt is unclear. There are several ways to \\\"steal\\\" a private model or directly launch an attack without acquiring a proxy model of the private model. What are the limitations of existing approaches, and what is the key advantage of the prompt-based approach? Is it more affordable than LoRA?\\n\\n2. The proposed approach underperforms against competitors such as PAIR. Although PAIR is starred in Table 2, I do not see a corresponding footnote or explanation.\\n\\n3. The authors claim a novel setting that does not expect the generated attacks to succeed in targeting the public base LLM. However, the motivating scenario is not grounded in the paper.\\n\\n4. 
The proposed attack has a 90% attack success rate on the public base LLM, according to Table 2, which contradicts the claim of a reduced success rate on the public base LLM.\", \"questions\": \"1. Could the authors explain the difference between the proposed approach and PAIR? Section 4.2 mentioned that \\\"PAIR achieves higher ASR than our method as it can query the downstream models to generate attacks. However, considering our focus on transferability evaluation, our performance approaches are achievable by querying black-box models.\\\". How does the \\\"query\\\" in PAIR differ from yours?\\n\\n2. Could the authors provide a compelling use case where transferring an attack back to the open-source model is prohibitive? Who will be the primary user of this method?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new technique for generating suffix-based jailbreak attacks against privately fine-tuned versions of open-source LLMs. The threat model assumes that the adversary has white-box access to the open-source LLM and soft-label (i.e. log probs) black-box access to the fine-tuned version. The authors propose an iterative procedure where each step consists of two stages - (1) the adversarial suffix is first optimized to align the output distributions of the open and the fine-tuned LLM and then, (2) the resultant suffix is then optimized to jailbreak the open LLM. The authors evaluate their attack on Llama2-7b as the open model and Vicuna-7b as the fine-tuned version. They also compare against baselines where white-box attacks generated against the open LLM are transferred to the fine-tuned version. The proposed attack achieves a higher attack success rate compared to the considered baseline. 
Overall, the paper considers a commonly used setting and provides a way to leverage the open-source LLM while attacking its fine-tuned version.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Overall, this paper considers a realistic setting where open-source LLMs are privately fine-tuned and deployed in a black-box setting. Existing attacks only attack models individually, whereas the proposed attack leverages the open LLM to incorporate additional information about the fine-tuned version. The 2-stage optimization procedure iteratively performs local (around the harmful input query) alignment of the open LLM with the fine-tuned version, which increases the transferability of the generated jailbreak. Performing this alignment in each step of the attack iteration should improve transferability as compared to performing the alignment only once at the beginning of the attack.\", \"weaknesses\": \"The evaluation section of this paper leaves much to be desired. It is unclear whether using two-stage optimization provides any empirical benefit over directly attacking the black-box fine-tuned LLM. The major weaknesses are as below:\\n\\n- The paper considers a setting where the adversary has black-box access to the fine-tuned model. It discusses one way to attack in the black-box setting, i.e. transferability, but it entirely misses mentioning anything about the second way, i.e. query-based attacks, which constitute a rich literature of black-box jailbreak attacks against LLMs [1,2,3,4]. To demonstrate that the proposed attack is actually useful, the authors need to evaluate against query-based black-box attacks.\\n- The authors only consider the setting of Llama2 \\u2192 Vicuna. Evaluating only this setting is not enough to demonstrate the generalizability of the attack. 
Further, it fails to provide any insight into how the attack success rate depends on the type of fine-tuning (standard or parameter-efficient), which is an important part of the considered threat model. For example, Guanaco is an openly available model which is a QLoRA fine-tuned version of the Llama models.\\n- The evaluation uses a string-matching-based judge which can lead to false positives (since even without a negative prefix, the model response can still fail to successfully respond to the harmful query). Similar to recent work, the authors should instead use LLM-as-a-Judge to evaluate the success of the attack (there are open-source judges as well) [5,6].\\n\\n[1] Andriushchenko, Maksym, Francesco Croce, and Nicolas Flammarion. \\\"Jailbreaking leading safety-aligned llms with simple adaptive attacks.\\\" arXiv preprint arXiv:2404.02151 (2024).\\n\\n[2] Sitawarin, Chawin, et al. \\\"Pal: Proxy-guided black-box attack on large language models.\\\" arXiv preprint arXiv:2402.09674 (2024).\\n\\n[3] Hayase, Jonathan, et al. \\\"Query-based adversarial prompt generation.\\\" arXiv preprint arXiv:2402.12329 (2024).\\n\\n[4] Mehrotra, Anay, et al. \\\"Tree of attacks: Jailbreaking black-box llms automatically.\\\" arXiv preprint arXiv:2312.02119 (2023).\\n\\n[5] Mazeika, Mantas, et al. \\\"Harmbench: A standardized evaluation framework for automated red teaming and robust refusal.\\\" arXiv preprint arXiv:2402.04249 (2024).\\n\\n[6] Chao, Patrick, et al. \\\"Jailbreakbench: An open robustness benchmark for jailbreaking large language models.\\\" arXiv preprint arXiv:2404.01318 (2024).\", \"questions\": \"1. In Section 5, the authors state that - \\u201c*Our framework assumes that the target model is only slightly fine-tuned from the original model. However, there may be a drop in ASR when the fine-tuned model significantly differs from the original. 
In such cases, as discussed in the experiments, our framework can still generate suffixes with high transferability.*\\u201d It is unclear what part of the evaluation supports this statement, since GPT-3.5 etc. are not fine-tuned from either Llama or Vicuna.\\n2. The plot in Figure 2 shows the loss values for the baseline and the proposed attack. Why does the proposed attack have a lower loss value at the beginning of the attack?\\n3. According to Table 1, the ASR for the proposed attack on Llama in the harmful behaviour case is 49 when it is the original model (white-box setting) but 54 when it is the target model (black-box setting). How is this possible?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
9kJperA2a4
AttriBoT: A Bag of Tricks for Efficiently Approximating Leave-One-Out Context Attribution
[ "Fengyuan Liu", "Nikhil Kandpal", "Colin Raffel" ]
The influence of contextual input on the behavior of large language models (LLMs) has prompted the development of context attribution methods that aim to quantify each context span's effect on an LLM's generations. The leave-one-out (LOO) error, which measures the change in the likelihood of the LLM's response when a given span of the context is removed, provides a principled way to perform context attribution, but can be prohibitively expensive to compute for large models. In this work, we introduce AttriBoT, a series of novel techniques for efficiently computing an approximation of the LOO error for context attribution. Specifically, AttriBoT uses cached activations to avoid redundant operations, performs hierarchical attribution to reduce computation, and emulates the behavior of large target models with smaller proxy models. Taken together, AttriBoT can provide a 300x speedup while remaining more faithful to a target model's LOO error than prior context attribution methods. This stark increase in performance makes computing context attributions for a given response $30\times$ faster than generating the response itself, empowering real-world applications that require computing attributions at scale. We release a user-friendly and efficient implementation of AttriBoT to enable efficient LLM interpretability as well as encourage future development of efficient context attribution methods.
[ "Large Language Model", "Context Attribution", "Interpretability" ]
Accept (Poster)
https://openreview.net/pdf?id=9kJperA2a4
https://openreview.net/forum?id=9kJperA2a4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xsqVVHyDMe", "xDkC41zCI0", "vvUcHR47yI", "srYE8YLhEB", "rklD0ndRmc", "jexdwIzuLH", "jYZ3GcFulI", "ggKp2BcQZp", "aU9tisOACY", "QUsoOAf79v", "OaBt3Kszcr", "OJxeEevg4g", "NDE7SEPXS5", "Lb291s9BUQ", "JvZ14WZY6n", "C4WmiOKBs5", "BIvQB9WgST", "4pKfrVLUMB", "2N6k7sbhzO", "1y5gmh8e2H", "0FcNxkI69f" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1730557141628, 1730704188327, 1732293975961, 1732293824687, 1732294162539, 1732631001601, 1730677941954, 1730674139342, 1732294050301, 1732647371163, 1730707305798, 1732633768421, 1732500212488, 1732294109472, 1730720965632, 1737524106732, 1732294433129, 1734406640555, 1732294339596, 1732294463329, 1731122296379 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_Y8fS" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_f2FJ" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_Jn65" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_uZjG" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_5ZgU" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_f2FJ" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_zBXc" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_abNN" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_Y8fS" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_Jn65" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Area_Chair_vwTS" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Authors" ], [ "ICLR.cc/2025/Conference/Submission11149/Reviewer_abNN" ] ], "structured_content_str": [ "{\"summary\": \"This paper focuses on efficiently approximating the LOO error for context attribution. Based on key observations, the author develops AttriBoT, which utilizes cached activations, hierarchical attribution, and smaller proxy models to facilitate large-scale attribution computation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The issue addressed in this paper is significant and holds substantial research value. Furthermore, the writing is clear, and the arguments are easily understandable, with an intuitive methodology.\", \"weaknesses\": \"See questions part for more details.\", \"questions\": \"1. My understanding is that the Pareto-optimal conclusion regarding efficiency and performance is derived from Figure 2. While consistent results are observed across the three datasets, does the Pareto-optimal outcome have corresponding theoretical support?\\n\\n2. In Section 4.1.1, the authors introduce two target models, the Llama3.1 series and the Qwen 2.5 series, for evaluating the AttriBoT method. Can this method be applied to closed-source large models? Additionally, is it applicable to LLMs with architectures other than Transformers?\\n\\n3. As a general approach, the authors mention the broad applications of context attribution, but only use OBQA for data. 
Can the effectiveness of the proposed method be further validated in other contexts or task types?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"This paper studies how to perform leave-one-out context attribution with resource limitations. For large language models, performing leave-one-out context attribution is very expensive. Therefore, this paper proposes AttriBoT, including the following key points:\", \"Key-value caching\", \"Hierarchical attribution\", \"Proxy modeling\", \"This paper shows that their method achieves a large speedup compared to scenarios where their method is not used, while maintaining performance.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed methods are simple but effective.\", \"This research direction is useful.\", \"All findings are supported by experiments (e.g., smaller models in the same model family approximate the target model\\u2019s LOO attribution scores well)\"], \"weaknesses\": [\"[Major] The proposed methods are too simple and not novel, and the messages are not surprising. In fact, I am not sure whether key-value caching can be considered a contribution here or not. This is just one general trick for speeding up LLM inference. The other two are slightly more novel, but still pretty straightforward. Therefore, to mitigate this weakness, I think this paper requires more extensive experiments to have sufficient technical contributions.\", \"[Major] More experiments are needed. For instance, this paper only considers the Llama and Qwen model families, and the experiments for many important findings seem to have been conducted only on LLaMA (Figure 1, please correct me if I am wrong; I also checked the appendix).\", \"[Medium] More findings are needed. Just one random example \\u2014 for other models, do we observe a similar phenomenon to that shown on the LLaMA model family? 
If not, what could be the reason? The training dataset? Or something else? At least, there are still many interesting questions to answer. I think if the authors can dive deeper in this direction, this paper can be a good paper. But for now, the technical contributions are not sufficient yet.\"], \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to review our work. We agree that context attribution is a topic of crucial importance for understanding how LLMs \\u201cthink\\u201d as well as allowing us to easily verify and validate their context-dependent claims. Below, we would like to discuss the three weaknesses you identified in your review:\\n\\n\\n# Analysis of Proxy Model Fidelity\\n\\nWe agree that the AttriBoT methods that rely on proxy modeling degrade as the similarity between the proxy model and the target model decreases. This is seen in our results, where larger proxy models achieve higher context attribution fidelity than smaller proxy models (Figure 3). Spurred by your suggestions, we have expanded on these results by investigating how differences in training distribution between the target and proxy model affect context attribution. In particular, we evaluate the use of Llama 8B, Qwen 7B, and Mistral 7B as a proxy for Llama 70B and find the LOO attribution approximation error is higher with proxy models from a different model family than the target model (mAP of 0.81 for Llama 8B vs. 0.75 for Mistral 7B and 0.71 for Qwen 7B). This gives evidence that matching the training distribution between the target and proxy model is important, and we have added these results to our paper (Figures 7-10, appendix). However, given that most LLM model families include models of various sizes (e.g. 
the Llama 3 collection now contains 5 models from 1B to 405B parameters), we think practitioners will likely be able to choose a proxy model from the same family.\\n\\n# Generalization to other model classes\\n\\nWe focus on decoder-only transformers in this work for two reasons. First, nearly all LLMs are decoder-only transformers, and second, many decoder-only transformers have been trained at a variety of model sizes allowing for methods that leverage multiple models from the same family. To maximize the impact of our work, we chose to focus on this model class. However, if the reviewer has a specific suggestion of other language model architectures that are state-of-the-art and come in multiple sizes, we would certainly be interested in evaluating them too.\\n\\n# Incorporating Context Attribution into the Human-Machine Interaction Process\\n\\nThis would be a useful and interesting application of our research on efficient LOO context attribution. Our specific goal in this paper was to lay the algorithmic foundation for an application like the one you've proposed. However, we will be open sourcing a library (see supplementary files for a recent version) with an easy-to-use API for performing LOO attribution with the AttriBoT set of methods. We hope that this will empower practitioners to enhance the usability of LLMs deployed in their products.\\n\\n# Baselines\\nWe believe we have covered all relevant baselines (5 including standard LOO) for context attribution. If the reviewer has any specific suggestions on additional baselines for context attribution, we would be happy to test them against our existing baselines and AttriBoT.\"}", "{\"title\": \"Updates to our Submission\", \"comment\": [\"Thank you to all seven reviewers for engaging with our work and providing insightful feedback. 
Here, we would like to update you all on the changes we\\u2019ve made to our submission in response to your comments:\", \"We updated Figure 2 to indicate which combination of methods achieved different points on the AttriBoT Pareto front\", \"We\\u2019ve added a \\u201cPractitioner\\u2019s Guide to Using AttriBoT\\u201d in the Appendix to advise users on how best to apply AttriBoT to their problem and select appropriate algorithms and hyperparameters\", \"We expanded our set of results by evaluating our approximate LOO attribution methods using the Qwen 2.5 model family on all three datasets we consider in the paper (see Figure 6)\", \"We added experiments exploring the effectiveness of proxy modeling when the target model and proxy model come from different model families \\u2013 e.g., Llama 70B approximated by Mistral 7B (see Figures 7-10)\", \"We have made some changes to our organization and notation to improve the paper\\u2019s clarity where indicated by reviewers\"]}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to thoroughly read our work. Below, we discuss some of the concerns brought up in your review.\\n\\n# Simple Methods\\n\\nRegarding KV caching, while KV caching is a general trick for speeding up inference, we note that standard KV caching (where the entire context is cached and re-used as-is) is not applicable to LOO attribution since the context changes for each pass of leave-one-out. While we agree that adapting KV caching to LOO attribution is not a fundamentally new method, we argue that it does involve some novelty. Furthermore, we would like to note that KV caching actually cannot be applied out of the box to any attribution method that performs repeated inference. 
For instance, ContextCite gains almost no improvement in performance from KV caching, since for every forward pass a randomly sampled set of sources are removed from the context resulting in a low probability of prefix-sharing across forward passes.\\n\\nMore broadly, we agree that the methods proposed in this work are quite straightforward. However, seeing as how these methods actually work quite well for efficiently approximating LOO attributions, we argue that this should not be considered a weakness. Especially in newer research areas, like context attribution, it is important to start with simple methods to understand areas of improvement that actually require complex solutions.\\n\\nFurthermore, we make explicit the assumptions of each method (e.g., sum of k leave-one-out attributions is approximately equal to leave-k-out attribution, proxy model attributions approximate target model attributions when both models are attributing the same response, etc.) and provide evidence that these assumptions hold in realistic settings. \\n\\n# Experiments on Multiple Models and additional findings\\nWe agree with this point. Our main experiments used the Llama model family and we also reported results on the HotPotQA dataset using the Qwen model family to show that these results generalize across models. Since receiving your feedback we have replicated all of the experiments (QASPER, SQuAD, and HotPotQA) using the Qwen model family (Figure 6) and see nearly identical results, demonstrating the generalizability of our method. We also included new results considering mismatched proxy models including Mistral 7B (Figures 7-10) and find that, while still effective, using a mismatched proxy model can degrade performance somewhat.\"}", "{\"comment\": \"Thanks for the authors' detailed responses. My main concerns have been addressed. 
Therefore, I am happy to maintain my positive rating.\"}", "{\"summary\": \"This paper introduces AttriBoT, a novel collection of techniques for efficiently approximating leave-one-out (LOO) context attribution in large language models. The authors present three key insights: 1) caching attention key-value pairs can avoid redundant computations, 2) hierarchical attribution can reduce necessary computations through pruning, and 3) smaller proxy models can effectively approximate larger target models' attributions. The combined approach achieves remarkable efficiency gains, providing up to 300x speedup while maintaining high fidelity to the target model's attributions compared to baselines. The method is extensively evaluated across different model scales and datasets, demonstrating consistent performance improvements in open-book question answering tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"1. Technical Innovation & Practicality\", \"Introduces multiple complementary techniques that can be used independently or combined\", \"Provides both theoretical and empirical justification for each component\", \"Achieves practical speedups that make attribution feasible in real-world applications\", \"Releases efficient implementation to benefit the research community\", \"2. Comprehensive Evaluation\", \"Tests across multiple datasets (SQuAD, HotpotQA, QASPER)\", \"Evaluates with different model families (Llama, Qwen)\", \"Includes thorough ablation studies of different components\", \"Provides detailed theoretical efficiency analysis with derivations\", \"3. Strong Empirical Results\", \"Demonstrates clear Pareto-optimal trade-offs between speed and accuracy\", \"Shows consistent performance across different context lengths and model sizes\", \"Achieves significant speedups (300x) while maintaining high attribution fidelity\", \"Outperforms existing attribution methods like ContextCite\", \"4. 
Clear and Rigorous Methodology\", \"Well-defined metrics for measuring attribution quality\", \"Thorough baselines for comparison\", \"Careful experimental design with appropriate controls\", \"Detailed implementation specifications\"], \"weaknesses\": [\"1. Limited Scope of Applications\", \"Primary evaluation focuses on open-book QA tasks\", \"Could explore effectiveness in other applications like detecting malicious prompts or hallucinations\", \"Could demonstrate utility for real-time attribution scenarios\", \"2. Parameter Sensitivity Analysis\", \"Could provide more guidance on selecting optimal parameters (e.g., pruning thresholds)\", \"Limited discussion of how parameter choices might vary across different use cases\", \"Could explore automatic parameter tuning approaches\"], \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a leave-one-out method for contextual attribution, i.e. how a certain part of the model prompt influences the output of the model. Leave-one-out has been mainly impractical due to computational cost, and the authors address efficiency via\\n1. KV caching\\n2. hierarchical evaluation\\n3. approximation via smaller (proxy) models\\n\\nThe authors demonstrate that this combination can produce satisfying results with up to 300x speedup, or 30x faster than generating the response (with the bigger model alone).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Context attribution is an important problem, including but not limited to the areas the authors called out in the paper. 
For example, interpretability, reliability, safety, privacy/confidentiality, etc.\", \"Solid discussion of computational cost, and gains by introducing the individual optimizations\", \"I like that the authors combine and discuss optimizations across the stack, ranging from hybrid modelling by combining models of different complexity, to leveraging insights of the problem (hierarchical approach), down to lower-level optimization such as KV caching\"], \"weaknesses\": \"1. I do not consider the way KV caching is leveraged in this paper as a novelty. KV caching over multiple requests comes for free with well-established inference frameworks. However, kudos to the authors for discussing the importance of KV caching in detail, including its cost savings.\\n2. It's an important, but fairly narrow area\", \"some_minor_concerns\": \"1. How well would this generalize to more complex reasoning tasks? Recent literature suggests that reasoning capabilities in models vary quite considerably, and only the largest models can properly come up with reasoning solutions, or even judge the output beyond style or simpler tasks (if properly assessed). How well does the proxy model approach generalize if there is a larger gap in model capabilities, and for more complex tasks?\\n2. How well does the hierarchical approach work in real-world scenarios, where information can appear in multiple places (also see the questions section)?\", \"questions\": \"I can see how hierarchical attribution works if context only appears in a single, localized place in the prompt. However, with most real-world RAG approaches, we would expect information to be present and spread across multiple places in the document. 
What would happen in such a case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to read our paper and for providing this thoughtful response. Below, we provide some further context and discuss the weaknesses identified in your review.\\n\\n# Importance of the LOO Error and its approximation\\nThe LOO error measures the change in a model\\u2019s prediction when a single piece of data (e.g., one training example, one context source, or other notions of a single datum) is removed from the model. This provides a simple and interpretable score measuring how important that piece of data is to a model\\u2019s prediction or behavior. In fact, approximating the LOO error is the objective of many methods that aim to attribute back to training data (e.g., including LOGRA, but also datapoint attribution methods like influence functions and TRAK) or to context sources (e.g., ContextCite).\\n\\nIn training data attribution works, such as the LOGRA paper highlighted in the review, computing the exact LOO error for a sample requires re-training the model with that sample excluded from the training set, which makes computing the exact LOO error for every training example infeasible. In fact, the basis of many training data attribution methods, like LOGRA, is to approximate the LOO error since it is not possible to compute directly. This is discussed in [Section 2: Scalability Bottlenecks in Influence Functions](https://arxiv.org/pdf/2405.13954) from the LOGRA paper.\\n\\nSince the exact LOO error is infeasible to compute in training data attribution, this also makes it prohibitively expensive to evaluate training data attribution methods in terms of how well they approximate the LOO error. 
As a result, most training data attribution methods resort to evaluating with proxy metrics like task accuracy as a function of the number of high-attribution training examples removed.\\n\\nFor context attribution, the picture is a bit different. In this setting, it is expensive, *but feasible*, to compute exact LOO attributions since each attribution score requires a single forward pass rather than a full training run. Thus, in our work, we are actually able to measure how well our approximate LOO attribution methods align with the ground truth LOO attributions. To improve the clarity of the paper, we will be adding parts of this explanation to our related work.\\n\\n# Efficient LOO Attribution and Large Model Compression\\nThe goals of efficient LOO attribution and model compression do seem somewhat similar \\u2013 in both settings we aim to emulate some behavior of large models efficiently. However, model compression focuses on retaining performance on downstream tasks while we primarily care about retaining the LOO attribution behavior of models. As such, there may be some similarity between the hierarchical, pruning, and proxy modeling approaches we propose and methods used for model compression, but we develop novel variants of these ideas and specifically test that the underlying assumptions of each of these methods hold true in the LOO attribution setting. Based on your feedback, we would be happy to add some references to the relevant literature on model compression, and if you have any suggestions for relevant papers, we would be happy to add them.\\n\\n# Recovering Attributions of More Than the Top Sources\\n\\nMost applications of context attribution (via LOO error or otherwise) primarily aim to identify highly contributive sources because the insight that many spans contributed a small amount is not particularly actionable. 
We note that AttriBoT will absolutely provide attribution scores for all sources in the context, but we focus on evaluating the highly contributive sources to ensure a reliable and realistic evaluation. If there is a different metric you would be interested in seeing on HotpotQA or some other dataset, we would be happy to compute and report it.\\n \\n# Organization of Experimental Results Section\\n\\nDue to space constraints, we had to defer some figures to the appendix while referring to them in the main text\\u2019s experimental results. We will try to make this section more clear, and if our submission is accepted, we plan to use the extra page to move these Appendix figures back into the main text as you've suggested.\"}", "{\"comment\": \"Thanks so much for your thoughtful response. I will increase my score to 6.\"}", "{\"summary\": \"The paper introduces AttriBoT, a set of methods for efficiently approximating leave-one-out context attribution at LLM scale. In particular, the authors show that (1) using context caching reduces the number of FLOPs approximately twice compared to naive computation of LOO; (2) Hierarchical Leave-k-out attribution efficiently approximates leave-one-out attribution; and (3) LOO attribution scores for larger models can be approximated by LOO attribution scores for smaller models from the same family. The authors demonstrate that combining these techniques can achieve a 300x speedup in the computation of approximate LOO attribution scores, with only a 10% drop in mean average precision.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses an important problem of context attribution. The leave-one-out (LOO) error method for context attribution is computationally expensive due to the need to re-generate the text after removing each piece from the context. 
The paper proposes a set of techniques for approximating the LOO scores while significantly reducing computational cost. The proposed methods are very intuitive and can be easily applied in practice.\\n\\n2. The paper evaluates the proposed combination of approximation and computational techniques across a number of datasets and language models. The authors also compare AttriBoT against several baselines and demonstrate the optimality of AttriBoT's precision-speed Pareto front compared to other context attribution methods.\", \"weaknesses\": \"While most of the proposed techniques are not entirely novel in the field\\u2014for example, KV caching has been applied to other problems, as has transferability between language models from the same family\\u2014to my knowledge, this is the first paper where these approaches have been applied to the context attribution problem.\", \"questions\": \"1. Could the authors please label the AttriBoT combinations in Figure 2 for better clarity?\\n\\n2. It might be beneficial to include more model families in the empirical experiments, such as Gemma.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for responding to my previous concerns and pointing out my misunderstanding in my original comments. The main issue of the proxy model has been well explained. I will keep my original rating.\"}", "{\"title\": \"Reviewer Feedback on Author's Revisions\", \"comment\": \"Thank you for your response to the issues raised.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to engage with our paper. Regarding your questions:\\n\\n1. We have amended Figure 2 in our submission to identify which points on the Pareto front correspond to different methods. 
Across most experiments, the rough ordering of methods on the Pareto front from most efficient to least efficient is as follows: Hierarchical + Proxy + KV Caching, Proxy + KV Caching, Hierarchical + KV Caching, Pruning + KV Caching, and KV Caching.\\n\\n2. Our original experiments all used the Llama model family except for one experiment on HotPotQA that we replicated using the Qwen 2.5 model family. The intention of this Qwen experiment was to show that these methods generalize across model families. Since receiving this feedback, we have gone one step further and replicated experiments on each of the datasets with Qwen 2.5 (Figure 6), finding nearly identical results. Additionally, we have also included an experiment where we use Llama 70B as the target model and compare Llama 8B, Qwen 7B, and Mistral 7B as proxy models (Figures 7-10). We find that using a proxy model from the same model family as the target model performs best, providing evidence that a similar training distribution between the target and proxy model is important for accurate proxy modeling.\"}", "{\"summary\": \"The paper proposes a novel techniques called AttriBoT, which is an efficient approach for computing context attribution in LLMs by approximating the computationally expensive leave-one-out (LOO) error. It utilizes KV cache, hierarchical attribution, and proxy models to reduce the cost of calculating LOO attributions by over 300\\u00d7, achieving real-time interpretability in context-augmented LLMs. The method provides a practical, feasible solutions for efficient real-world context attribution methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Efficient Context Attribution: The paper introduces an interesting and efficient approach to approximate the LOO error for context attribution from the perspective of cache reusage, hierarchical attribution, and smaller proxy model.\\n\\n2. 
The different proposed approaches can be composed to achieve better efficiency and performance. \\n\\n3. The framework shows strong performance across large language models by achieving over a 300\\u00d7 speedup in context attribution computations.\\n\\n4. The paper also shows the implementation of efficient and easy-to-use AttriBoT.\", \"weaknesses\": \"1. The paper lacks an in-depth comparison with the latest methods, e.g., for SIG[1], a sequential-gradients-based approach to compute word importance, it used Log-Odds, Comprehensiveness, and Sufficiency evaluation metrics; for LOGRA[2], it evaluated the effectiveness in terms of accuracy and efficiency. However, these did not mention the LOO error. Could you please provide a more in-depth explanation of the necessity and importance of using the LOO error?\\n\\n[1]. Enguehard, Joseph. \\\"Sequential Integrated Gradients: a simple but effective method for explaining language models.\\\" arXiv preprint arXiv:2305.15853 (2023).\\n\\n[2]. Choe, Sang Keun, et al. \\\"What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions.\\\" arXiv preprint arXiv:2405.13954 (2024).\\n\\n2. The innovation is incremental; KV reuse, Mixture-of-Experts approaches, and pruning strategies are already well established in large model compression and optimization methods. Using cached activations to avoid redundant operations, hierarchical attribution to reduce computation, and a smaller proxy model to emulate the large model when approximating LOO seems to be a combination of these previous compression methods.\\n\\n3. In the Approximation Error part, the paper focuses on only a few highly contributive sources, which may overlook the cumulative effect of less influential spans. This approach might limit the method's generalizability, especially in cases where there isn't a clear distinction between highly and moderately contributive sources. 
It would be better to evaluate your method recovering a few highly contributive sources and full sources on datasets like HotpotQA to verify the rationality.\\n\\n4. The experimental section needs better reorganization. Key experimental results like the efficiency vs. accuracy trade-off for each AttriBoT acceleration method should be presented clearly within the main text.\", \"questions\": \"Please refer to Weakness 3 in the section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for taking the time to thoughtfully review our work. Below, we provide some further context and discuss the questions and weaknesses identified in your review.\\n\\n# KV Caching\\nWe agree with the reviewer that KV caching comes at little implementation cost. However, we do note that our use of KV caching is notable and novel in this context for the following reasons:\\n1. Standard KV caching (where the entire context is cached and re-used as-is) is not applicable to LOO attribution since the context changes for each pass of leave-one-out. AttriBoT's novel application of KV caching involves identifying the longest possible pre-computed prefix that can be used based on the current context and past contexts, with additional complications when processing a context hierarchically.\\n2. KV caching does not provide a speedup by default for all context attribution methods that involve multiple forward passes. An example of this is ContextCite, which performs each forward pass on a context with a randomly selected set of sources removed. 
Since the source removal is probabilistic, contexts on average have very short prefixes in common, resulting in virtually zero performance gain from KV caching.\\n\\n# Proxy Modeling for Complex Tasks\\nThis is an excellent point that certain complex tasks are only possible for large models, thus potentially breaking the proxy modeling assumption. This is seen in our proxy modeling results (Figure 3) where larger proxy models yield better approximate LOO attributions than smaller proxy models. One important caveat to note is that for proxy modeling to work, it is *not* necessary that the proxy model be able to perform the task on its own. This is because when we approximate the LOO attributions for a large target model using a small proxy model, we use the response generated by the target model and evaluate this response\\u2019s likelihood (given different contexts) using the proxy model. Thus, the proxy does not need to produce the same response as the target model, but rather must emulate the conditional likelihood of a response given a context. We do note that we include results on HotpotQA, which involves reasoning over multiple \\\"hops\\\" to answer questions. We have added corresponding clarifications to the paper. \\n\\n# Duplicate Context Information\\nIt is a valid point that duplicate contextual information can cause problems for the methods we propose. If the same piece of information appears multiple times in the context, then simply removing one instance of that information would likely not change the model\\u2019s response likelihood. However, this is not unique to our methods, but rather applies to any attribution method that aims to approximate the LOO error. For instance, many popular training data attribution methods, such as influence functions, also suffer from this issue since they approximate LOO error for training data. 
In our newly added \\u2018Practitioner\\u2019s Guide to AttriBoT\\u2019 in Appendix D, we suggest that users deduplicate input before performing LOO attribution.\\n\\nAn alternative approach for situations with duplicate context information would be to compute attributions based on notions of value other than the LOO error \\u2013 such as the Shapley value. In general, however, Shapley values are much more expensive to compute than the LOO error, as they require performing an exponential (in the number of context sources) number of forward passes compared to a linear number of forward passes needed for the LOO error.\"}
any query that involves conditioning on specific spans within a larger context) while being straightforward to evaluate. In other LLM use-cases, such as abstractive summarization, most or many context spans might be informative, so measuring the LOO error might not provide useful or actionable insights. If you have specific benchmarks/settings that you would like us to include, please let us know and we will do our best to add them.\\n\\n# Hyperparameter Selection\\nWe agree that with a set of methods like AttriBoT, which can be mixed and matched to achieve different efficiency vs. accuracy tradeoffs, selecting an algorithm combination and associated hyperparameters can be difficult. To mitigate this, we do two things:\\n\\n1. We amend Figure 2 to show which algorithm combinations are represented by different points on the AttriBoT Pareto front. This gives readers better insight into which combinations of methods tend to provide higher accuracy and lower efficiency vs. lower accuracy and higher efficiency. \\n2. We have added a \\u201cPractitioner\\u2019s Guide to AttriBoT\\u201d as Appendix D, in which we summarize the relative efficiency of different algorithm combinations, provide insight into hyperparameter (e.g., hierarchical thresholds) selection based on our experiments, and advise users on best practices for data preprocessing before performing context attribution.\"}
attention-based attribution).\\n\\n# Closed Source Language Models\\nThe only assumption made by our methods (and LOO attribution in general) is that we can compute the conditional likelihood of a response given a context. Thus, our method can be applied to any model that provides likelihoods as part of its API. Whether AttriBoT could be applied to a closed-source LLM therefore depends on whether the LLM's API provides likelihood values.\\n\\n# Non-Transformer Architectures\\nIn principle, the hierarchical, proxy modeling, and pruning methods within AttriBoT could be applied to non-Transformer language models. KV caching would only extend to other architectures if, as in causal self-attention, it is valid to reuse past KV states.\\n\\n# Evaluation Outside of Open-Book QA\\nWe focus on open-book QA in our work because it is realistic and representative of many typical LLM use-cases (i.e. any query that involves conditioning on specific spans within a larger context) while being straightforward to evaluate. In other LLM use-cases, such as abstractive summarization, most or many context spans might be informative, so measuring the LOO error might not provide useful or actionable insights. If you have specific benchmarks/settings that you would like us to include, please let us know and we will do our best to add them.
The idea of using cached activations to avoid redundant operations is highly original. By caching the attention key and value tensors at each layer, the method significantly reduces the computational cost, which is a new way of addressing the inefficiency issue in computing LOO attributions. This has not been explored in such a comprehensive manner in previous works.\", \"In the context of the increasing use of LLMs, understanding how the model generates its output based on the input context is crucial. The work on context attribution, especially the proposed AttriBoT method, is highly significant as it enables more efficient analysis of the influence of each context span on the LLM's generations. This has practical implications in various applications such as improving the reliability and safety of LLMs by detecting malicious prompts and model hallucinations.\", \"The hierarchical attribution technique is a novel addition. The assumption that the sum of the LOO attributions for $k$ contiguous text spans can be approximated by a single Leave-$k$-Out attribution score is an innovative concept. This allows for a reduction in the number of forward passes required for attribution computation, especially in hierarchical contexts like paragraphs and sentences.\"], \"weaknesses\": [\"__theoretical analysis__\", \"For the similarity assumption between the proxy model and the target model, the paper mainly proves it by experimentally measuring the correlation, but does not deeply explore the stability and limitations of this similarity assumption under different model architectures and training data distributions. In some cases, the small proxy model may not accurately capture the complex behavior of the large target model, leading to deviations in the attribution results.\", \"__generalization__\", \"Investigate the adaptability of the method to different model architectures. 
Consider testing on models with different architectural designs (such as models with different attention mechanisms, non-Transformer-based models) to see if the proposed techniques can be generalized or need to be adjusted. This will provide more comprehensive guidance for the application of the method in the broader field of LLMs.\", \"__evaluation and baselines__\", \"Explore ways to integrate context attribution results into the human-machine interaction process. For example, design a feedback mechanism that provides users with context attribution information when they receive a response from the LLM, helping them to better understand the basis of the model's answer and guiding them to ask more effective questions. This can improve the overall user experience and the effectiveness of using LLMs.\", \"The choice of baselines is relatively limited. Although several common methods are included, there may be other emerging or less well-known methods that could provide a more comprehensive comparison. Additionally, the baselines may not cover all possible alternative approaches, leaving room for a more thorough evaluation of the novelty and superiority of the AttriBoT method.\"], \"questions\": [\"When using proxy models for approximation, what is the impact of differences in the training data distribution between the proxy model and the target model on the accuracy of the context attribution? How can this potential issue be mitigated, especially when dealing with models trained on diverse or domain-specific datasets?\", \"In the hierarchical attribution method, how do you ensure the stability and accuracy of the approximation when the context structure becomes extremely complex, such as in documents with highly nested or irregular hierarchical structures? 
Are there any theoretical guarantees or additional techniques that could be employed to handle such cases?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9kFaNwX6rv
SIMPL: Scalable and hassle-free optimisation of neural representations from behaviour
[ "Tom George", "Pierre Glaser", "Kim Stachenfeld", "Caswell Barry", "Claudia Clopath" ]
Neural activity in the brain is known to encode low-dimensional, time-evolving, behaviour-related variables. A long-standing goal of neural data analysis has been to identify these variables and their mapping to neural activity. A productive and canonical approach has been to simply visualise neural "tuning curves" as a function of behaviour. However, significant discrepancies between behaviour and the true latent variables -- such as an agent thinking of position Y whilst located at position X -- distort and blur the tuning curves, decreasing their interpretability. To address this, latent variable models propose to learn the latent variable from data; these are typically expensive, hard to tune, or scale poorly, complicating their adoption. Here we propose SIMPL (Scalable Iterative Maximization of Population-coded Latents), an EM-style algorithm which iteratively optimises latent variables and tuning curves. SIMPL is fast, scalable and exploits behaviour as an initial condition to further improve convergence and identifiability. It can accurately recover latent variables in spatial and non-spatial tasks. When applied to a large hippocampal dataset SIMPL converges on smaller, more numerous, and more uniformly sized place fields than those based on behaviour, suggesting the brain may encode space with greater resolution than previously thought.
[ "neuroscience; place cells; grid cells; representations; neural data; hippocampus;" ]
Accept (Poster)
https://openreview.net/pdf?id=9kFaNwX6rv
https://openreview.net/forum?id=9kFaNwX6rv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zdMHx79Fo7", "xC9OhtoWZB", "x4KurogDtP", "vtZPK0WckA", "s3dH6pJ1ge", "nd1fk09pgC", "jffdQhwsQw", "gwP1MFRnij", "fADqOKdOk4", "cU5rdDtRSB", "WhZLEtm4ll", "RxcomRBQRn", "QwAznDeI7K", "PeSHW8lzuZ", "LpVOVHzRto", "LDPdlfNCCy", "KeGmZTYa9z", "JHpMyA4oaK", "JAEbAQmiIv", "I2jJYiVGKJ", "HT2XcyhYVs", "DhE9qXqjZj", "CIvW4VwdyP", "BrO5hHZJbA", "5FJvXsMtsr", "4gMGGHh0G1", "4bBtdsrJHC", "3QAa52fNpq", "3DkwukusRK", "33WIDZNF81", "1NCMFMivWQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732121760952, 1733078533157, 1732127773034, 1732576809538, 1732563103129, 1732127747544, 1732637646337, 1729955646119, 1732121096757, 1730685572804, 1732475618614, 1732534479184, 1731511951492, 1732131182068, 1732131300453, 1732768702462, 1732121333359, 1732121734808, 1732562274276, 1731167462679, 1732216260712, 1732121358520, 1732533296988, 1734986348652, 1732121124475, 1730713980545, 1732750421317, 1737523842243, 1732543525341, 1732562312162, 1732530759963 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_5Qja" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_6vN3" ], [ 
"ICLR.cc/2025/Conference/Submission7501/Reviewer_5Qja" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_gB66" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "~Cole_Lincoln_Hurwitz1" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_6vN3" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_5Qja" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Area_Chair_PjVJ" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_Kcef" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_Kcef" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7501/Reviewer_gB66" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ], [ "ICLR.cc/2025/Conference/Submission7501/Authors" ] ], "structured_content_str": [ "{\"title\": \"...(response continued)\", \"comment\": \"* GPLVM: We find that GPLVM performs quite well on our synthetic grid cell dataset but terminates with a higher error than both SIMPL and CEBRA, probably due to its misspecified emission model meaning it does not properly account for the Poisson nature of the spiking data. Furthermore, to give GPLVM the best chance and put it on level footing with CEBRA and SIMPL we initialised its latent with behaviour. Note that in order to make use of all datapoints we had to restrict ourselves to using 1000 inducing points. 
Total compute time was just under 12 minutes.\\n* Pi-VAE: Performed well, beating both CEBRA and GPLVM but finishing with an overall error (8.38 cm) about twice that of SIMPL and a compute time about 15.8x higher (we used a CPU for both). Like CEBRA, but less so, its latent appears noisy, which, also like CEBRA, we expect is due to the fact that there is no explicit temporal dynamics/smoothing. \\n \\nWe agree with the reviewer that these two methods are both more relevant than CEBRA, which we initially compared to because of its popularity and ease of access. We will leave CEBRA there for completeness but we are happy to move it to the Appendix if the reviewer thinks this would help the clarity of the paper. \\nAnother viable contender could be a technique such as PfLDS (Gao, 2016). This satisfies the first three of our desiderata, but, to our knowledge, is not identifiable in the sense that there is no meaningful way to give behaviour as an input. \\n\\nFinally, related to benchmarking, we would also like to draw the reviewer's attention to a new analysis and figure (Fig. 7), where we apply SIMPL to a hand-reaching macaque dataset from somatosensory cortex. This dataset, from the Neural Latents Benchmark suite, on which SIMPL performs well, is commonly used in the LVM literature, further supporting our claim that SIMPL is performant in comparison to similar LVM methods.\\n\\nWe hope that our response and additional comparisons cleared the reviewer's concerns on this topic. We are happy to further clarify the position of SIMPL in the spectrum of data analysis methods by incorporating elements of this response in the Related Work section for a camera-ready version. We will be happy to address any remaining concerns the reviewer may have. Otherwise, we would appreciate it if the reviewer could adapt their score and recommend acceptance of our submission. 
Once again, we thank the reviewer for their time and insightful comments which, in our opinion, have led to substantial improvements in the manuscript.\"}", "{\"title\": \"Follow-up\", \"comment\": \"As the deadline for responding is approaching, we would like to ask whether the reviewer has had time to consider our response and the changes we made to the manuscript. We remain available for discussion if needed.\"}", "{\"title\": \"...(response continued)\", \"comment\": \">**\\\"Lines 477\\u2013479 are ambiguous as to whether the authors discuss place cell remapping or another concept. If remapping is the intended topic, I recommend explicitly stating this and including a citation to aid readers from the machine learning community who may be unfamiliar with this concept.\\\"**\\n\\nWe apologise for the confusion. We were not referring to hippocampal remapping in lines 477-479 but were discussing how the trajectory makes a discontinuous \\\"jump\\\" to another position in latent space. We will re-word this in the camera-ready to avoid confusion. \\n\\n> **\\\"The writing can be improved a lot\\\"**\\n\\nThank you for this very thorough check! We will certainly fix all of these errors in time for the camera-ready version. \\n\\n> **\\\"What the arrow of epoch 1 -> 10 in Figure 3 is different from others? Does it have a specific meaning?\\\"**\\n\\nThis just refers to the fact that epochs 2--9 are collapsed and not shown to reduce clutter. \\n\\n> **\\\"What does \\u201cdata now shown in L466 mean? Does it mean that the authors intentionally did not include the results since it is insignificant? If so, please include it in the appendix.\\\"**\\n\\nYes, we did not show this result because it is insignificant and we wanted to keep only the most important results in the main figure to reduce clutter. We will add these statistics in for the camera-ready. \\n \\nWe thank the reviewer for their time and thorough review. 
We would be very happy to hear about, and address, any remaining major concerns they may have regarding technical contributions, soundness and impact. We remain committed to addressing any concerns that may prevent the reviewer from recommending acceptance via an updated score.\"}", "{\"comment\": \"We thank the reviewer again for their review and comments which have improved the manuscript.\"}", "{\"comment\": \"I appreciate the authors for their sincere response to the clarification. I raised my rating.\"}", "{\"title\": \"Response: New analysis on Macaque dataset and additional benchmarks\", \"comment\": \"We thank the reviewer for their comments. In response we have added new benchmarks with alternative techniques and a new analysis on their suggested macaque dataset. As such we believe we have satisfied all of their concerns. We now respond to their comments point-by-point:\\n\\n> **\\\"The author claims general but limited to specific regions of brain place cells and grid cells. I highly recommend authors compare with various datasets such as the macaque dataset and the mouse visual cortex datasets used in Schneider et al. (2023).\\\"**\\n\\nFollowing this review we have now tested SIMPL on one of the Neural Latents Benchmark datasets (specifically the [Area2_Bump data](https://neurallatents.github.io/datasets) from somatosensory cortex for a macaque doing a centre-out reaching task collected by Chowdhury and Miller), the same dataset as used in the Schneider et al. (2023) paper. We find SIMPL works well (see the revised manuscript for a more detailed discussion of the results). In the results we have now added a new section and figure detailing our findings, briefly summarised as follows: \\nWe tested 3 versions of SIMPL on the hand-reaching data. \\n- SIMPL2D(position): The latent is initialised with the monkey's x- and y-hand position. 
\n- SIMPL2D(velocity): The latent is initialised with the monkey's $v_x$- and $v_y$-hand velocity.\n- SIMPL4D(position&velocity): A 4D latent is initialised with $x$, $y$, $v_x$ and $v_y$. \nIn all three models SIMPL optimises the latent variable (the test-log-likelihood improves), uncovering a smooth latent variable correlated to (but substantially different from) the behavioural initialisations. Corresponding tuning curves revealed neurons with \"hand-position-like\" or \"hand-velocity-like\" selective receptive fields. The 4D version of SIMPL performed better than either 2D version, revealing disentangled latents with a higher overall log-likelihood than either of the 2D models. Our findings reveal:\n1. SIMPL can be applied to the types of non-hippocampal datasets commonly used in the LVM literature. \n2. Both position and velocity _separately_ do a good job of explaining the latent dynamics but position and velocity _combined_ do better. \n3. In the optimised latent space neurons have localised receptive fields reminiscent of place cells in the hippocampus.\n4. SIMPL can be extended beyond the 2-dimensional latent spaces initially tested. \n\n> **\"The authors misuse big-O notation.\"**\n\nWe'll correct this for the camera-ready version. \n\n> **\"3. Except for Section 4.2, there are no comparisons with existing methods\"** \n\nThank you for this suggestion, which was echoed by reviewer gB66 (where you will find a more detailed discussion). In summary, we have added two more comparisons to well-known techniques (pi-VAE and GPLVM) which are, arguably, more relevant. In both cases these models perform well on the synthetic grid cell dataset but still worse than SIMPL, and with over 10x the compute cost. We think these additional benchmarks substantially strengthen our claim that SIMPL outperforms the most relevant, popular alternatives. \n\n> **\"Although experiments with Tanni et al. 
(2022) represent the main result in this paper, the authors scarcely describe the task and the significance of each plot (particularly in Figure 6c), simply referring back to the original paper.\"**\n\nWe will add a section to the discussion providing more details of this task. Regarding panel 6c: this shows violin plots of summary statistics for the tuning curves before and after SIMPL is applied, showing how they have changed. We'll clarify this for the camera-ready. \n\n> **\"The authors compare SIMPL with CEBRA, a machine learning-based method running on CPUs. If they provide a runtime comparison showing that SIMPL on a CPU is faster than CEBRA on a GPU, it would further support their efficiency claims.\"**\n\nThank you for this suggestion; however, we don't think this is an appropriate or fair comparison. Because the compute-heavy step in SIMPL (calculating the likelihood maps, see Appendix for why) is parallelizable, we would also expect a huge speed-up for SIMPL on a GPU. Since it is all written in JAX this should be readily achievable. If the reviewer still thinks it worthwhile we can add a SIMPL-vs-CEBRA GPU-vs-GPU time comparison. On the other hand, we believe one of the core values of SIMPL is its speed _on a CPU_. GPU usage is a barrier for some researchers and not everybody has access to these compute resources, so we would still like to focus attention on CPU compute times. We hope this argument makes sense.\"}", "{\"comment\": \"I thank the authors for their detailed response to me and the other reviewers.\\nI believe the additional experiments and the updated writing better highlight the position of the proposed approach within the landscape of LVMs in neuroscience. While the potential impact of the method remains a concern, I think the paper is a constructive addition to the literature on neural-behavioural data analysis. 
I have raised my score.\"}", "{\"summary\": \"The paper proposes a new method, SIMPL, for estimating neural representations of behaviors. The method is based on the Expectation-Maximization algorithm: during the E-step, the model produces estimates of the latent trajectories using Kalman smoothing, and during the M-step, it uses KDE to fit the intensity functions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written. In particular, the references in the introduction are a good guide for readers who are not familiar with neuroscience, especially the hippocampus and MEC, although the authors simply list 10 papers in the actual related work section.\", \"The figures are generally helpful in understanding the method and the results.\", \"The proposed method is a very simple EM-based algorithm, yet it is more efficient and performs better than CEBRA in the grid cell simulation with the RatsInABox simulator.\"], \"weaknesses\": \"1. The authors claim generality, but the evaluation is limited to specific brain regions: place cells and grid cells. I highly recommend the authors compare with various datasets such as the macaque dataset and the mouse visual cortex datasets used in Schneider et al. (2023).\\n\\n2. The authors misuse big-O notation. For example, the authors mention O(1 hour, 200 neurons, 10^6 spikes) ~ O(1 min) in L120 and O(hours) in L219. This causes confusion, especially for computer scientists who have been trained to use big-O notation for measuring time or space complexity. If the authors want to use big-O notation, please change it to be a function of neurons/spikes and so on. However, I believe just changing to plain text might be better.\\n\\n3. Except for Section 4.2, there are no comparisons with existing methods or ablation studies. 
Although the results appear promising, it remains challenging to discern whether this is due to the simplicity of the task or the efficacy of the proposed method. Moreover, I am not sure whether CEBRA is the only method to compare with.\nThe authors should provide more details on the training and testing splits. L305\u2013306 and 416\u2013417 lack information: are the splits based on spatial segmentation (using specific parts of the box for training) or trial separation?\n\n4. Although experiments with Tanni et al. (2022) represent the main result in this paper, the authors scarcely describe the task and the significance of each plot (particularly in Figure 6c), simply referring back to the original paper.\n\n5. The authors compare SIMPL with CEBRA, a machine learning-based method running on CPUs. If they provide a runtime comparison showing that SIMPL on a CPU is faster than CEBRA on a GPU, it would further support their efficiency claims.\n\n6. Lines 477\u2013479 are ambiguous as to whether the authors discuss place cell remapping or another concept. If remapping is the intended topic, I recommend explicitly stating this and including a citation to aid readers from the machine learning community who may be unfamiliar with this concept.\n\n7. The writing can be improved a lot:\n\n7.1 It is possible that the readers do not know what the \u201ctuning curve\u201d is since it is used in neuroscience literature and ICLR is generally a machine learning community. It is good to define what it is in the introduction.\n\n7.2 It would be better to move the Related Work section to before the Method section (standard ML conference styles) or the Discussion (some ICML styles). Currently, it disconnects methods and results.\n\n7.3 Please choose between British English and American English. 
Currently, it is used together (e.g., in L477, optimization, behaviour).\n\n7.4 In line 131, specify \u201cAppendix A\u201d instead of \u201cAppendix,\u201d even if there is only one section.\n\n7.5 Ensure the font for v is consistent in lines 142 and 143.\n\n7.6 Since space permits, it may be preferable to adjust the placement of Figure 5 so that it doesn\u2019t span pages 7 and 8, avoiding the empty space near lines 399\u2013405.\n\n7.7 In L816, 1, \u2026, P => \\{ 1, \u2026, P \\}.\n\n7.8 Correct citation formatting:\n* In line 116, \u201cTanni et al. (2022)\u201d should use citep.\n* In lines 486, 493, and 790, change citep to citet.\n* In line 522, switch citet to citep.\n\n7.9 It would also be beneficial to unify the reference formatting:\n* Journals like Nature and Nature Neuroscience do not list volume numbers, while JMLR and most eLife papers do (except in lines 583\u2013585). 
Additionally, both \\u201celife\\u201d and \\u201cElife\\u201d are used inconsistently, as are \\u201cNature\\u201d and \\u201cnature.\\u201d\\n* Clarify \\u201ccite\\u201d with a red background in L608.\\n* While \\u201cICLR\\u201d is abbreviated, other conferences like NeurIPS and ICML are listed in full; consider unifying this approach.\", \"questions\": [\"What the arrow of epoch 1 -> 10 in Figure 3 is different from others? Does it have a specific meaning?\", \"What does \\u201cdata now shown in L466 mean? Does it mean that the authors intentionally did not include the results since it is insignificant? If so, please include it in the appendix.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response: New analysis on new neural dataset and parameter sweeps\", \"comment\": \"We thank the reviewer for their time taken to assess our paper. We were especially happy to read, and would like to reiterate, their point about SIMPL being a \\\"fresh departure\\\" from previous techniques. We have made a number of changes to the manuscript in response to their review, several new figures and sub-panels as well as an entirely new analysis on a non-hippocampal dataset from the Neural Latents Benchmarks suite. We hope that these changes address their concerns.\\n\\n> **\\\"Higher dimensional latent states...and datasets in the current literature on LVMs\\\"** \\n\\nFollowing this review we have now tested SIMPL on one of the Neural Latents Benchmark datasets (specifically the [Area2_Bump data](https://neurallatents.github.io/datasets) from somatosensory cortex for a macaque doing a centre-out reaching task collected by Chowdhury and Miller), the same dataset as used in the CEBRA paper. We find SIMPL works well (see the revised manuscript for a more detailed discussion of the results). In the results we have now added a new section and figure (Fig. 
7) detailing our findings, briefly summarised as follows: \n We tested 3 versions of SIMPL on the hand-reaching data. \n- SIMPL2D(position): The latent is initialised with the monkey's x- and y-hand position. \n- SIMPL2D(velocity): The latent is initialised with the monkey's $v_x$- and $v_y$-hand velocity.\n- SIMPL4D(position&velocity): A 4D latent is initialised with $x$, $y$, $v_x$ and $v_y$. \n\nIn all three models SIMPL optimises the latent variable (the test-log-likelihood improves), uncovering a smooth latent variable correlated to (but substantially different from) the behavioural initialisations. Corresponding tuning curves revealed neurons with \"hand-position-like\" or \"hand-velocity-like\" selective receptive fields. The 4D version of SIMPL performed better than either 2D version, revealing disentangled latents with a higher overall log-likelihood than either of the 2D models. Our findings reveal:\n1. SIMPL can be applied to the types of non-hippocampal datasets commonly used in the LVM literature. \n2. Both position and velocity _separately_ do a good job of explaining the latent dynamics but position and velocity _combined_ do better. \n3. In the optimised latent space neurons have localised receptive fields reminiscent of place cells in the hippocampus.\n4. SIMPL can be extended beyond the 2-dimensional latent spaces initially tested. \n\n> **\"Settings where the behavioural recordings are missing or latent spaces not dominated by behaviour\"** \n\nWe have already shown in Fig. 4 that SIMPL still works well _without_ behaviour. The \"catch\", in such instances, is that you lose the identifiability guarantees that come when a relevant behavioural variable is used for initialisation. Because of this we acknowledge and agree with the reviewer that the most likely use-case for SIMPL will be to discover/refine tuning curves in neural data which is well characterised by a behavioural variable already available to the user. 
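To make this use-case concrete, here is a minimal 1-D sketch of the behaviour-initialised EM loop (illustrative only: SIMPL proper uses a Kalman smoother in the E-step, whereas the grid MAP decode plus Gaussian smoothing below is a crude stand-in, and all names and sizes here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth 1-D latent, P neurons with Gaussian tuning curves,
# and Poisson spike counts in bins of width dt.
T, P, dt = 2000, 20, 0.1
x_true = np.cumsum(rng.normal(0.0, 0.05, T))
centres = np.linspace(x_true.min(), x_true.max(), P)
rates = 10.0 * np.exp(-0.5 * ((x_true[:, None] - centres[None, :]) / 0.3) ** 2)
spikes = rng.poisson(rates * dt)                  # (T, P) spike counts
x_behaviour = x_true + rng.normal(0.0, 0.2, T)    # behavioural initialisation

grid = np.linspace(x_true.min() - 0.5, x_true.max() + 0.5, 100)

def m_step(x, spikes, bw=0.2):
    # KDE tuning curves: spike-weighted kernel density over occupancy density.
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2)  # (G, T)
    occupancy = np.maximum(w.sum(axis=1) * dt, 1e-9)             # (G,)
    return (w @ spikes) / occupancy[:, None]                     # (G, P), in Hz

def e_step(spikes, tc):
    # Per-bin Poisson log-likelihood over the grid, MAP decode, then a
    # Gaussian temporal smoothing as a crude stand-in for a Kalman smoother.
    ll = spikes @ np.log(np.maximum(tc, 1e-9) * dt).T            # (T, G)
    ll = ll - (tc.sum(axis=1) * dt)[None, :]
    x_map = grid[np.argmax(ll, axis=1)]
    k = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)
    return np.convolve(x_map, k / k.sum(), mode='same')

x_est = x_behaviour
for epoch in range(5):      # a few EM epochs
    tuning = m_step(x_est, spikes)
    x_est = e_step(spikes, tuning)
```

Each epoch re-fits the KDE tuning curves from the current latent estimate (M-step) and re-decodes the latent from the spikes under those tuning curves (E-step), starting from the behavioural trajectory rather than a random guess.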
\\n\\nAlthough this is, on the face of it, a limitation, it reflects a very conscious choice made to bypass a fundamental issue in latent variable modelling. Latent variable modelling is fundamentally hard and an entirely general purpose technique which is (i) totally assumption free and (ii) computationally cheap and hassle-free across all datasets simply doesn't (perhaps never will) exist. We contest that many popular techniques have actually _under relied_ on behaviour which has resulted in methods which are overly complex (putting off non-theoreticians) or compute heavy. Our observation is that in a majority of neuroscience experiments the latent space _is_ closely related to a behaviour which is (or could be easily) measured concurrently - in such a majority of cases SIMPL strikes a good balance between simplicity, interpretability and computational efficiency. We will add a discussion of this point in the revised manuscript.\"}", "{\"summary\": \"The authors propose a new latent variable model for neuroscience that uses kernel density estimation to learn tuning curves from an unobserved latent variable for population spiking data. The model is fit using a closed form EM algorithm and is scalable to large datasets. The authors validate the model on simulated and real neural data.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The validation on synthetic and real neural data is a nice addition. The simplicity of the approach has potentially some appeal to experimentalists who want to avoid some of the more bespoke or complex latent variable models for the neural spiking data.\", \"weaknesses\": \"The authors provide a very brief overview a very broad field of latent variable models used in neuroscience to motivate their approach. They claim existing methods do not scale well, or have other shortcomings compared to SIMPL. 
To robustly demonstrate that this model indeed scales better than most or all of these existing approaches while retaining accurate latent and neural tuning identification would require far more comparisons, discussion and evaluation.\n\nIn particular, the authors' model is motivated in a similar way to the GPLVM, which is discussed but not compared to. As the authors point out, GP-based models often do have issues with scalability. However, there have been many steps toward improving the scalability of these models in recent years -- note that 1 and 2 use inducing points, a well-known approach to improving scalability in GP models. These models, however, have the same 'tuning curve' interpretations that the authors posit SIMPL has, and so it would be important to directly compare to them for both scalability and neural tuning identification. \n\n1. \"Manifold GPLVMs for discovering non-Euclidean latent structure in neural data\" \n2. Learning interpretable continuous-time models of latent stochastic dynamical systems\n\nThere is also no mention of switching linear dynamical systems models, which often scale better than GP-based methods due to admitting an EM-based approach and utilizing forward-backward passing (see e.g. 3, 4, but there are many others). \n\n3. Bayesian learning and inference in recurrent switching linear dynamical systems.\n4. A general recurrent state space framework for modeling neural dynamics during decision-making.\n\nStill, there are others in this space that scale well and admit flexible non-linear latent characterizations. See for example 4 and 5, and of course LFADS, as the authors discuss. The authors also mention Pi-VAE (and there are of course now many other VAE-based models used in neuroscience), but they don't provide a principled comparison or discussion of these approaches. 
\\n\\n4 Inferring Latent Dynamics Underlying Neural Population Activity via Neural Differential Equations\\n5 Collapsed amortized variational inference for switching nonlinear dynamical systems.\\n\\nThe choice of CEBRA as the only benchmark is not well motivated. CEBRA is a fundamentally different model it uses behavioral information for contrastive learning, and it was designed to visualize a behaviorally-informed latent space. In this sense it isn't an unsupervised model used purely on spiking data (like SIMPL as well as the ones cited above, among others) and it does not permit a tuning curve interpretation -- which the authors emphasize is an important component of SIMPL and existing in many of these other LVMs. Comparison to the GPLVM, as well as switching LDS, or other nonlinear dynamical models would be far more appropriate then CEBRA. \\n\\nIn it's current state, it is unclear exactly how much better SIMPL scales than any of the existing approaches, and how it's latent identification or tuning identification may compare to these many state-of-the-art methods.\", \"questions\": \"None\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your prompt response.\\n\\n> **\\\"Regarding response for big-O....I highly recommend fixing it during the reviewing period\\\"**\\n\\nWe've gone ahead and corrected all of the minor formatting issues you mentioned (e.g. the misuse of big-O notation, citation style, British/American spelling and referencing inconsistencies). \\n\\n> **\\\"It would be nice if the authors compared with CEBRA result in the same figure (Figure 7 in the revision).\\\"**\\n\\nWe have added a panel to Fig. 7e showing the equivalent CEBRA embedding. This data was taken and adapted directly from the CEBRA paper. 
We think it is a fair comparison to the SIMPL data we are showing alongside it in panel e since: \n* Both SIMPL and CEBRA are trained with 4D latent spaces \n* Both SIMPL and CEBRA align and average the latent across active-trials for plotting (-100ms to 500ms from movement onset time) \n* Both SIMPL and CEBRA use behaviour (e.g. hand-position) to inform (SIMPL-->initialise, CEBRA-->contrastive loss labels) latent discovery.\nThis panel shows how SIMPL generates latent variables which are comparable to CEBRA (a more established technique). \n\n> **\"Regarding \u2018data not shown\u2019 in L466, It would be nice if you included it in the supplementary materials for thoroughness.\"**\n\nIt is now shown in the Appendix (Fig. 10).\n\n> **\"I believe the page limit is also applied to the revision.\"**\n\nWe had initially checked, and the ICLR instructions are ambiguous on whether the 10-page limit applies to revisions as opposed to just initial and camera-ready submissions - we apologise for running over! For now we have moved Fig. 7 (new motor-task dataset) to the Appendix, making it 10 pages again. In the camera-ready version this result will be moved back to the main text and Fig. 2 (the discrete-latent task, which in our opinion is less important) will be pushed to the Appendix.\"}", "{\"title\": \"Follow-up\", \"comment\": [\"As the discussion period is coming to a close we would like to ask whether the reviewer has had the time to consider our comments and, in light of these, reconsider their score. To summarise: the reviewer's major concern was that we had not benchmarked SIMPL against enough relevant existing methods. 
Largely we were in agreement with the reviewer and, as such, responded with:\", \"**Two new benchmarks** against more relevant LVM techniques mentioned by the reviewer: pi-VAE and GPLVM-with-inducing-points.\", \"These techniques work well on our dataset but still significantly underperform SIMPL in terms of both absolute final error and compute time.\", \"These have been added to Fig. 5.\", \"Both our new benchmarks (assuming a fixed number of inducing points for GPLVM) have linear time complexity, matching SIMPL. But our experiments suggest they are both slower by approximately a factor of 10x.\", \"A thorough **review of the literature** flagged by the reviewer.\", \"Substantial **discussion on what methods constitute relevant benchmarks** and why.\", \"Full details can be found in our original response above. We thank the reviewer again for their suggestions and ask that they let us know if they have any remaining major concerns.\"]}", "{\"title\": \"Missing citations for neural-behavioral modeling\", \"comment\": \"There are a bunch of methods for neural-behavioral modeling which are not cited in this paper. It would be good to include these references, as building latent variable models that exploit behavior is a very active field of study.\\n\\n- Sani, Omid G., et al. 2021: This work proposes a linear dynamics approach for modeling neural-behavioral data.\\n\\n- Hurwitz et al. 2021: This work proposes a sequential VAE for modeling neural-behavioral data.\\n\\n- Gondur et al. 2024: This work proposes a multi-modal Gaussian process variational autoencoder for neural-behavioral data.\\n\\n- Sani et al. 2024: This work proposes an RNN-based architecture for neural-behavioral data.\\n\\nRabia Gondur, Usama Bin Sikandar, Evan Schaffer, Mikio Christian Aoi, and Stephen L Keeley. Multi-modal Gaussian process variational autoencoders for neural and behavioral data. In International Conference on Learning Representations, 2024.\\n\\nSani, Omid G., et al. 
\\\"Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification.\\\" Nature Neuroscience 24.1 (2021): 140-149.\\n\\nCole Hurwitz, Akash Srivastava, Kai Xu, Justin Jude, Matthew Perich, Lee Miller, and Matthias Hennig. Targeted neural dynamical modeling. Advances in Neural Information Processing Systems, 34:29379\\u201329392, 2021.\\n\\nOmid G Sani, Hamidreza Abbaspourazad, Yan T Wong, Bijan Pesaran, and Maryam M Shanechi. Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification. Nature Neuroscience, 24(1):140\\u2013149, 2021.\"}", "{\"title\": \"Summary of the reviews and our response\", \"comment\": \"_Nb: PLEASE RE-READ this has been edited to reflect further discussion with reviewer gB66_\\n\\nBased on the reviews and subsequent follow-up discussion, we have submitted a revised manuscript which has undergone significant improvements. \\n\\n**To summarise _strengths_**, the reviewers appreciated the novelty of our paper stating that it is \\\"a fresh departure from current latent variable models relying on deep neural networks\\\" (6vN3) as well as its computation efficiency and scalability (Kcef, 5Qja, gB66) and that \\\"the simplicity of the approach has potentially some appeal to experimentalists who want to avoid more complex latent variable models\\\" (gB66). They noted that it reveals new scientific insights (6vN3) and that we made \\\"careful experiments on synthetic as well as real datasets\\\"(Kcef, gB66) as well as \\\"rigorous comparisons with existing methods\\\"(Kcef) . They also commented that the paper was well presented (5Qja, 6vN3), the figures were clear (5Qja) and that we \\\"acknowledged the limitation of the proposed method\\\" well (6vN3) \\n\\n**To summarise _weaknesses_**, there were two primary concerns. 
Firstly, three of the reviewers (5Qja, 6vN3, Kcef) wanted to see SIMPL tested on additional non-spatial datasets, all pointing towards the same macaque hand-reaching dataset used in previous LVM studies (we have now done this; consequently, the reviewers have raised their scores to accept/strong-accept status). Secondly, reviewer gB66's primary concern was that we had not made sufficient, nor sufficiently relevant, comparisons to alternative techniques (we have discussed this in depth, added three more methods to our benchmarks, rewritten the related work and added a technique-comparison table to the appendix). The reviewers also had other more minor concerns about whether SIMPL's performance depends strongly on the choice of hyperparameters (6vN3, we tested this and it did not), dataset size (Kcef, likewise), or continuity of the latent variable (6vN3, likewise). To a large extent, we are in agreement with the reviewers regarding all of these concerns. Consequently, we have run new benchmarks, performed new experiments/hyperparameter sweeps, run SIMPL on the macaque dataset and rewritten the related work section, all of which leave the manuscript in a much stronger position. Our most substantial changes are summarised in the following table: \n\n**Summary of most notable changes** \n\n| Addition | Comment | Relevant reviewers | \n|-------|-------|------|\n| **Macaque dataset analysis** | SIMPL was applied to a somatosensory macaque dataset and performed well, demonstrating its applicability to non-spatial and higher-dimensional latent spaces. | 6vN3, Kcef, 5Qja |\n| **Additional benchmarks** | We have now benchmarked SIMPL against three additional, more relevant methods (pi-VAE, GPLVM and GPDM, as well as the original CEBRA). In all cases SIMPL performs best and computes fastest. These comparisons cover the range of relevant LVM features (see extended discussion below), confirming SIMPL's superiority. 
| gB66, 5Qja |\n| **Hyperparameter sweep** | A new figure has been added confirming the robustness of SIMPL to its two main hyperparameters ($v$ and $\\sigma$). | 6vN3 | \n| **Dataset size sweep** | We ran SIMPL on datasets of smaller and smaller size (both neuron count and duration) to see how it performs in the low-data limit. | Kcef | \n| **Discontinuous latent experiment** | We ran a new experiment where the latent is non-continuous and confirmed SIMPL can accurately recapitulate it (despite having an explicit smoothness prior). | 6vN3 |\n| **Discussion of relevant alternatives** | Additional discussion of what models constitute relevant alternatives and why. | gB66 |\n| **Rewritten Related Work** | To better explain the field of LVMs and frame SIMPL in relation to alternative techniques, specifically clarifying a minimal set of unique features SIMPL and only SIMPL satisfies. | gB66 | \n| **Added missing citations** | All citations mentioned on this forum have now been added to the manuscript. | gB66, C. Hurwitz (public commenter) | \n\nAll new figures have been added to a revised manuscript in order that the reviewers can assess them. In this manuscript new text is shown in blue.\"}", "{\"title\": \"Thanks\", \"comment\": \"We'll take a look at these references and filter them into the manuscript for the camera-ready version.\"}", "{\"title\": \"Response: Sections rewritten and additional benchmark/comparisons to GPLVM added\", \"comment\": \"We have just uploaded a revised version of our manuscript.\\nIn response to the reviewers' comments we have adjusted the paper to (i) more tightly constrain the scope of SIMPL and (ii) more clearly describe, and compare to, its most relevant alternatives. We elaborate on these changes below: \\n\\n\\n1. We have **completely rewritten the Related Work** section to more comprehensively cite existing literature and frame SIMPL within the field of LVMs, clarifying its benefits with respect to alternatives. 
We have organised this section around the desiderata described above. We then give more focussed discussions on CEBRA, pi-VAE, and GPLVM (+ associated models); three good candidates for comparison to SIMPL in that they do not place restrictive linear assumptions on the tuning curves and can naturally exploit behaviour. \n\n _All_ the citations mentioned in this review forum have now been added to the manuscript. \n2. **Made edits to the Introduction, Discussion and Methods** with the intention of more clearly defining, indeed _shrinking_, the scope of SIMPL. The intention here is to make clear that existing techniques were developed in different contexts with different goals in mind (some, for example, were developed entirely outside of neuroscience). Thus SIMPL is not a be-all and end-all solution to LVMs but a specific one. In the Methods we added a sentence clarifying the benefits of our KDE M-step over neural network approaches in terms of interpretability.\n3. **Added a table to the appendix summarising _all_ the studies we have discussed** as well as some others. This table, containing 24 methods, clarifies each method's position with respect to our desiderata. We hope that this goes some way towards, as the reviewer suggested, \"appropriately depicting this large research space, and conveying to a reader how the features of SIMPL relate to the different features of the many latent variable models used in neuroscience\". We also hope this will help future readers comprehend the field of LVMs for neural data analysis and drive further development. \n4. **Additional comparisons to GPLVM-style methods**. Following the reviewer's suggestion to better compare SIMPL to GPLVM-based techniques, we have added an additional benchmark against GPDM (Wang, 2005), a variant of GPLVM which imposes smooth latent dynamics through a Gaussian process prior. 
GPDM does not come with an inducing-point variant, so we were restricted to running it on a subset (10,000 / 36,000) of data points. We found its performance to be comparable to GPLVM.\\n\\n Furthermore, following the suggestions of the reviewer, we investigated the impact of GPLVM's misspecified noise model by running it on a control dataset generated with the same grid cell tuning curves but with a Gaussian noise model instead of a Poisson one. For the Gaussian data, as with our original Poisson data, the tuning curves specified the means of the observations and we set the standard deviation to a fixed value of 0.1. The results are shown in a new figure added to the Appendix. While GPLVM performed slightly better on the control dataset (as expected), the improvement remained small compared to the difference between GPLVM and SIMPL. Other differences between SIMPL and GPLVM (e.g. the inducing-point approximation used for GPLVM, optimisation issues, or other forms of misspecification/model differences) must be more important. \\n\\n\\nOnce again we are very grateful for the reviewer's continued engagement and insightful comments which have led to meaningful improvements in our manuscript. We have tried our best to handle all of their concerns and will remain available for any further discussion or clarification which might be required.\"}", "{\"title\": \"Response: New analysis on a non-spatial dataset and sweep testing data-size requirements\", \"comment\": \"We thank the reviewer for the time taken to thoroughly assess our paper. In response we have run new analyses as well as hyperparameter sweeps and added them to the manuscript. As a result we believe the paper is now in a much stronger position to be accepted. Our point-by-point response is as follows:\\n\\n> **\\\"SIMPL has only two hyperparameters. Minimal hyperparameters make it practical for experimentalists.\\\"**\\n\\nFurthermore, we have now added a new figure to the Appendix (Fig.
8) where we sweep across these two parameters and show the model performs well across a large area of parameter space.\\n\\n> **\\\"Rigorous comparison with existing methods (CEBRA). It outperforms CEBRA (whose latent embedding was noisier and had larger final error) and is over 30× faster.\\\"**\\n\\nWe have also added a comparison to GPLVM (an equivalent but Gaussian-process-based technique) and Pi-VAE as suggested by reviewer gB66. \\n\\n> **\\\"Limited Evaluation on Non-Spatial Tasks...[it] would benefit from evaluation on other tasks, such as the ones tested in the CEBRA paper\\\"**\\n\\nFollowing this review we have now tested SIMPL on one of the Neural Latents Benchmark datasets (specifically the [Area2_Bump data](https://neurallatents.github.io/datasets) from somatosensory cortex for a macaque doing a centre-out reaching task, collected by Chowdhury and Miller), the same dataset as used in the CEBRA paper. We find SIMPL works well (see the revised manuscript for a more detailed discussion of the results). In the results we have now added a new section and figure (Fig. 7) detailing our findings, briefly summarised as follows: \\nWe tested 3 versions of SIMPL on the hand-reaching data. \\n- SIMPL2D(position): The latent is initialised with the monkey's x- and y-hand position. \\n- SIMPL2D(velocity): The latent is initialised with the monkey's $v_x$- and $v_y$-hand velocity.\\n- SIMPL4D(position&velocity): A 4D latent is initialised with $x$, $y$, $v_x$ and $v_y$. \\nIn all three models SIMPL optimises the latent variable (the test-log-likelihood improves), uncovering a smooth latent variable correlated to (but substantially different from) the behavioural initialisations. Corresponding tuning curves revealed neurons with \\\"hand-position-like\\\" or \\\"hand-velocity-like\\\" selective receptive fields. The 4D version of SIMPL performed better than either 2D version, revealing disentangled latents with a higher overall log-likelihood than either of the 2D models.
Our findings reveal:\\n1. SIMPL can be applied to the types of non-hippocampal datasets commonly used in the LVM literature. \\n2. Both position and velocity _separately_ do a good job explaining the latent dynamics but position and velocity _combined_ is better. \\n3. In the optimised latent space neurons have localised receptive fields reminiscent of place cells in the hippocampus.\\n4. SIMPL can be extended beyond the 2-dimensional latent spaces initially tested. \\n\\n> **\\\"Scalability Concerns for High-Dimensional Latents...What alternatives might be suitable when dealing with truly high-dimensional data?\\\"**\\n\\nIt is true that we remain cautious about SIMPL's performance in very high-dimensional latent spaces. This is primarily because, as a function approximation technique, KDE suffers from the \\\"curse of dimensionality\\\". We believe that neural-network based approaches might have the potential to perform better in such high-dimensional scenarios, a point we made in the original submission but will now further clarify. However, these models are harder to optimize, often requiring tedious hyperparameter tuning, and may put off prospective scientists. \\n\\n> **\\\"How does the method perform with smaller neural populations?...How much data is required to optimize SIMPL?\\\"**\\n\\nWe thank the reviewer for bringing this important point to our attention. These are important questions. We repeated our synthetic grid cell experiment using decreasing amounts of data (fewer neurons and shorter duration); the results are shown in a new subpanel of Fig. 3 (panel e). In summary we find that the size of our original dataset (225 neurons, 60 minutes) was much larger than actually required. SIMPL still performs well (and runs _much_ faster) with only 50 neurons and 10 minutes of data. Of the two, we found that the number of neurons is more important than the duration, with performance dropping off sharply when $\\\\leq$ 20 neurons are used.
This is easily achieved by most modern neural datasets, and the performance of SIMPL in lower-data regimes is empirically confirmed in our new analysis on a somatosensory dataset which has only 65 neurons and 37 minutes.\"}", "{\"title\": \"Response: Additional benchmarks have been added and discussion regarding which models are appropriate benchmarks\", \"comment\": \"We thank the reviewer for their very thorough review and for pointing us to this additional literature on latent variable modelling. We will cite these works in the camera-ready version of the paper. We provide below some important clarifications regarding which methods constitute relevant alternatives to SIMPL, as well as describing additional benchmarks we have run in response to the reviewer's valid concerns.\\n\\nFrom a modelling standpoint, SIMPL has four key properties which we view as desiderata for comparable techniques:\\n\\n1. Complex, non-linear tuning curves \\n2. Time dependence of the latent variable.\\n3. Poisson emission probabilities \\n4. Identifiability from behaviour\\n\\nBy identifiability we mean whether there are any reassurances (theoretical or empirical) that, given behaviour, the model can recover the true latent up to some affine transformation. Many models simply do not admit any way to consume behavioural information. \\n\\nWe see desideratum 1 as non-negotiable since we are primarily focused on modelling cells with complex tuning curves (e.g. grid cells). Any model with a substantially restrictive intensity function (specifically linear-type, of the form $y_t \\\\sim \\\\textrm{NoiseModel}(f(\\\\mathbf{M}\\\\mathbf{z}_t+\\\\mathbf{c}))$ where $f$ is a simple function e.g. identity, softplus, exponential etc.) can never interpretably account for the sorts of datasets (e.g. grid cells) we consider here. With that in mind we carefully checked all the references that the reviewer pointed us to.
None of them satisfy all these desiderata and, in particular, five of them (2, 3, 4, 5 and 7 as numbered in the table below), including LFADS, have linear-type tuning curves and, as such, we consider them mis-specified. Unless we have misunderstood, or the reviewer has a strong counter-reason, these methods simply could not model the types of tuning curves we are interested in optimising and therefore we do not see them as valid comparisons. We will make this point much clearer in an updated manuscript.\\n\\nThe following table summarises all papers mentioned by the reviewer and their status with respect to our four desiderata:\\n\\n| Paper | Complex tuning curves | Time dependence | Poisson emissions | Identifiability | \\n|-------|-------|----|-----|----|\\n|1. Manifold GPLVMs for discovering non-Euclidean latent structure in neural data|Y|Y|N|Y|\\n|2. Learning interpretable continuous-time models of latent stochastic dynamical systems |N|Y|N|Y|\\n|3. Bayesian learning and inference in recurrent switching linear dynamical systems.|N|Y|N|Y|\\n|4. A general recurrent state space framework for modeling neural dynamics during decision-making.|N|Y|Y|Y|\\n|5. LFADS|N|Y|Y|Y|\\n|6. Pi-VAE|Y|N|Y|Y|\\n|7. Inferring Latent Dynamics Underlying Neural Population Activity via Neural Differential Equations |N|Y|Y|Y|\\n|8. Collapsed amortized variational inference for switching nonlinear dynamical systems.|Y|Y|N (strictly categorical) |N|\\n| 9. GPLVM | Y | Y | N | Y |\\n\\nTo be clear, the fact that none of these references satisfy all four desiderata does not mean that all these methods aren't useful in cases where SIMPL could be used; however, it means that we believe, independent of explicit comparisons, that SIMPL still represents a valuable contribution to the field. \\n\\nMore broadly, the closest alternative to SIMPL is possibly Poisson-GPLVM (Wu et al. 2017). However, unfortunately, the current P-GPLVM algorithm doesn't make use of inducing points, resulting in cubic complexity.
We tried running this on 1000 datapoints and got a compute time of ~2 mins, already exceeding the runtime of SIMPL on _all_ 36000 datapoints by a factor of ~3. We would be happy to see future work directed at improving the scalability of P-GPLVM; however, in this work, we have taken a different approach by defining a model that bypasses GPs altogether. \\n\\nNonetheless, we take seriously the reviewer's suggestion that we need to improve our comparisons of SIMPL to alternatives. For this reason we have added comparisons to two other methods which only lack one of the desiderata listed above: Pi-VAE, which imposes _time independence_ on the latent (i.e. fails only desideratum 2), and GPLVM, which assumes _Gaussian emissions_ (desideratum 3). Both come with well-maintained code bases. The GPLVM algorithm can exploit inducing points to make it scalable. Results are shown in the updated version of the paper. In summary:\\n\\n(continued in next comment...)\"}", "{\"title\": \"Response (part 1)\", \"comment\": \"### Response summary (rewrite incoming)\\n\\nWe thank the reviewer for their response and continued commitment to this paper. Their engagement has immensely helped position SIMPL within the landscape of neural-behavioural data analysis methods. We apologise for not having updated the introduction and related work section sooner; now that the reviewer has agreed that framing the background around our desiderata arguments clarifies the positioning of SIMPL, **we will work immediately towards rewriting the relevant sections in a revised version of our manuscript so that the reviewer will have time to assess them before the end of the discussion period.** Please bear with us. \\n\\n**Regarding SIMPL vs. GPLVM** we will add a paragraph comparing GPLVM and SIMPL more precisely, as requested. However, we don't believe that GPLVM is so much closer to SIMPL than alternatives that the whole paper needs rewriting to exclusively frame SIMPL as an improvement on GPLVM.
For instance, the GP prior on the latent dynamics significantly differs from the Markovian one of SIMPL. Under that specific light, models like PfLDS are closer to SIMPL than GPLVM (but they have other differences, as previously discussed). SIMPL is a _new_ neural data analysis method, which makes its own modelling choices, with their own trade-offs, and should be considered as a new technique in its own right.\\n\\n\\nOn a related note, we thank the reviewer for proposing new ablations to understand the performance gains between SIMPL and GPLVM; we think running them would constitute interesting avenues for future work. However, in light of our last paragraph, we do not believe that these experiments are essential for (or would fit into the body of) a paper which does not frame itself as a direct variant of GPLVM. As a reminder, here are the major benefits of SIMPL as it stands:\\n\\n1. We still maintain **SIMPL is the only technique satisfying all four desiderata** in a scalable way (see point-by-point below).\\n2. We demonstrate **SIMPL outperforms CEBRA, pi-VAE and, most importantly, GPLVM** on a non-trivial synthetic dataset. SIMPL returned tuning curves which matched ground truth much more closely than the alternatives and had a latent error of less than half that of the next best competing model.\\n3. We show that **SIMPL is very fast**; we haven't identified any comparable technique that runs within even a tenth of SIMPL's speed. \\n4. We show **SIMPL already gives meaningful scientific insights** re. place cell analysis, Fig. 6. These findings are possible due to SIMPL's core features, including its identifiability and scalability. \\n5. We show **SIMPL can be applied to spatial and motor-task datasets** (most existing techniques only show applicability to one domain e.g. Hurwitz 2021), beneficial to improve adoption of the technique across neuroscience subfields.\\n6.
**SIMPL is conceptually simpler than alternative techniques** enhancing its appeal to experimentalists (as pointed out by the reviewer). \\n\\nThese are major, not incremental, improvements, with the potential to greatly enhance the efficiency of experimental analysis pipelines; we believe they are sufficient to justify the publication of our submission.\\n\\nWe genuinely appreciate the reviewer's sincere dedication to advancing good science. As shown through the new experiments and benchmarks we've already conducted in response to the reviews, we are open to putting in substantial effort to improve the manuscript.\"}", "{\"summary\": \"The paper introduces SIMPL, an EM-style algorithm for refining latent variables and tuning curves from spiking neural data. The key idea is using behavior as initialization and combining kernel density estimation with Kalman smoothing in an iterative optimization framework. The authors validate their approach on synthetic datasets and real hippocampal recordings, showing improvements in both computational efficiency and biological interpretability compared to CEBRA (a recent popular deep learning approach for learning latent embeddings from neural and behavioural data)\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is a fresh departure from current latent variable models relying on deep neural networks, with a number of practical advantages (e.g., fast optimization, minimal hyperparameters (only 2), simple implementation based on standard components (Kernel Density Estimation and Kalman smoothing))\", \"The paper reveals scientific insights through the use of the proposed method (finer structure in place field representations, new interpretation of place field size distribution, relationship between behavioral uncertainty and neural encoding)\", \"The authors do a good job presenting the approach clearly and acknowledging the limitations of the proposed method.\"], 
\"weaknesses\": \"The proposed approach relies on strong assumptions that seem to limit the application of the method for more complex cases beyond the datasets used here (e.g., settings where behavioral recordings are missing, higher dimensional latent states, latent spaces not dominated by behavior, non-smooth latent trajectories). It would be great to demonstrate the applicability of the approach on commonly used datasets in the current literature on LVMs in neuroscience (e.g., Neural Latents Benchmark).\", \"questions\": [\"The authors mention improved identifiability of the proposed approach. Is that solely because of the behavioral initialization?\", \"The authors say: \\\"we see SIMPL as a specific instance of a broader class of latent optimization algorithms, ...\\\" and then move on to the idea of replacing KDE with neural networks. Can the authors elaborate on that?\", \"Can the authors provide the intuition for choosing the kernel bandwidth in the KDE step? This seems critical for the method's performance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors for dealing with my concerns. I will read revisions and other reviews to ask further questions and adjust my rating.\\n\\n> \\\"The authors compare SIMPL with CEBRA, a machine learning-based method running on CPUs. If they provide a runtime comparison showing that SIMPL on a CPU is faster than CEBRA on a GPU, it would further support their efficiency claims.\\\"\\n\\n\\nThank you for the clarification. The comment was not an important concern so I do not think the authors must use GPU to measure time right now.\\n\\n\\n\\n> The author claims general but limited to specific regions of brain place cells and grid cells. I highly recommend authors compare with various datasets such as the macaque dataset and the mouse visual cortex datasets used in Schneider et al. 
(2023).\\\"\\n\\nIt would be nice if the authors compared with CEBRA result in the same figure (Figure 7 in the revision).\\n\\n\\n\\n**Regarding response for big-O, description of Figure 6c, thread \\u201c...(response continued)\\u201d**\\n\\nI highly recommend fixing it during the reviewing period as one of the advantages of ICLR is the authors can freely revise the manuscript. Regarding \\u2018data not shown\\u2019 in L466, It would be nice if you included it in the supplementary materials for thoroughness.\\n\\n**Others**\\n\\nI believe the page limit is also applied to the revision. Could you revise the manuscript so that it fits on 10 pages?\"}", "{\"title\": \"...(response continued)\", \"comment\": \"> **\\\"How much of the performance inferiority of CEBRA VS SIMPL comes from not carefully searching through the hyperparameter space? Is the comparison fair? Would increasing the iterations improve the performance of CEBRA?\\\"**\\n\\nWe don't think so. The relative underperformance of CEBRA is not because it hasn't converged but rather because it does not assume any dynamics on the underlying latent variable which would \\\"smooth\\\" (or \\\"denoise\\\") it. We made this comment in the original submission but will make it clearer in the camera-ready version. In CEBRA each spike bin is treated independently and is thus subject to its own irreducible noise. Conversely, SIMPL (and other dynamical LVMs) assumes the latent follows a linear dynamical systems thus employs Kalman smoothing which denoises the latent substantially. \\n\\n> **\\\"How could the author disentangle the possibility that this observation is a result of artifacts by their method versus reflecting true representation feature of the hippocampus?\\\"**\\n\\nThis is a an important question which we spent time addressing in the original submission (see paragraph 4 of the section on hippocampal dataset). 
In summary, we ran SIMPL on a control dataset of spikes sampled from behaviour and behaviour-fitted tuning curves. Results are shown in a grey shade on Fig. 6. For these spikes, behaviour and behaviour-fitted tuning curves should be stable (since they are, by definition, exactly the generative model) and any changes reflect artifacts of the SIMPL algorithm. We observe no significant changes to the place fields for the control spikes, suggesting the changes observed in the real neural data are not artifacts.\\n\\nLastly, we thank the reviewer again for their thorough review and would appreciate it if they would reconsider their score in light of our new additions to the manuscript which, we believe, have strengthened it substantially.\"}", "{\"title\": \"Follow-up\", \"comment\": \"As the discussion period is coming to a close we would like to ask whether the reviewer has had the time to consider our comments and, in light of these, reconsider their score. To summarise: along with a point-by-point discussion of each of the reviewer's questions, we believe we have handled their most important concerns by making the following changes to the paper:\\n\\n* Adding a new analysis/figure where we **analyse a non-spatial motor-task** dataset.\\n* Adding a new subpanel to Fig. 3 where we **test performance against dataset size**, and show SIMPL works well with significantly smaller datasets. \\n\\nFull details can be found in our original response above. Please would the reviewer let us know if they have any remaining major concerns.\"}", "{\"metareview\": \"This paper introduces a latent variable model (LVM) for neural and behavioural representations. The model is able to learn low-dimensional latent variables from high-dimensional neural activity.
The learning algorithm is based on the Expectation-Maximization framework and it is more scalable than competitive LVMs and can be applied to large datasets.\\n\\nThe reviewers appreciate that this method is simple (only two hyperparameters), yet intuitive and effective. In particular, scalability gains are high, with results showing a 30x boost over comparable methods without losing accuracy. Furthermore, the reviewers find it interesting that the paper reveals scientific insights, such as the finer structure in place field representations and the relationship between behavioral uncertainty and neural encoding. \\n\\nThere has been extensive discussion about adding clarifications, and the revised version has largely addressed those. With the addition of new experiments during the rebuttal period, the overall experimental evaluation section is stronger. \\n\\nHowever, there is one concern remaining: the comparisons are done in a simulated setting (figure 4) rather than showing the comparison in the real-world setting. This doesn\\u2019t seem to be a major concern about the convincingness of the method per se, but it is rather a concern about whether these evaluations support claims like _\\u201cwhen it is applied to a large rodent hippocampal dataset, SIMPL efficiently finds a modified latent space with smaller, more numerous, \\u2026\\u201d_. The wording of these claims should be easy to modify appropriately for the camera-ready version. \\n\\nOverall, I recommend the paper for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"There has been satisfactory discussion where all reviewers and authors were engaged. A lot of the discussion focused on clarifications and suggestions for improving the text, in particular reviewer `5Qja` gave many useful suggestions which the authors eventually incorporated in the manuscript.\\n\\nBesides, most of the discussion focused on experimental evaluation. 
The reviewers requested comparison on non-spatial datasets, and the authors have presented new experiments which were appreciated by the reviewers. The authors also offered new ablation studies, as requested by reviewer `6vN3` (for hyperparameters) and `KceF` (for dataset size).\\n\\nThere has also been discussion about the placement of this work within the literature and generally how impactful the method is expected to be in the real world, given the evaluation section. On this aspect, the reviewers partially align, and in private discussions it is acknowledged that evaluation of more baselines on the real setting would help; however, not all reviewers judge this omission equally in terms of importance. \\n\\nOverall, even if there is not full consensus, it seems that the reviewers are _generally_ positive about this paper.\"}", "{\"title\": \"...(response continued)\", \"comment\": \">**\\\"Non-smooth latent trajectories\\\"**\\n\\nWe agree with the reviewer that it is important to test SIMPL when the latent variable is non-smooth. Currently the amount of smoothness in the latent trajectory is controllable via the velocity hyperparameter and can be set arbitrarily close to zero (at which point the decoding procedure amounts to pure maximum likelihood estimation, capable of modelling non-smooth latents). \\n\\nTo further alleviate concerns over whether SIMPL can model non-smooth latents, we generated a synthetic \\\"replay\\\" dataset. In this dataset the latent and behaviour are identical except for regular, brief instances where the latent discontinuously jumps to a new location in the environment then jumps back. With the same parameters as before, SIMPL is able to accurately recover the latent, correctly recapitulating the discontinuous jumps. In summary, although SIMPL is biased towards smooth latents this is just a prior and, with reasonable parameter settings, non-continuous latents can be modelled as well. These new results are summarised in a new figure (Fig.
9) in the appendix. \\n\\n> **\\\"The authors mention improved identifiability of the proposed approach. Is that solely because of the behavioral initialization?\\\"** \\n\\nYes, we find that the initialization near behaviour results in convergence on a latent space which is the same as the ground truth (at least, in our synthetic experiments, which are the only examples where true ground truth is knowable). Results in favour of this hypothesis include the fact that when behavioural initialisation is removed SIMPL learns a warped/fragmented latent space, as well as the fact that CEBRA, which was not initialised with behaviour, did not find such a good latent and its grid fields appear slightly warped relative to the ground truth. \\n\\n> **\\\"The authors say: \\\"we see SIMPL as a specific instance of a broader class of latent optimization algorithms, ...\\\" and then move on to the idea of replacing KDE with neural networks. Can the authors elaborate on that?\\\"**\\n\\nThis point makes clear that there could, in theory, be many ways of fitting tuning curves. We have chosen KDE because it is simple, interpretable and fast, but a neural network (e.g. trained to output firing rates given a latent) could be used instead and would come with its own advantages and disadvantages which we have not studied here. We have not tested this but it is an interesting avenue for future work. We will rewrite this section to make this clearer.\\n\\n> **\\\"Can the authors provide the intuition for choosing the kernel bandwidth in the KDE step? This seems critical for the method's performance.\\\"**\\n\\nThere are some general heuristics for choosing the kernel bandwidth, for example see Silverman's rule of thumb. Roughly, the goal is to choose the smallest bandwidth allowed by the data without over- or under-fitting: too small and individual spikes will be resolved, too big and high-frequency structure in the receptive fields will be smoothed.
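To make that rule of thumb concrete, here is a minimal sketch of Silverman's bandwidth heuristic for a 1-D Gaussian KDE. This is the generic textbook heuristic, included only as an illustration; the function name is ours and it is not necessarily the exact bandwidth selection used in SIMPL:

```python
import numpy as np

def silverman_bandwidth(samples):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE:
    h = 0.9 * min(std, IQR / 1.34) * n^(-1/5).
    Larger datasets permit a smaller bandwidth (sharper tuning curves)."""
    x = np.asarray(samples, dtype=float)
    n = x.size
    std = x.std(ddof=1)                             # sample standard deviation
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # interquartile range
    return 0.9 * min(std, iqr / 1.34) * n ** (-1 / 5)

# Hypothetical example: 1 hour of latent positions (in metres) sampled at 10 Hz
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=36_000)
h = silverman_bandwidth(positions)  # on the order of centimetres for this dataset
```

The `min(std, IQR / 1.34)` term makes the estimate robust to outliers, and the `n^(-1/5)` factor shrinks the bandwidth as the amount of data grows, matching the intuition above that more data licenses a smaller kernel.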
In practice SIMPL is not particularly sensitive to this parameter, as shown in our new hyperparameter sweep performed in response to the reviewer's concerns (see appendix Fig. 8) where we find that performance is good across an order of magnitude of kernel bandwidths (between 0.5 cm and 5 cm) and an order of magnitude of speed priors (between 0.1 m s$^{-1}$ and 1 m s$^{-1}$). These ranges will, of course, be dataset dependent, but they do suggest that the sensitivity to these hyperparameters is not very sharp. In summary, our new hyperparameter sweep shows SIMPL has only soft dependence on its hyperparameters. \\n\\nOn the basis of our adjustments and additional analysis on the Neural Latents Benchmark dataset, we believe that the paper is now in a much stronger position to be accepted. Please would the reviewer let us know if they have any further concerns or questions which might prevent them from recommending our paper for acceptance.\"}", "{\"summary\": \"The paper introduces a new method they call SIMPL, an EM-style algorithm, aiming to recover low-dimensional, time-evolving latent variables from high-dimensional neural activity in large neural datasets. Their approach fits tuning curves to observed behaviour and iteratively refines these through a two-step process (EM-like). The originality of SIMPL lies in its novel combination of expectation-maximization (EM) techniques with behavior-driven initialization, providing a straightforward yet effective approach to refine neural representations. Its main advantage is that it is a scalable and fast approach compared to existing popular methods like CEBRA.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"SIMPL has only two hyperparameters.
Minimal hyperparameters make it practical for experimentalists.\\n\\nRigorous comparison with existing methods (CEBRA). It outperforms CEBRA (whose latent embedding was noisier and had larger final error) and is over 30× faster.\\n\\nThe capability to achieve results over 30 times faster than comparable methods while maintaining accuracy makes SIMPL particularly attractive for large-scale neuroscience research.\\n\\nCareful experiments on synthetic as well as real data.\", \"weaknesses\": \"Limited Evaluation on Non-Spatial Tasks: While the paper demonstrates impressive results in the domain of spatial navigation, it would benefit from evaluation on other tasks, such as the ones tested in the CEBRA paper, so that the two methods can be directly compared in more diverse settings.\\n\\nScalability Concerns for High-Dimensional Latents: The authors mention that kernel density estimation (KDE) may not scale well to high-dimensional latent spaces. This potential limitation needs more elaboration. What alternatives might be suitable when dealing with truly high-dimensional data?\", \"questions\": \"How does the method perform with smaller neural populations? Can SIMPL perform well when there are limited neurons recorded? Some downsampling analysis on neural data can be helpful to check if the method is sensitive to neuron number.\\n\\nHow much data is required to optimize SIMPL? How does the method deal with short recordings? For example, would a short half-hour recording be sufficient to fit the model?\\n\\nThe author mentioned that they trained CEBRA on the synthetic grid cell data using out-of-the-box hyperparameters, training for the default 10000 iterations. How much of the performance inferiority of CEBRA vs SIMPL comes from not carefully searching through the hyperparameter space? Is the comparison fair?
Would increasing the iterations improve the performance of CEBRA?\\n\\n\\nThe author mentioned that SIMPL finds a modified latent space with smaller, more numerous, and more uniformly-sized place fields, suggesting the brain may encode space with greater resolution than previously thought. How could the author disentangle the possibility that this observation is a result of artifacts of their method versus reflecting a true representational feature of the hippocampus?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for their detailed response and additional experiments. I have raised my score in light of the new experiments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"I commend the authors on their thorough follow-up and additional evaluation. The response helps clarify how SIMPL relates to LVMs in neuroscience, but I unfortunately still believe that the current manuscript does not appropriately depict this large research space, nor does it clearly convey to a reader how the different features of SIMPL relate to the different features of the many latent variable models used in neuroscience.\\n\\nSpecifically outlining the four desiderata is helpful in honing the conversation of relevant LVMs in neuroscience. Another relevant model might be VIND* - a VAE-type model with a dynamical latent space and Poisson observations. This might not meet the authors' criteria for an identifiable latent space, but there are indeed latent plots with nice scientific interpretation in their manuscript.\\n*Hernandez et al.
Nonlinear Evolution via Spatially-Dependent Linear Dynamics for Electrophysiology and Calcium Data \\n\\nNote that Hurwitz et al. and Gondur et al. - highlighted in the public comment - meet the four desiderata, but these models, like CEBRA, use behavioral data as an observation, and are not purely based on spiking like SIMPL (save a behaviorally-informed initialization) and the others discussed.\\n\\nHowever, based on the current discussion and author clarifications, I don't believe VIND, nor these other deep LVMs, are the best comparison, and I think the 4 desiderata are missing something crucial. It seems to me that an advantage of SIMPL is _interpretable_ tuning curves in addition to identifiable latent spaces. I agree with the authors that the GPLVM (with inducing points but with Gaussian emissions, or without inducing points but with Poisson emissions) is the closest class of models with very similar features to SIMPL. These approaches use identifiable smooth functions as the mapping from the latent space to observations, both achieved through kernel-based metrics. The authors have described both classes of nonlinear models (those with NNs as well as those with GP or kernel-based functions) as having 'complex' tuning curves, so as to separate them from linear approaches, but all of the models that use neural networks have the classic black-box problem and thus don't have this nice tuning curve interpretation (e.g. piVAE, CEBRA, VIND, MMGPVAE (Gondur et al.), TNDM (Hurwitz et al.)). The authors make a point similar to this when they talk about identifiability. \\n\\nIt seems to me that at its core SIMPL is an alternative to the GPLVM - but even with the additional GPLVM comparison and evaluation on a simple synthetic example the manuscript doesn't make this clear. It is clearly favorable from a scalability point of view, which is nice, and having Poisson observations is another nice addition that could present some advantages over the Jensen et al. GPLVM variant. 
However, the paper should demonstrate this and focus specifically on the advantages and disadvantages of this approach relative to its most closely related model(s) in neuroscience (that is, primarily the GPLVM - though a comparison to a linear LVM, say PfLDS or LFADS, and a deep LVM, say PiVAE, could be helpful but in my opinion is not necessary).\", \"for_example\": \"the authors claim Poisson observations are a key desired feature of the model, and highlight it above as the important distinguishing feature from the Jensen GPLVM. If the addition of Poisson observations is indeed a core contribution above the competing approach, plots reinforcing the importance of this point would be crucial. Does SIMPL perform better than the GPLVM because of the Poisson observations, as the authors speculate? It might be nice to demonstrate this not just in a synthetic setting where Poisson observations are used to generate the data, but also in a real-world dataset. One could measure held-out spike prediction from the tuning curves comparing a Gaussian-noise SIMPL to the GPLVM, to see if they match, and then whether they are improved by adding Poisson observations to SIMPL, for example.\\n\\nAlternatively, the authors could focus on the scalability of their model while achieving similar performance in both tuning curve and latent identification to the (Poisson and Gaussian) GPLVM. They could further evaluate real-world latent identification: e.g. does the latent space of SIMPL match position in place-cell or grid-cell data, or reaching position in motor data, better than the GPLVM? And because scalability is a huge advantage of SIMPL, more thorough plots demonstrating it would go a long way in strengthening the paper. E.g. in what limits is inference of SIMPL feasible compared to an inducing-point GPLVM, and how does this speed-accuracy tradeoff change with the amount of data and number of inducing points? 
\\n\\nIn short, I believe this paper would greatly benefit from a re-write with these points in mind and a more thorough comparison to the GPLVM in a simulated and real setting.\"}", "{\"title\": \"Response (part 2)\", \"comment\": \"### Point-by-point\\nIn the meantime, we respond to a few points individually:\\n\\n> **\\\"Note that Hurwitz et al and Gondur et al - highlighted in the public comment - meet the four desiderata\\\"** \\n\\nWe checked, and neither model meets desideratum 1, which we consider \\\"non-negotiable,\\\" as they both rely on exponential-linear-type tuning curves for their generative model. \\n\\n\\n> **\\\"neural networks have the classic black-box problem and thus don't have tuning curve interpretation (e.g. piVAE\\\"** \\n\\nJust to clarify, most NN approaches like pi-VAE have a tuning curve component (in the sense that they define a firing rate for all possible latent values $r_{it} = f_{\\\\theta}(\\\\mathbf{z}_t)_i$). We agree with the reviewer that this approach is less interpretable than the KDE approach taken by SIMPL.\\n\\n> **\\\"Does SIMPL perform better than the GPLVM because of the Poisson observations\\\"**\\n\\nIt is possible that this is why. It is also possible that GPLVM suffers from inaccuracies due to using sparse inducing points. Or it has to do with the intricacies of the inference procedures both methods employ. We are open to performing the Poisson analysis yet hesitant because this would require a substantial rewrite of the SIMPL codebase (current likelihood functions have the Poisson-ness baked in). From a performance-only perspective it is also a moot point: SIMPL _does_ perform better as shown in Fig. 5 and we would be happy to see future work on figuring out why. If the reviewer would like, we can sweep the number of inducing points (up to our own compute limits) to see if GPLVM benefits. 
\\n\\n> **\\\"does the latent space of SIMPL match position in place-cell or grid-cell data, or reaching position in motor data better than the GPLVM?\\\"\\\"**\\n\\nWe are open to applying GPLVM to the hippocampal dataset and including these results in the camera-ready. However, it is not immediately clear how such a comparison would inform which of the techniques is \\\"better\\\". Evaluating an LVM by how similar it's latent is to behaviour is objectively _incorrect_ (though it can be scientifically revealing for other reasons) as it makes the assumption that behaviour == the latent. For this reason we focus on our sythetic dataset where ground truth is known and thus comparisons can be made concrete using the L2-error between latent and ground truth. Although previous studies, such as Fig. 3 in the P-GPLVM paper, have used correlation with behaviour as their benchmark metric, we find this approach problematic. Nonetheless, it could still be interesting the check if GPLVM finds the same changes in the place fields as SIMPL.\"}", "{\"title\": \"Follow-up\", \"comment\": [\"As the discussion period is coming to a close we would like to ask if the reviewer has had the time to consider our comments and, in light of these, reconsider their score. To summarise; along with a point-by-point discussion we believe we have handled their most important concerns by:\", \"Adding a new analysis/figure where we **analyse a non-spatial motor-task** dataset.\", \"Adding a new experiment/figure where we show **SIMPL can account for discontinuous latent trajectories**.\", \"Performing a **sweep across the kernel bandwidth and velocity prior hyperparameters** showing SIMPL performance is _not_ sharply dependent on their values.\", \"Full details can be found in our original response above. Please would the reviewer let us know if they have any remaining major concerns.\"]}" ] }
9juyeCqL0u
Causal Order: The Key to Leveraging Imperfect Experts in Causal Inference
[ "Aniket Vashishtha", "Abbavaram Gowtham Reddy", "Abhinav Kumar", "Saketh Bachu", "Vineeth N. Balasubramanian", "Amit Sharma" ]
Large Language Models (LLMs) have recently been used as experts to infer causal graphs, often by repeatedly applying a pairwise prompt that asks about the causal relationship of each variable pair. However, such experts, including human domain experts, cannot distinguish between direct and indirect effects given a pairwise prompt. Therefore, instead of the graph, we propose that causal order be used as a more stable output interface for utilizing expert knowledge. When querying a perfect expert with a pairwise prompt, we show that the inferred graph can have significant errors whereas the causal order is always correct. In practice, however, LLMs are imperfect experts and we find that pairwise prompts lead to multiple cycles and do not yield a valid order. Hence, we propose a prompting strategy that introduces an auxiliary variable for every variable pair and instructs the LLM to avoid cycles within this triplet. We show, both theoretically and empirically, that such a triplet prompt leads to fewer cycles than the pairwise prompt. Across multiple real-world graphs, the triplet prompt yields a more accurate order using both LLMs and human annotators as experts. By querying the expert with different auxiliary variables for the same variable pair, it also increases robustness---triplet method with much smaller models such as Phi-3 and Llama-3 8B outperforms a pairwise prompt with GPT-4. For practical usage, we show how the estimated causal order from the triplet method can be used to reduce error in downstream discovery and effect inference tasks.
[ "Causal Order", "Imperfect Experts", "Causal Inference", "LLMs" ]
Accept (Poster)
https://openreview.net/pdf?id=9juyeCqL0u
https://openreview.net/forum?id=9juyeCqL0u
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xDbkE6pz8P", "tneujwK3WQ", "r4l2fJzJSI", "ny6IthlAoy", "mmulikGFFm", "lNX4n7Emri", "lCW5QRlwhm", "kVrCPiYI8z", "jCAt7e3qro", "fA9irGr1je", "dFZ2Q0CAuV", "bOzPq1OUUX", "az9kCxW53K", "aNNJwvQT0w", "Wd9ddfzIHi", "W4tyA84D91", "U7nQ2hKP5i", "T6zPH5pdLL", "RBBqGr3e80", "On99DnmvlF", "NhH2Vh7Fhb", "DKctcHMXir", "DCChCwrdWv", "CKsA2zP0ZV", "9yqwr13niE", "8gg2VtyYzV", "8EcNPDqSHH", "80KzGEVtx7", "5A8BPul2iW", "3xNlW7GF1D", "2tDQl25CqC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732212150505, 1732676684561, 1732632608951, 1732263318540, 1732574750184, 1733036873059, 1733291694097, 1737524244289, 1732212910568, 1732614507824, 1730096991010, 1732555757520, 1729350394754, 1732614330934, 1732211070402, 1732212741196, 1732914931539, 1730643226570, 1732210925393, 1732574784566, 1732214287150, 1732211536541, 1732554609360, 1732591883039, 1730735793571, 1734794301789, 1732555064331, 1733074218970, 1730583188308, 1732214665454, 1732214175400 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_W4GZ" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_WUxn" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_W4GZ" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_JgEV" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_JgEV" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_5DRM" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_JgEV" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_5DRM" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_WUxn" ], [ "ICLR.cc/2025/Conference/Submission13202/Area_Chair_Nxik" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_HLpf" ], [ "ICLR.cc/2025/Conference/Submission13202/Reviewer_HLpf" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ], [ "ICLR.cc/2025/Conference/Submission13202/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer HLpf's comments\", \"comment\": \"We thank the reviewer for their insightful comments, we try our best to incorporate their suggestions and answer their queries. We will update our final version to reflect the same as well.\\n\\n**Response to weaknesses**\\n\\n**> Despite a sound experimental analysis, there a few details missing, i.e., how many repetitions were done in Tables 1 and 2**\\n\\n**Response**: Each LLM based experiment (Table 2) was run three times, and the average of the final score was reported. 
\nSimilarly, for the human-based graph construction (Table 1), for each dataset, three human annotators were asked to annotate the final graph and the aggregate of that was reported. Each annotator was randomly allotted a graph for both pairwise and triplet query strategies while ensuring no annotator got the same graph to query with both strategies. To get an estimate of the upper bound of human performance, for resolving tie-breaking conflicts in the triplet method, we used a ground-truth-based oracle (proxy for a human domain expert). We will make sure to add this detail in the final version of the paper.\\n\\n**Response to questions**\\n\\n**> Can you clarify the experimental setup in Table 4? .... search space inside the causal discovery method.**\\n\\n**Response:** We thank the reviewer for this point. We agree, LLM + < causal discovery > is a more appropriate title for our CaMML-based hybrid method, since we use the LLM triplet's output to reduce the discovery algorithm\\u2019s search space for getting to the most optimal causal graph. However, for the PC hybrid approach, we use the LLM triplet's output to identify the most optimal graph from the MEC obtained from PC, therefore < causal discovery > + LLM might be more suitable for this case (that said, we also propose an LLM+PC algorithm at the end of this response that may be interesting!). We will make updates in the paper to make this clearer.\\n\\n**> Can you describe in more details the causal effect component? .... are the counterfactuals estimated?**\\n\\n**Response:** Following Proposition 3.2, we use all the variables that precede the treatment variable in the estimated topological order as the adjustment set. This set qualifies as a valid backdoor adjustment set. Once the adjustment set is identified, the causal effect is estimated using the DoWhy library and linear regression as the estimator. 
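To make this procedure concrete, here is a small self-contained sketch (our illustration, with toy variable names `Z`, `T`, `Y` and a hypothetical helper `ate_from_order`, not the actual DoWhy-based pipeline): the adjustment set is read off the estimated causal order, and the effect is recovered as the treatment coefficient of an adjusted linear regression.

```python
import numpy as np

def ate_from_order(data, order, treatment, outcome):
    """Adjust for every variable that precedes the treatment in the
    estimated causal order (the valid backdoor superset of Prop. 3.2)
    and read the ATE off a linear regression coefficient."""
    adjustment = order[: order.index(treatment)]  # all order-predecessors of T
    n = len(data[treatment])
    X = np.column_stack([data[treatment]]
                        + [data[v] for v in adjustment]
                        + [np.ones(n)])           # intercept column
    coef, *_ = np.linalg.lstsq(X, data[outcome], rcond=None)
    return coef[0]                                # coefficient on the treatment

# Toy linear SCM consistent with the order Z -> T -> Y; true ATE = 2.0.
rng = np.random.default_rng(0)
Z = rng.normal(size=10_000)
T = 0.8 * Z + rng.normal(size=10_000)
Y = 2.0 * T + 1.5 * Z + rng.normal(size=10_000)
ate = ate_from_order({"Z": Z, "T": T, "Y": Y}, ["Z", "T", "Y"], "T", "Y")
print(round(ate, 1))  # recovers a value close to the true ATE of 2.0
```

In the reported experiments DoWhy performs the backdoor estimation; the sketch only mirrors the logic of using all order-predecessors of the treatment as the adjustment set.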
Appendix Table A14 presents an analysis comparing the causal effects estimated using this approach to those obtained by adjusting for the (minimal) backdoor set in the Asia dataset. The results show almost no differences in the estimated causal effects.\\n\\n**> Another ablation study that would be helpful is to ... improve this current causal discovery bottleneck?**\\n\\n**Response:** Focusing on scalability for larger graphs would be a great direction of research---thanks for suggesting this. Below we discuss how incorporating LLM triplet-based order in causal discovery algorithms can reduce the search space over graphs. \\n\\n**LLM Triplet + Score-based methods**: In the paper, we presented how causal order from the Triplet method can be used to provide a level order prior for the CaMML algorithm. In general, score-based methods sample different graphs, evaluate a score function for each graph, and iteratively select the graph with the best score value. Causal order from LLMs can be used to significantly decrease this search space. For instance, we can pre-compute the \"forbidden\" edges that violate the causal order (as a hard constraint); consequently, graphs with those edges are no longer explored by a score-based discovery algorithm. \\n\\nFor example, the GES score-based algorithm runs in two phases, forward and backward, and uses a score function to greedily search for the best graph. In the forward phase, it starts with an empty graph. Next, it adds the single edge, among all possible additions, that maximizes the score of the new graph. This process is repeated multiple times until no additional edges can be added. 
To reduce the search space, we propose our modification: Instead of scoring every possible edge addition, the algorithm can rule out those edges that violate the LLM Triplet order (this can be precomputed), thus helping reduce the search space at each iteration. 
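As a rough sketch of this modification (our illustration; the `score` callback and the single greedy step are simplified stand-ins for a full GES implementation): the forbidden set is precomputed once from the Triplet order, and each forward-phase step scores only order-consistent edge additions.

```python
from itertools import permutations

def forbidden_edges(order):
    """Directed edges a -> b that contradict the causal order
    (i.e., a appears after b in the order)."""
    pos = {v: i for i, v in enumerate(order)}
    return {(a, b) for a, b in permutations(order, 2) if pos[a] > pos[b]}

def greedy_forward_step(order, current_edges, score):
    """One GES-style forward step: only order-consistent edge
    additions are scored, shrinking the per-step search space."""
    banned = forbidden_edges(order)
    candidates = [(a, b) for a, b in permutations(order, 2)
                  if (a, b) not in current_edges and (a, b) not in banned]
    if not candidates:
        return None
    return max(candidates, key=lambda e: score(current_edges | {e}))

# Toy usage: a 3-variable order from the Triplet method and a dummy score.
order = ["A", "B", "C"]
banned = forbidden_edges(order)          # {(B,A), (C,A), (C,B)} are ruled out
step = greedy_forward_step(order, set(), score=lambda g: len(g))
print(len(banned), step)
```

Note that with a hard total-order constraint, only one direction per variable pair survives, so the candidate set per step drops from $|V|(|V|-1)$ directed edges to $|V|(|V|-1)/2$.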
\\n\\n**LLM Triplet + Constraint-based methods**: Constraint-based methods such as the PC algorithm depend on conditional independence tests. The PC algorithm starts with complete undirected graph and then for every pair of nodes connected by an edge, it removes the edge if the two nodes are independent or conditionally independent given other subsets of nodes. In the second stage, edges are oriented. The main computational burden in the PC algorithm is the skeleton building. However, as the skeleton is undirected, it is unclear how causal order-based constraints can help in this step. Therefore, in this work, once the skeleton is obtained, we propose using the Triplet method to help orient the edges (Sec. 3 (line 282)). \\n\\n_(continued...)_\"}", "{\"comment\": \"Thank you for your response, I have raised my score.\"}", "{\"title\": \"Raised score\", \"comment\": \"Dear authors, thanks for engaging and providing responses to my questions and concerns. Given that you've addressed them to a good extent, I went ahead and raised my score in support of your work.\"}", "{\"title\": \"Continuation of Response to Reviewer HLpf\", \"comment\": \"_(continued...)_\\n\\nNote that while we discussed a hard constraint based on the triplet method above, probabilistic variants of the order constraints which are weighted based on confidence of the Triplet method can also be developed. A key benefit of the triplet method is that it can provide a measure of LLM's uncertainty (for each variable pair $<A, B>$, fraction of triplets that predict $A$ causes $B$, $B$ causes $A$, or no relationship) that can be useful for weighting the LLM-based prior for discovery algorithms.\"}", "{\"comment\": \"Thanks for engaging on the rebuttal. 
We clarify the importance and use cases of causal order below.\\n \\n> W2 (regarding 3.2) - Could the authors answer my original question that \\\"Proposition 3.2 offers a sufficient condition for backdoor adjustment, but applying this condition broadly to include everything that satisfies it seems impractical.\\\"?\\n\\nIncluding variables appearing before treatment (in the causal order) is actually **a widespread practice in biomedical and social science empirical studies**. In these studies, such variables are called \\\"pre-treatment variables\\\" and a common practice is to condition on all of them. For this reason, we do not think that our proposal is impractical. The importance of Prop 3.2 is to show the utility of the causal order to identify such a commonly used adjustment set.\\n\\nFor example, refer to the Covariate selection chapter [1] by Sauer, Brookhart, Roy and Vanderweele in a User Guide (\\\"Developing a Protocol for Observational Comparative Effectiveness Research\\\"). In the section on \\\"Adjustment for all observed pre-treatment covariates\\\", **they mention the widely used propensity score adjustment and write, _\\\"The greatest importance is often placed on balancing all pretreatment covariates.\\\"_** They also add that while theoretically colliders can bias the result, _\\\"in practice, pretreatment colliders are likely rarer than ordinary confounding variables.\\\"._\\n\\nFurther, when unobserved confounding cannot be ruled out (as is the case with most observational studies), evidence is not clear on whether we should include all pre-treatment covariates or select a few, especially because the true graph may be unknown. _\\\"Strong arguments exist for erring on the side of overadjustment (adjusting for instruments and colliders) rather than failing to adjust for measured confounders (underadjustment). 
Nevertheless, adjustments for instrumental variables have been found to amplify bias in practice\\\"._ As the last sentence suggests, note that **we are not claiming that adjusting for all pre-treatment variables (variables before treatment in causal order) is always the correct approach, but rather showing that it can be practical in many situations.**\\n\\nTheoretically, of course, improvements to this causal order criterion are possible. Vanderweele and Shpitser (2011) [2] cite the popular practice of using \\\"all pre-treatment variables\\\" and propose the Disjunctive Cause criterion as an improvement. This criterion states that if a pre-treatment variable causes the treatment, outcome, or both, then it should be included in the adjustment set. Note that this criterion---effectively including all pre-treatment ancestors of treatment and/or outcome---is quite close to the causal order-based criterion in our paper. Except for possibly conditioning on a collider in cases where there are unobserved variables in the graph (see Fig. 1 from [2]), additional variables in the causal order adjustment superset will not have a significant effect on the estimate.\\n\\n\\n[1] Sauer, Brookhart, Roy, Vanderweele. Chapter 7: Covariate Selection https://www.ncbi.nlm.nih.gov/books/NBK126194/ in Developing a Protocol for Observational Comparative Effectiveness Research: A User's Guide (2013). \\n\\n[2] Vanderweele, Shpitser (2011). A new criterion for confounder selection. *Biometrics.* https://pmc.ncbi.nlm.nih.gov/articles/PMC3166439/\"}", "{\"title\": \"Follow up to Reviewer 5DRM's points\", \"comment\": \"Thanks again for your response and feedback on our work.\\n\\n\\n\\n**Weakness 1:** \\n \\nFor the results above, we chose the same datasets as in Table 2. 
\\n\\nWhile using GPT-4 with Triplet provides slightly improved performance compared to using GPT-3.5, the main result is that under both LLMs (GPT-3.5 and 4), Triplet method leads to substantially better results than the pairwise method (see expanded table below).\\n\\nAs noted in Table 2, another benefit of Triplet is that even when using it with smaller models such as Phi3 and Llama3, it leads to better graph metrics than the pairwise method with GPT-4. This is an important advantage of the Triplet method since using these smaller models can lead to substantial efficiency gains. \\n\\nFor completeness, below we provide Table 2 with the updated results including Triplet (GPT-4); with bolded best metrics per LLM model (GPT-3.5 and 4).\\n\\n\\n\\n| Dataset | Metric | A:Pairwise GPT-3.5-Turbo | A: Triplet GPT-3.5-Turbo | B: Pairwise GPT-4 | B: Triplet GPT-4 | Triplet Phi-3 | Triplet Llama3 |\\n|---------------|-----------|------------------------|-----------------------|----------------|---------------|---------------|----------------|\\n| **Asia** | D_top | - | **1** | 1 | **0** | 0 | 2 |\\n| | SHD | 21 | **14** | 18 | **10** | 13 | 17 |\\n| | Cycles | 1 | **0** | **0** | **0** | 0 | 0 |\\n| | IN/TN | **0/8** | **0/8** | **0/8** | **0/8** | 1/8 | 0/8 |\\n| **Alzheimers**| D_top | - | **4** | - | **4** | 7 | 5 |\\n| | SHD | 42 | **28** | 30 | **23** | 25 | 24 |\\n| | Cycles | 684 | **0** | 1 | **0** | 0 | 0 |\\n| | IN/TN | **0/11** | **0/11** | **0/11** | **0/11** | 0/11 | 0/11 |\\n| **Child** | D_top | - | **1** | - | **1** | 17 | 12 |\\n| | SHD | 177 | **28** | 148 | **24** | 69 | 129 |\\n| | Cycles | >10k | 0 | >10k | 0 | 0 | 0 |\\n| | IN/TN | **0/20** | 10/20 | **0/20** | 6/20 | 0/20 | 0/20 |\\n\\n\\n\\n\\n\\n**Weakness 2**:\\n\\nOverall, Table 4 and the results for N=100 (above) show that adding LLM Triplet results helps improve accuracy of discovery algorithms. 
As noted above, for N=100, the impact is significant for medium-sized graphs such as Child (incorporating the Triplet output reduces $D_{top}$ of PC from 6.33 to 2.33) and Neuropathic (incorporating the Triplet output reduces $D_{top}$ of CaMML from 12.5 to 5). That said, we agree that extending to additional complex graphs may provide more comprehensive understanding wrt. sample size and will be useful as future work.\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": \"We sincerely appreciate all the reviewers for their time, insightful comments and positive feedback. Reviewers appreciated the technical and writing clarity, novelty of our contribution on inferring causal order, and the comprehensive experiments including both LLMs and humans as imperfect experts.\\n\\nReviewers' suggestions have significantly contributed to improving our work, and we thank them for increasing their scores based on our response and additional experiments. The majority of the suggestions pertain to clarifications of our approach and ablations to provide a better understanding of the proposed Triplet method and its effectiveness (for example comparison with recent LLM based methods, or how causal order from our pipeline would serve as an effective prior for tasks like causal effect estimation). These ablations have further strengthened the impact of our approach.\\n\\n1. Additional results of our triplet method using a stronger model such as GPT-4 (in response to reviewer 5DRM) complete a comprehensive comparison across a spectrum of models---from smaller ones like Phi3 and Llama3 to larger ones like GPT-4---wrt. the baseline pairwise approach. Across all models, we observe a consistent trend that the triplet method outperforms existing pairwise-based methods for causal discovery. Moreover, as reported in Table 2 of the paper, triplet method using smaller models such as Phi3 and Llama3 obtains better accuracy than the pairwise method with GPT-4. \\n\\n2. 
We highlight how the triplet method yields computational efficiency while ensuring high accuracy for causal discovery, as answered to reviewer W4GZ. As requested, we have also implemented additional baselines from recent work (LLM-BFS, LLM-BFS+Stats) and find that the triplet method outperforms both methods.\\n \\n3. We have provided clarifications on the practical utility of causal order for downstream tasks such as graph discovery and effect inference; and more generally, on the utility of LLM-based methods for causal discovery. We thank the reviewers for these questions and will add this discussion to the final paper.\\n\\nWe again thank the reviewers for engaging with us and helping to clarify the contributions of the work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Questions by reviewer W4GZ\", \"comment\": \"**>1. The current method produces a causal order rather than a full causal graph .... would be valuable to assess the practicality and risks of this trade-off.**\\n**Response:**\\n\\nThis is a great question. Contrary to the intuition above, we find that the triplet order method obtains more accurate effect estimates than the pairwise method that outputs a graph. Below we show an analysis using the Survey dataset and five combinations of treatment and outcome. The backdoor set computed from the pairwise method's graph results in a higher error than the \\\"maximal\\\" backdoor set computed from the triplet method's order. \\n\\n\\n| Treatment, Target | $\\\\epsilon_{ATE}$ for Pairwise Prompt | $\\\\epsilon_{ATE}$ for Triplet Prompt |\\n| ----------------- | ------------------------------------ | ----------------------------------- |\\n| A, E | 0.07 | **0.00** |\\n| S, E | 0.03 | **0.00** |\\n| A, T | 0.02 | **0.00** |\\n| A, R | 0.02 | **0.00** |\\n| T, E | 0.04 | **0.00** |\\n\\n\\n\\nThe result can also be understood analytically. 
In Proposition 3.3, we show that causal order (the $D_{top}$ metric) can be a suitable measure for the downstream error in causal effect estimation. Recall from Table 3 of the main paper that, for the Survey causal graph, the causal order obtained by the pairwise prompt gets $D_{top}=3$ and the triplet prompt gets $D_{top}=0$. This difference in $D_{top}$ directly impacts the estimated causal effects as shown in the table. \\n\\nThat said, the triplet method produces a causal order only because of the limitation of expert knowledge extraction through prompts. In practice, the triplet method can be combined with a data-based discovery algorithm (such as PC or CaMML) to obtain a causal graph and then compute the optimal backdoor set for causal effect inference. We use the DoWhy library for estimating the causal effects and linear regression as the estimator, and the sample size is 1000.\\n\\n**>2. In the triplet method, the authors employ a four-step process to ..... differences between the pairwise approach under this four-step method and the current approach.**\\n\\n**Response:** If we consider only the prompts provided to an LLM, then yes, the pairwise approach can be considered a special case of the triplet approach (with auxiliary variables being the null set). \\nHowever, the inclusion of auxiliary variables is the key ingredient that makes the triplet _method_ substantially more accurate. Specifically, the auxiliary variable provides context for determining the relationship between a variable pair, and enables querying the LLM multiple times for the same variable pair, leading to a more robust answer. Also, selective use of GPT-4 for resolving clashes ensures further robustness for node pairs where the model might face difficulty. 
\nWith reference to the four steps of the Triplet method, the second and third steps are key to the accuracy of the Triplet method. 
In comparison, for the pairwise method, the third step is absent and the second step does not provide additional context.\\n\\nAs a result, even using smaller models such as Llama3 or Phi-3 with the Triplet method leads to better accuracy than the Pairwise method with GPT-4.\"}", "{\"title\": \"Requesting feedback on the rebuttal\", \"comment\": \"Thank you again for your helpful feedback. We tried our best to address your concerns and have added additional experiments and clarifications, especially wrt. the time complexity and real-life applicability of the triplet causal order method.\\n\\nPlease let us know if you have any further questions or comments. We would be happy to provide additional clarifications.\"}", "{\"summary\": \"This paper proposes a new technique for causal discovery algorithms by leveraging triplets. The authors argue that traditional causal discovery algorithms based on pairwise variable relationships cannot handle situations involving mediating variables and can lead to circular structures in causal graphs. Thus, they emphasize the importance of utilizing causal order, meaning modeling based on the topological order among variables. Causal order is effective for downstream tasks, and the triplet approach can efficiently identify causal order. The authors discuss the error of estimated causal order from both empirical and theoretical perspectives, especially in the presence of imperfect experts.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The approach introduces the use of causal order as the output for causal discovery algorithms employed by LLMs, with D_{top} topological divergence as a metric. This novel approach brings a fresh perspective and insight into the fields of LLMs and causal discovery.\", \"weaknesses\": \"1. Typo error: Lines 333-334 seem to have missed a closing parenthesis in O(V^3).\\n2. 
Complexity considerations: According to the paper, the triplet-based method has a complexity of O(V^3), whereas the traditional pairwise method has a complexity of O(V^2). For larger-scale graphs, the time complexity of these methods poses significant limitations for causal discovery.\\n3. The approach of using LLMs to simply determine causal relationships between variables is challenging to implement effectively in real-world applications. It requires the large model itself to possess a high level of expert knowledge, which is often difficult to achieve, as real causal inference typically demands deep domain-specific expertise and nuanced understanding beyond general-purpose language models.\", \"questions\": \"1. The current method produces a causal order rather than a full causal graph. While the authors suggest that, in the absence of confounders, this causal order can offer a superset of backdoor adjustment sets, the presence of \\u201cimperfect experts\\u201d in real-world applications may lead to additional error accumulation in the causal order. This raises uncertainty about whether the derived causal order is sufficiently reliable for downstream tasks, such as causal effect estimation, and how significantly these biases could impact the results. Experimental validation would be valuable to assess the practicality and risks of this trade-off.\\n2. In the triplet method, the authors employ a four-step process to determine causal order. Unlike the pairwise approach, each iteration uses a third auxiliary variable \\\\(C\\\\) to determine the causal edge direction between \\\\(A\\\\) and \\\\(B\\\\), followed by aggregation across all variables. My understanding is that the pairwise approach is simply a special case of this method where the number of auxiliary variables equals zero. 
The authors could further clarify the distinctions and differences between the pairwise approach under this four-step method and the current approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Requesting feedback on the rebuttal\", \"comment\": \"Thank you again for your helpful feedback. We tried our best to address your concerns and have added additional experiments with GPT-4 and with data-scarce settings.\\n\\nPlease let us know if you have any further questions or comments. We would be happy to provide additional clarifications.\"}", "{\"summary\": \"This paper explores the use of large language models (LLMs) for causal inference tasks, particularly causal discovery. The authors present two key arguments: (1) For both LLMs (termed as imperfect experts) or perfect experts, it is more effective to identify the topological order of causal variables rather than directly attempting to discover the full causal graph. They demonstrate that this approach yields more robust results and remains valuable for downstream tasks such as causal discovery or treatment effect estimation. (2) Since LLMs are imperfect experts, relying on existing pairwise prompts to determine causal order may still result in cycles. To address this, the authors propose a triplet-based prompt design. They empirically validate their claims on several datasets.\\n\\n**Claim**: I have research experience in causality. However, I have limited experience in LLMs or LLMs for causality. Hence, I might miss something especially when it comes to evaluation of novelty or comparison with existing works.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Regarding the first key contribution, I find the proposal of identifying causal orders rather than full causal graphs to be both useful and insightful.\\n2. Overall, the paper is easy to follow. 
While there are some typos, the writing remains generally clear.\", \"weaknesses\": \"I will present my main concerns here. Minor concerns or questions can be found in the Question section.\\n\\n**Significance of contribution**\", \"i_have_several_concerns_regarding_the_methodology_section\": \"1. 3.1 - While I agree with the intuition that identifying causal order is easier and leads to fewer errors, I'm not sure the result in this subsection is particularly interesting by itself. It seems almost obvious, as knowing the correct causal order is a prerequisite for identifying the causal graph. What matters more is whether the reduced chance of errors still preserves the usefulness of the method.\\n2. 3.2 - This section should have provided evidence addressing the question above. However, I find it difficult to assess the significance of the method presented. Proposition 3.2 offers a sufficient condition for backdoor adjustment, but applying this condition broadly to include everything that satisfies it seems impractical.\\n3. 4.1 - Should these be considered as contributions of the paper?\\n4. 4.2 - The authors try to justify why triplet prompt is better than pairwise prompt here. One thing unclear to me is that, as more variables are provided in the context (as the authors also mentioned), there is a higher chance of LLMs making mistakes. In practice, does that mean an $\\epsilon$-expert LLM would become an $\\epsilon'$-expert where $\\epsilon'>\\epsilon$? If so, would this become a trade-off and how should we interpret this tradeoff?\\n5. 4.2 - Another question regarding this theorem: It seems that the major problem the authors try to resolve is to avoid cycles. However, Proposition 4.1 focuses on the error of edge prediction given the assumption of acyclicity. 
Hence, I am not sure if this theorem provides insight on how to resolve the key problem of this paper.\\n\\n**Experiment**\\n\\nMy main concern is the lack of comparison with existing LLM-based methods. For example, while it's valuable to verify that providing causal order helps with downstream causal tasks, do we know if integrating causal order into causal discovery outperforms directly using LLMs to identify the causal graph or providing other types of inputs to a causal discovery algorithm? Could the authors justify their choice not to compare against existing baselines, such as those mentioned in Section 2?\", \"questions\": \"1. Experiment - Missing reference to Table 2 in the main paper.\\n2. Line 114 - Could the authors provide some justification on why the redundancy leads to a more reliable order?\\n3. Line 52 - Would using triplet prompt lead to a more efficient method in this sense?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Requesting feedback on the rebuttal\", \"comment\": \"Thank you again for your helpful feedback. We tried our best to address your questions and have included additional clarifications. We also discuss how the LLM-based Triplet method can help reduce the search space for discovery algorithms.\\n\\nPlease let us know if you have any questions or comments. We would be happy to engage further.\"}", "{\"title\": \"Continuation of the reply to reviewer WUxn's comments\", \"comment\": \"**> 2) Despite the results in Figure 3 on synthetic data, it would be useful in the broader empirical studies to compare the causal ordering queries to the flawed policy of orienting edges based on responses to \"does A cause B\"-style questions.**\\n\\n**Response**: Thanks for this suggestion. 
In addition to the _Pairwise (Edge)_ method, we developed an order variant of the pairwise method: rather than using a pairwise query's answer to infer an edge, we infer a partial/relative order for each pair of nodes and then aggregate all pairwise orders to get the final graph order. However, just as the _Pairwise (Edge)_ method produced cycles for many graph datasets (such as Asia and Alzheimers), when we build the final aggregated causal order for the full graph, we find pairwise responses that violate causal-order (acyclicity) consistency on the same datasets. As a result, we get cycles in the obtained causal order and could not proceed further to compare _Pairwise (Order)_ to the Triplet-based method. We conclude that, similar to edge orientation at the pairwise level, inferring order from pairwise queries is also susceptible to erroneous, cyclic structures.\"}", "{\"title\": \"Response to reviewer W4GZ's comments\", \"comment\": \"We thank the reviewer for their insightful comments; we have tried our best to incorporate their suggestions and answer their queries.\\n\\n**Response to weaknesses**\\n\\n**>1. Typo error: Lines 333-334 seem to have missed a closing parenthesis in O(V^3).**\\n\\n**Response**: Thanks for pointing it out. We will fix it.\\n\\n**>2. Complexity considerations: According to the paper, the triplet-based method ... methods poses significant limitations for causal discovery.**\\n\\n**Response**: For larger graphs, we propose an $O(kV^2)$ variant of the triplet method below. Before that, however, we want to highlight the significant **increase in both accuracy and efficiency** that the triplet method provides compared to the pairwise method. \\n\\nWith the triplet method, our main aim was to develop a method that significantly improves the accuracy of LLM-based causal discovery. 
A second aim was to build a method robust enough that even smaller LMs can be used while still providing better accuracy than the pairwise method. As a result, the triplet method leads to higher accuracy and (cost and time) efficiency compared to the pairwise method, even though its theoretical complexity is $O(V^3)$. For a large graph with many nodes, using the triplet method with GPT-3.5 can obtain significantly higher accuracy than the pairwise method using GPT-4, and costs significantly less: based on OpenAI's pricing, querying a 100-node graph with the pairwise method (GPT-4) would cost `$574`, while the triplet method costs `$55` (see Appendix E, line 1371). Further, for larger graphs, we can even run the Triplet method with much smaller models such as Phi-3 (3.8B params) and Llama3 8B, leading to further efficiency gains in both wall-clock time and cost (while obtaining better accuracy than pairwise with GPT-4; see Table 2). \\n\\nThat said, we believe modified triplet approaches could be adopted for improved scalability. Rather than incorporating votes from all triplet subgraphs while deciding on the edge direction between a pair of nodes, we can sample a fixed number of triplets per variable pair. So if the number of triplets used per pair is *k*, the overall time complexity becomes $O(kV^2)$, where *k* can be constant. Identifying optimal ways of choosing the subset can be a future extension of this work.\\n\\n**> 3. The approach of using LLMs to simply determine ... understanding beyond general-purpose language models.**\\n\\n**Response:** We agree that using LLMs for causal discovery is challenging in real-world scenarios. 
However, many real-world graph discovery problems involve a combination of inferring novel and widely known relationships, and we believe that LLMs can save significant effort in extracting the known relationships.\\n\\nTherefore, the focus of this work is to develop a robust method to extract causal relationships from LLMs, at least those that are known to a general-purpose language model. As can be seen from our experiments on highly domain-specific datasets such as Neuropathic and Covid-19, the triplet method substantially improves the accuracy of obtaining such causal relationships.\\n\\nThat said, for complex real-world scenarios, a combination of LLMs and data-based algorithms is better suited. Therefore, we proposed two variants that combine LLMs with data-based algorithms such as PC and CaMML (for example, LLMs may be used to provide a prior on the known relationships, which may help algorithms learn the remaining novel relationships). In a similar direction, a key benefit of the triplet prompt is that it can also provide a measure of the LLM's uncertainty (for each variable pair $<A, B>$, the fraction of triplets that predict A causes B, B causes A, or no relationship), which can be useful for weighting the LLM-based prior for data-based algorithms.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the explanation.\\n\\nOverall, most of my concerns in the review have been addressed in the rebuttal and following discussion. While I am still kind of doubtful about the significance of the theoretical contribution of this paper, I think it could provide some interesting perspectives to the community of LLMs and causality.\\n\\nI have adjusted my score accordingly.\"}", "{\"summary\": \"The authors aimed to find a better interface for using domain knowledge in causal discovery, including LLMs but not restricted to them. For LLMs, this is done via a pairwise prompting strategy. 
The authors found that expert knowledge often falls short in distinguishing direct from indirect causal relations between variable pairs, which correspondingly introduces many cycles into causal graphs, while the implied causal order is relatively precise. Given this finding, the authors suggest using expert knowledge of causal order instead of causal relations. While integrating the strategy of using causal order with LLM prompting, the authors propose introducing an auxiliary variable to improve the conventional pairwise prompting strategy by avoiding cycles within each triplet. Experimental results support that the proposed prompting method increases robustness and performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-written. Motivation, method, and experiments are logically straightforward.\", \"The attempts to utilize LLMs for causal discovery have identified certain problems, such as the difficulty in distinguishing between direct and indirect causal relationships, which consequently lead to cycles in the causal graph. The authors convincingly argue that these issues are inherent to the pairwise prompting approach.\", \"The authors propose using causal order to leverage domain expert knowledge in causal discovery without encountering the issues associated with pairwise prompting. Additionally, the paper presents a novel prompting strategy to support this approach.\", \"When the proposed method was utilized, performance improvement was consistently observed across experiments. Additionally, the experimental design accommodates node size scaling from small to large datasets, allowing for a comprehensive understanding of how the method is affected by node size.\"], \"weaknesses\": [\"Overall, I found this work very interesting, though a few questions remain unanswered. 
Addressing these issues could potentially lead to a higher score.\", \"It is hard to determine whether triplet prompting is effective for high-performance LLMs (such as GPT-4, Claude3-opus). It appears that the prompting strategy and LLM can be applied orthogonally, so it is odd that the corresponding results are omitted. Could you add results for Triplet prompting + high-performance LLMs in Table 2, to clarify this matter?\", \"In the experimental design related to the downstream causal discovery algorithm, it is difficult to determine how the proposed method's advantages change with varying observations. Could you add corresponding experiments with varying sample sizes to clarify this matter?\"], \"questions\": [\"While domain expert knowledge is independent of the number of observations, it significantly affects the causal discovery algorithm. Therefore, it raises curiosity about how much performance improvement the proposed method can achieve in situations of data scarcity. Could you conduct an experiment with a very small dataset as a showcase for a data-scarcity regime? It would be very interesting.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer WUxn's comments\", \"comment\": \"We thank the reviewer for their insightful comments; we have tried our best to incorporate their suggestions and answer their queries.\\n\\n**Reply to weaknesses**:\\n\\n**1. \"In the broader context of causal discovery ..... causal discovery problems.\"**\\n\\n**Response**: We agree that in truly novel situations, the $\\epsilon$ error of LLMs in causal discovery can be high. As in previous work on LLM-based discovery, we instead focus our attention on known relationships and LLMs' capabilities in extracting them. 
This can be practically useful; in real-world problems, building a full causal graph often involves a combination of novel and previously known relationships, and we hope that LLMs can save significant effort in extracting the known relationships. \\n\\nFor the genuinely challenging causal discovery problems, we think a combination of LLM and data-based algorithms is more suited. That is why we proposed two variants that combine LLMs with data-based algorithms such as PC and CaMML (for example, LLMs may be used for providing a prior on the known relationships, which may help algorithms to learn the remaining novel relationships). In a similar direction, a key benefit of the triplet prompt is that it can also provide a measure of LLM's uncertainty (for each variable pair $<A, B>$, fraction of triplets that predict A causes B, B causes A, or no relationship) that can be useful for weighting the LLM-based prior for data-based algorithms.\\n\\n**2. \\\"The premise in this paper is that querying experts with questions ......this is interpreted only as a causal ordering.\\\"**\\n\\n**Response**: That's a great point. Existing LLM-based methods (Kiciman et al. 2023, Long et al. 2022, and others) used pairwise queries to infer an edge (which motivated our story), but we agree that it is not a necessary feature of a pairwise method. We will refine our introduction and Section 3 to reflect this.\\n\\n**3. \\\"There is a bit of extra context that I would have liked in the empirical studies: \\n> 1) can you contextualize the datasets and graphs that are studied further and discuss whether they are semi-synthetic, purely realistic, etc.?\\\"**\\n\\n**Response**: All graphs are real-world graphs constructed by human experts. The data is generated using different methods. 
We summarize the datasets below (also see Table A15).\\n\\n| Dataset | Graph | Data for Variables |\\n|--------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------|\\n| BN Learn Datasets (Asia, Cancer, Earthquake, Survey, Child) | Real-world graphs from scientific studies | Synthetic data generation based on the bnlearn library |\\n| Neuropathic Pain | Real-world graph constructed with consensus from medical experts (Tu et al. 2019). Includes domain-specific variables such as Right L1 Radiculopathy, Toracal Dysfunction, DLS L5-S1, etc. (see Fig. A8) | Synthetic data generation based on Tu et al. (2019) |\\n| Alzheimers Dataset | Real-world graph constructed with consensus from medical experts (Abadullah et al. 2023). Constructed in 2023, after the training cutoff date of the GPT-3.5 and GPT-4 models used. | No data is available. |\\n| Covid-19 Respiratory Dataset | Real-world graph constructed by experts to understand the effect of Covid-19 on the respiratory system (Mascaro et al. 2022). Constructed in 2022, after the training cutoff date of the GPT-3.5 and GPT-4 models used. | No data is available. |\"}", "{\"comment\": \"> 1) The authors comment that \"The key result of this section is that even with a Perfect Expert (e.g., human domain expert), the inferred graph using pairwise queries can be incorrect (Prop 3.2).\" I don't quite get this point for the following reasons: (1) The authors dedicated more than 2 pages (Section 3) with a great emphasis on the importance and use case of causal orders, whose significance is what I was questioning about (\\n\\nApologies, we meant to say Prop 3.1 here. Let us reiterate the two important use cases of causal order. \\n1) **Identifying a suitable adjustment set for effect inference**. 
We already discussed it above. \\n2) **Providing a prior or constraint to causal discovery methods**. As discussed in Section 3.2, causal order provides an accurate interface between domain experts and discovery algorithms. (Obtaining a causal graph from the domain expert is not suitable, as Prop 3.1 shows that even a perfect expert can output the wrong graph.) Moreover, causal order is not just a simpler structure; it also helps improve the accuracy of discovery methods: we show how causal order provides non-trivial improvements to existing discovery algorithms, both algorithmically (Sec 3.2) and empirically (Section 5). \\n\\n> W4 (regarding 4.2) - I was mainly asking about, after involving a third variable, whether $\\epsilon$ would change accordingly. If so, I wonder if that still leads to a fair comparison between pairwise and triplet prompt.\\n\\nAh, thanks for clarifying. At least in our experiments, we did not see an increase in the error rate ($\\epsilon$) as we move from the pairwise to the triplet prompt. Perhaps the effects of higher error due to longer context lengths may kick in at larger context sizes than what the triplet prompt provides. Also, note that Prop 4.1 shows theoretically that the _effective_ pairwise error rate for the triplet prompt is lower than that of a pairwise prompt (assuming an Expert that, given a triplet of nodes, predicts the causal relationship between all pairs of nodes sequentially).\"}", "{\"title\": \"Continuation of response to reviewer JgEV\", \"comment\": \"**>4.2 - Another question regarding this theorem: It seems that ... to resolve the key problem of this paper.**\\n\\n**Response:**\\nThe key difference is between local acyclicity for a triplet (assumption) and global acyclicity for the graph (empirical result). Based on our experiments with LLMs such as GPT-3.5 and GPT-4, we find that they can obey the acyclicity constraint over 3 variables when specifically instructed to do so. 
Therefore, we assume that the imperfect expert's output over 3 variables will not contain a cycle. Based on this, Prop 4.1 shows that the error in predicting the causal relationship between pairs of nodes is lower for the triplet prompt compared to the pairwise prompt. (For context, in the triplet method, all nodes are divided into all possible groups of three, and the expert (such as an LLM) is specifically prompted to create a Directed Acyclic Graph that models the causal relationships between the nodes of each subgroup; see Table A24 for the prompt template used for triplet subgraphs.) \\n\\nThis result, however, does not ensure that the global graph has no cycles. This is what the triplet method achieves, through aggregation over triplets and further steps. Overall, since the triplet method ensures a higher-quality prediction of the edge direction, we believe this leads empirically to fewer cycle formations compared to pairwise.\\n\\n>My main concern is the lack of comparison with existing LLM-based methods. ... against existing baselines, such as those mentioned in Section 2?\\n\\n**Response**:\\n\\nWe thank the reviewer for this point. Our work focused on tackling the shortcomings of the standard pairwise LLM-based approaches for causal discovery. We compare our method against the pairwise prompting strategy proposed by Kiciman et al. (2022). For a fair comparison, we further propose stronger variants of the pairwise method based on additional context and advances in LLM prompting (All Directed Edges, One Hop Iteration, and Chain of Thought; see Table A5).\\n\\n\\nAs an additional baseline reflecting the state of the art in LLM-based methods, we have implemented two more methods from Jiralerspong et al., 2024 (\"Efficient Causal Graph Discovery using Large Language Models\") on 4 complex graphs used in our analysis: Covid, Alzheimers, Child and Asia. We evaluate the methods over two LLMs: GPT-3.5-Turbo and GPT-4. 
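(For clarity on the $D_{top}$ values reported in the comparison tables: a minimal sketch of how the metric can be computed, assuming the standard topological-divergence definition — the number of true edges that the estimated causal order places in the wrong direction, i.e., edges no graph consistent with that order could recover.)

```python
def d_top(order, true_edges):
    """Topological divergence of an estimated causal order w.r.t. the true
    graph: counts true edges (u, v) where the order puts v before u, so no
    DAG consistent with `order` could contain them."""
    pos = {node: i for i, node in enumerate(order)}
    return sum(1 for u, v in true_edges if pos[v] < pos[u])

# Toy example: true graph is the chain A -> B -> C plus the edge A -> C.
true_edges = [("A", "B"), ("B", "C"), ("A", "C")]
print(d_top(["A", "B", "C"], true_edges))  # 0: order is consistent
print(d_top(["C", "A", "B"], true_edges))  # 2: B->C and A->C are violated
```

A $D_{top}$ of 0 thus means every true edge is recoverable from the order, which is why the metric tracks downstream effect-estimation error.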
\nTheir paper presents an efficient breadth-first (BFS) approach to causal discovery using LLMs. The pipeline operates level-wise, starting by querying the LLM to identify all nodes independent of others in the graph. From these independent nodes, the LLM is queried to determine dependent nodes, adding edges accordingly. The dependent nodes are then added to a queue, and the process is repeated iteratively for each node in the queue until it is empty. For the BFS+Stats method, the Pearson correlation coefficient from the independent nodes to all the other nodes is added to the prompt as additional context for graph discovery. \\n\\nAs the results below show, the BFS and BFS+Stats methods obtain lower accuracy than the Triplet method. In particular, except for BFS with GPT-4, all configurations lead to cycles in the Child and/or Covid datasets. SHD and Dtop for the BFS and BFS+Stats methods (especially with GPT-3.5) are also higher than for the Triplet method.\\n\\n\\nBFS (GPT-3.5-turbo)\\n\\n| Dataset | Dtop | SHD | IN | Cycles |\\n| -------- | -------- | -------- | -------- | -------- |\\n| Asia | 2 | 7 | 0 | 0 |\\n| Alzheimers | 5 | 17 | 2 | 0 |\\n| Child | - | 40 | 0 | 6 |\\n| Covid | - | 28 | 0 | 4 |\\n\\nBFS (GPT-4)\\n\\n| Dataset | Dtop | SHD | IN | Cycles |\\n| -------- | -------- | -------- | -------- | -------- |\\n| Asia | 0 | 1 | 0 | 0 |\\n| Alzheimers | 0 | 34 | 0 | 0 |\\n| Child | 11 | 30 | 0 | 0 |\\n| Covid | 5 | 20 | 0 | 0 |\\n\\nBFS + Statistics (GPT-3.5-Turbo)\\n\\n| Dataset | Dtop | SHD | IN | Cycles |\\n| -------- | -------- | -------- | -------- | -------- |\\n| Asia | - | 23 | 0 | 33 |\\n| Alzheimers | - | 27 | 1 | 17 |\\n| Child | - | 52 | 2 | 21 |\\n| Covid | - | 30 | 0 | 15 |\\n\\nBFS + Statistics (GPT-4)\\n\\n| Dataset | Dtop | SHD | IN | Cycles |\\n| -------- | -------- | -------- | -------- | -------- |\\n| Asia | 0 | 3 | 0 | 0 |\\n| Alzheimers | - | 14 | 0 | 1 |\\n| Child | 2 | 27 | 4 | 0 |\\n| Covid | - | 32 | 1 | 10 |\\n\\nFor reference, these are the results of 
triplet with GPT-3.5-turbo\\n\\n| Dataset | Dtop | SHD | IN | Cycles |\\n| -------- | -------- | -------- | -------- | -------- |\\n| Asia | 1 | 14 | 0 | 0 |\\n| Alzheimers | 4 | 28 | 0 | 0 |\\n| Child | 1 | 28 | 10 | 0 |\\n| Covid | 0 | 30 | 0 | 0 |\"}", "{\"title\": \"Response to Reviewer 5DRM's comments\", \"comment\": \"We thank the reviewer for their insightful comments, we have tried our best to incorporate their suggestions and answer their queries.\\n\\n**Reply to weaknesses:**\\n\\n**> 1. It is hard to determine whether triplet prompting is effective for high-performance LLMs ..... LLMs in Table 2, to clarify this matter?**\\n\\n**Response:**\\nWe thank the reviewer for this point. We did not run Triplet with GPT-4 due to efficiency reasons. We have now run the experiments with GPT-4 as expert for orienting subgraphs and then re-using GPT-4 for resolving clashes during merging phase. Following are the results for graph discovery on Asia, Alzheimers and Child graphs. Upgrading to a superior model (GPT-4) leads to better results for all three graphs. \\n\\n\\n\\n| Dataset | Metric | Triplet GPT-4 | Triplet GPT-3.5-Turbo |\\n|---------------|------------|----------------------|--------------------|\\n| **Asia** | $D_{top}$ | 0 | 1 |\\n| | SHD | 10 | 14 |\\n| | Cycles | 0 | 0 |\\n| | IN/TN | 0/8 | 0/8 |\\n| **Alzheimers**| $D_{top}$ | 4 | 4 |\\n| | SHD | 23 | 28 |\\n| | Cycles | 0 | 0 |\\n| | IN/TN | 0/11 | 0/11 |\\n| **Child** | $D_{top}$ | 1 | 1 |\\n| | SHD | 24 | 28 |\\n| | Cycles | 0 | 0 |\\n| | IN/TN | 6/20 | 10/20 |\\n\\n**> 2. In the experimental design related to the downstream ..... experiment with a very small dataset as a showcase for a data scarcity regime? It would be very interesting.**\\n\\n**Response:**\\nWe thank the reviewer for their insightful question. 
As shown in Table 4 (main paper) and Table A3 (appendix), we analyzed the impact of observational dataset size on performance, ranging from data-scarce settings (250 instances) to data-rich ones (up to 10,000 instances), covering sample sizes N=250, 500, 5000, and 10000. Our results reveal that incorporating causal order using the triplet method consistently improves causal discovery, regardless of dataset size. These results hold across graphs of varying sizes (4–5 nodes to 24+ nodes), including domain-specific contexts like healthcare.\\n\\nBased on your suggestion, we also ran an experiment for an even more data-scarce setting, N=100. We observe that incorporating the triplet prior has a stronger positive impact on graph discovery performance in data-scarce settings. In particular, for the medium-size graphs at N=100, $D_{top}$ for PC reduces from 6.3 to 2.3 for Child, and $D_{top}$ for CaMML reduces from 12.5 to 5, a significant improvement.\\n\\nN=100\\n\\n| Dataset | PC | SCORE | ICA Lingam | Direct Lingam | NOTEARS | CaMML | PC+LLM | CaMML+LLM | PC+Human | CaMML+Human |\\n|----------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|-----------|\\n| Earthquake | $0.5\\pm 0.5$ | $4.00\\pm 0.00$ | $0.66\\pm0.94$ | $0.00\\pm0.00$ | $1.33\\pm0.47$ | $2.00\\pm0.00$ | $0.00\\pm 0.00$ | $0.00\\pm 0.00$ | $0.00\\pm 0.00$ | $1.00\\pm0.00$ |\\n| Cancer | $0.00\\pm 0.00$ | $2.66 \\pm 0.47$ | $1.33\\pm0.47$ | $2.00\\pm0.00$ | $1.66\\pm0.47$ | $3.00\\pm0.00$ | $0.00\\pm 0.00$ | $0.33\\pm0.00$ | $0.0\\pm 0.0$ | $0.00\\pm0.00$ |\\n| Asia | $1.75\\pm1.25$ | $6.33\\pm0.47$ | $3.00\\pm0.81$ | $0.66\\pm0.94$ | $3.33\\pm0.47$ | $2.33 \\pm 0.14$ | $0.33\\pm0.57$ | $0.97\\pm0.62$ | N/A | N/A |\\n| Asia-M | $1.00\\pm0.00$ | $6.00\\pm0.00$ | $0.66\\pm0.47$ | $2.33\\pm1.69$ | $3.00\\pm0.81$ | $1.55 \\pm 0.00$ | $0.00\\pm0.00$ | $1.00\\pm0.00$ | $1.00\\pm 0.00$ | $2.00\\pm0.00$ |\\n| Child | $6.33 \\pm 0.86$ | $13.33 \\pm 1.24$ | $13.66\\pm1.24$ | $15.33\\pm1.24$ | $14.33\\pm0.47$ | $3.00 \\pm 0.00$ | $2.33 \\pm 1.15$ | $4.00\\pm0.00$ | N/A | N/A |\\n| Neuropathic | $1.00\\pm0.00$ | $6.00\\pm0.00$ | $13.33\\pm2.81$ | $13.33\\pm1.54$ | $9.66\\pm0.00$ | $12.50\\pm0.00$ | $1.00\\pm0.00$ | $5.00\\pm0.00$ | N/A | N/A |\\n\\nWe plan to extend the results in the paper to show how LLM priors help in data-scarce settings for causal discovery.\"}", "{\"title\": \"Follow-up\", \"comment\": \"Thank you for the detailed rebuttals. However, a few of my questions in the section **Significance of contribution** have not been fully addressed. Could you please provide further clarification?\\n1. W1 (regarding 3.1) - The authors comment that \"The key result of this section is that even with a Perfect Expert (e.g., human domain expert), the inferred graph using pairwise queries can be incorrect (Prop 3.2).\" I don't quite get this point for the following reasons: (1) The authors dedicated more than 2 pages (Section 3) with a great emphasis on the importance and use case of causal orders, whose significance is what I was questioning about; (2) it is unclear to me how Prop 3.2 explains why the pairwise query is not good enough; (3) Prop 3.2 is not a novel result. \\n2. W2 (regarding 3.2) - Could the authors answer my original question that \"Proposition 3.2 offers a sufficient condition for backdoor adjustment, but applying this condition broadly to include everything that satisfies it seems impractical.\"?\\n3. W4 (regarding 4.2) - I was mainly asking about, after involving a third variable, whether $\\epsilon$ would change accordingly. If so, I wonder if that still leads to a fair comparison between pairwise and triplet prompt. 
\\n\\nOverall I like the idea of the paper but I am kind of doubtful about the significance and usefulness of the results in Section 3 and 4.\"}", "{\"comment\": \"Thanks for the author's response. I appreciate the efforts of the authors' for the additional experiments.\\n\\n- Regarding weakness 1, my concern is partially resolved, for those experimental results seem not to be completely conclusive. The chosen pool of graphs for the experiment is not representative. In addition, metrics other than SHD (even the most highlighted metric, $D_{top}$) are almost identical between GPT-4 and GPT-3.5, so they seem not to provide enough information. However, I understand that the difference in general performance between GPT-4 and GPT -3.5 is not necessarily proven through single experiments. I expect future works might analyze this point more appropreiately.\\n- Regarding weakness 2, it seems that the authors tried to directly address my concern, but the chosen benchmark datasets are not appropriate for the analysis; there is no transparent trend along the change of the number of observations (N=100, 250, 10000).\\n\\nOverall, I think the benchmark datasets are not optimal for analyzing the proposed method. \\nThe benchmarks are not challenging enough for the proposed method, so in many cases, it results in performance near the upper bound, 0.00. This does not harm the novelty of the proposed method itself but inhibits the comprehensive understanding and highlighting of the characteristic behavior of the LLM integrated framework.\\nTaking these points together, I choose to maintain the current score.\"}", "{\"summary\": \"This paper considers optimal ways of querying imperfect experts (e.g., LLMs) when the aim is to discover a causal DAG over a set of variables. The key proposed idea is to query imperfect experts about the causal ordering over variables instead of graphical relationships like edges. 
The paper shows that certain ways of orienting edges based on queries to experts can lead to errors in the overall DAG, while the causal ordering from a perfect expert will never be incorrect. The paper proposes ways of integrating causal ordering into known causal discovery algorithms, and then focuses on extracting orderings from imperfect experts. To minimize errors by imperfect experts, the paper proposes querying for the order over triplets of variables while enforcing acyclicity, proving that this makes fewer errors than querying about pairs of variables (under some assumptions). The work then studies this querying strategy empirically using a variety of hand-curated known causal DAGs. They study both human and LLM imperfect experts, and find evidence that the triplet querying strategy improves causal discovery and effect estimation performance over both the pairwise querying strategy and using no experts at all.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The technical as well as written clarity of the paper is excellent. Technically, the notation is clear, all preliminaries and results are well-defined and well-explained. The paper is also written in a way that is easy to read and follow.\", \"The technical result about the optimality of the triplet querying strategy over the pairwise querying strategy is novel, to the best of my knowledge, and could have an impact on causal discovery algorithms more broadly.\", \"The empirical studies are comprehensive, and the initiative to contrast both humans and LLM imperfect experts is impressive since studies with humans are time-consuming and costly.\"], \"weaknesses\": [\"In the broader context of causal discovery, I question the idea that LLMs are $\\\\epsilon$-imperfect experts. 
Existing results including those in this work consistently consider causal discovery problems that involve standard medical knowledge, or general knowledge about the world that's fairly well-established by now. However, causal discovery as a field ought to be about discovering new knowledge, in domains where we have only very faint hypotheses about how variables relate, e.g., consider abstract variables that represent aspects of human behavior like \\\"trust\\\" or \\\"aggression.\\\" In these settings, we should expect $\\\\epsilon$ to be so large as to render LLM experts unreliable. I'm curious to see the authors better justify and contextualize the role of LLMs in genuinely challenging causal discovery problems.\", \"The premise in this paper is that querying experts with questions of the form \\\"does A cause B\\\" and directing the edge A->B if the expert says yes leads to incorrect graphs. However, the query \\\"does A cause B\\\" isn't by itself inherently flawed -- it's already a question about causal ordering -- the policy to orient edges based on ancestral causal relations is what's flawed. Nevertheless, the introduction and parts of section 3 (e.g., definition 3.1) seem to critique the query itself, when really, the issue is that responses to this query should not be used to directly orient edges (e.g., the perfect expert in 3.1 is defined based on a flawed policy). Therefore, the argumentation strategy in the paper feels like setting up a strawman argument. Note, though, that this is just a critique about the storytelling; the technical results are valid and make sense regardless. 
One could pivot the story slightly to argue that the flaw isn't with the \\\"does A cause B\\\" style of query, but in ensuring that this is interpreted only as a causal ordering.\", \"There is a bit of extra context that I would have liked in the empirical studies: 1) can you contextualize the datasets and graphs that are studied further and discuss whether they are semi-synthetic, purely realistic, etc.? 2) Despite the results in Figure 3 on synthetic data, it would be useful in the broader empirical studies to compare the causal ordering queries to the flawed policy of orienting edges based on responses to \\\"does A cause B\\\"-style questions. (I apologize if this comparison is indeed there and I missed it -- if that ended up being the case, please point me to the correct figure or table in the empirical studies.)\"], \"questions\": [\"Why do you think the premise of LLMs as $\\\\epsilon$-imperfect experts makes sense more generally in causal discovery, given that $\\\\epsilon$ *should* be large if we're solving a truly interesting causal discovery problem?\", \"Can you provide the details about the empirical studies that I mention above?\"], \"edit\": \"The authors satisfactorily addressed my questions and concerns in their response; I'm happy to thus raise my score and support their work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper introduces a strategy for querying imperfect experts (such as LLMs) for causal discovery, focusing on causal ordering over graphical relationships.\", \"strengths\": [\"The paper proposes a triplet querying strategy for causal discovery.\", \"The inclusion of both human and LLM experts in experiments provides valuable insights, especially given the cost and time involved in human studies.\"], \"weaknesses\": [\"The paper lacks justification of the use of LLMs as imperfect experts, especially in challenging causal 
discovery tasks where LLMs might be unreliable.\", \"The critique of the \\\"does A cause B\\\" question feels misplaced, as the issue lies more in how edge orientation is determined, not the query itself.\", \"Some key details in the empirical studies, such as the dataset context and variations in experimental setups, are missing or unclear, limiting the robustness of the findings.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers unanimously lean toward acceptance.\"}", "{\"title\": \"Requesting feedback on the rebuttal\", \"comment\": \"Thank you again for your helpful feedback. We tried our best to address your concerns with additional experiments and details about the benchmark datasets.\\n\\nPlease let us know if you have any further questions or comments. We would be happy to provide additional clarifications.\"}", "{\"title\": \"Thanks for your feedback\", \"comment\": \"Thanks for the authors' response. I choose to keep my score.\"}", "{\"summary\": [\"LLMs are often used as experts for finding causal graphs, but they can\\u2019t distinguish between direct and indirect edges. The authors propose to use causal order as a more stable alternative, along with a triplet prompt approach.\", \"Their proposed approach facilitates the recovery of causal order despite imperfect experts.\", \"A prompt strategy that introduces an auxiliary variable for every variable pair and instructs the LLM to avoid cycles within this triplet.\", \"The triplet prompt leads to fewer cycles than pairwise\", \"Causal order is a simpler structure that can still encode helpful information for down-stream tasks;\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Well-written motivation and discussion of limitations;\\n\\nGood discussion on the utility of the causal order, and related works. 
\\n\\nSound experiments with empirical evidence of the effectiveness of the proposed method.\", \"weaknesses\": \"Despite a sound experimental analysis, there are a few details missing, e.g., how many repetitions were done in Tables 1 and 2.\\n\\nOne potential area not explored/mentioned is what happens with a large number of features. As the proposed method decreases the feature space, would the proposed method also facilitate the adoption of causal discovery in areas that contain larger graphs?\", \"questions\": \"Q1: Can you clarify the experimental setup in Table 4? The title shows \\u2018ours - < causal discovery > + LMM\\u2019. But in line 279 it is described that \\u201ccausal order is used to reduce search space of causal discovery methods\\u201d. In that sense, should it be LLM + < causal discovery > instead? As the LLM with triplet is used to reduce the search space inside the causal discovery method.\", \"q2\": \"Can you describe in more detail the causal effect component? How is the causal order used in that context? Is it used to drop variables (aka, no informative features or down-stream features)? How are the counterfactuals estimated?\", \"q3\": \"Another ablation study that would be helpful is to see the robustness to the number of variables. Most causal discovery methods struggle to scale for larger graphs / number of features. How could LLMs + triplet be adopted to reduce the search space and improve this current causal discovery bottleneck?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continuation of response to reviewer JgEV\", \"comment\": \"While BFS and BFS+stats fail to give 0 cycles across all graphs even with GPT-4, triplet consistently gives acyclic causal graphs, with lower Dtop.\\n\\n**PC + BFS**. 
We present another ablation which explores a hybrid approach where the PC algorithm integrates an LLM-derived prior (GPT-4) obtained via BFS for Alzheimer's and COVID graphs. The prior directly provides edge orientations, which guide the initial graph structure, while PC subsequently orients remaining edges. Unlike Triplet, which uses only the causal order, this approach incorporates the full graph as a prior. The PC algorithm is further supported by a large observational dataset of 10,000 samples.\\n\\nThe results show that PC + BFS (GPT-4) is also outperformed by the Triplet method (GPT-3.5). Specifically, PC+BFS yields 1 cycle and a higher SHD on the Covid dataset. On the Alzheimers dataset, PC+BFS is comparable: it yields higher Dtop but a lower SHD. \\n\\n| Dataset | Dtop | SHD | IN | Cycles |\\n| -------- | -------- | -------- | -------- | -------- |\\n| Alzheimers | 5 | 14 | 0 | 0 |\\n| Covid | - | 36 | 0 | 1 |\\n\\n\\n**Response to Questions:**\\n\\n>1. Experiment - Missing reference to Table 2 in the main paper.\\n\\nThanks for pointing it out. We will fix this.\\n\\n>2. Line 114 - Could the authors provide some justification on why the redundancy leads to a more reliable order?\\n\\n**Response**: In the triplet method, for each pair of nodes, we have multiple answers from the LLM, each considering a different auxiliary node as context.\\nTo aggregate the final graph, we take a majority vote on the answers from each edge, further leading to robustness (See our explanation in Section 4.2 (line 334)).\\nRedundancy thus makes the causal order more reliable, in comparison to the pairwise method where each pair is queried only once and without any additional context about other nodes in the graph.\\n>3. Line 52 - Would using triplet prompt lead to a more efficient method in this sense?\\n\\n**Response**: Yes, rather than providing all the nodes as context, the triplet prompt only adds a single node as additional context. 
As a result, the LLM handles only 3 nodes at a time, even for large graphs.\"}", "{\"title\": \"Response to reviews by reviewer JgEV\", \"comment\": \"We thank the reviewer for their insightful comments; we have tried our best to incorporate their suggestions and answer their queries.\\n\\n**Response to weaknesses:**\\n\\n**>3.1 - While I agree with the intuition that identifying causal .... chance of errors still preserves the usefulness of the method.**\\n\\n**Response:** The key result of this section is that even with a Perfect Expert (e.g., human domain expert), the inferred graph using pairwise queries can be incorrect (Prop 3.2). Given that inferring edges using pairwise queries is the dominant method in the LLM-based discovery literature, we believe that this is a result of significance.\\n\\n**>3.2 - This section should have provided evidence addressing the ..... this condition broadly to include everything that satisfies it seems impractical.**\\n\\n**Response:** \\nSection 3.2 provides 3 key results showing the usefulness of causal order.\\n\\n1. Prop 3.1 shows that causal order can be used to find a valid, unbiased backdoor set. For example, in Table A14, we compare the estimation error when using the minimal backdoor set obtained using the full graph, to the \\\"maximal\\\" backdoor sets obtained using the causal order. Across sample sizes from 250-10000 for the Asia dataset, we find the difference in estimation error between the two kinds of adjustment sets is minimal (see also Table 5). However, as Cinelli et al. (2022) argue, such a maximal set may have high variance. Note that we consider the causal order as output only because it is a structure that we can obtain accurately from an expert. To obtain both unbiased estimation and low variance, we can use the causal order as input to a data-based graph discovery algorithm (as described in pt 3 below) and then derive the optimal backdoor set using the obtained graph. \\n\\n2. 
In addition to Prop 3.2, the key result of this section is that the causal order metric, $D_{top}$, is a more accurate measure of effect estimation error than SHD. This result is significant since SHD is a popular method for evaluating graph quality, especially for LLM-based methods; we show that if the downstream task is effect inference, measuring $D_{top}$ is more suitable. \\n\\n3. We also provide algorithms on how causal order can be used to improve the accuracy of data-based graph discovery algorithms. Empirically, we show that the accuracy of incorporating (expert-predicted) causal order in discovery algorithms is higher than that of the discovery algorithms alone. \\n\\n**>4.1 - Should these be considered as contributions of the paper?**\\n\\n**Response:** Yes, our extensions of pairwise strategies can be considered as contributions of the paper. To evaluate the benefit of the Triplet method, we create stronger versions of the standard pairwise method that utilize additional graph context and the latest advances in LLM prompting. The objective is to make sure that we have explored possible extensions of the pairwise method as much as possible, before proposing the Triplet method. One of these methods, Pairwise (CoT), obtains significantly better results than the standard pairwise method used in the literature (see Table 3), but is still less accurate than the Triplet method. \\n\\n**>4.2 - The authors try to justify why triplet prompt is ... would become an \\u03f5\\u2032-expert where \\u03f5\\u2032>\\u03f5? If so, would this become a trade-off and how should we interpret this tradeoff?**\\n\\n**Response:** Yes, as we increase the number of nodes in the prompt, there is a tradeoff: Adding more nodes provides more context and thus is beneficial, but more nodes in the LLM's prompt can also lead to higher error (Levy et al., 2024) and higher computational cost. 
Therefore, we tackled this question empirically by comparing pairwise, triplet, and quadruplet-based prompts. As Table A9 shows, using a quadruplet prompt slightly increases accuracy but leads to a significant increase in the number of LLM calls. In contrast, the increase in accuracy (especially cycle avoidance) is substantial when moving from pairwise to the triplet method. \\n\\nGiven these considerations, we decided to go with the Triplet prompt, as it allows for adding more context with minimal increase in prompt complexity and total number of LLM calls. Note that future iterations of language models might be able to handle longer context better with more improvements; therefore, \\u03f5\\u2032 will vary with model size, architecture, and the data the model is trained on. Since we do not have this information, it will be difficult to model \\u03f5\\u2032 accurately. However, with the LLMs that we have tried (GPT-4, GPT-3.5, Phi-3, and Llama 3), we do not see an increased error when using the triplet prompt compared to the pairwise prompt.\"}
9iN8p1Xwtg
Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction
[ "Zhenmei Shi", "Yifei Ming", "Xuan-Phi Nguyen", "Yingyu Liang", "Shafiq Joty" ]
Large Language Models (LLMs) have demonstrated remarkable capabilities in handling long context inputs, but this comes at the cost of increased computational resources and latency. Our research introduces a novel approach for the long context bottleneck to accelerate LLM inference and reduce GPU memory consumption. Our research demonstrates that LLMs can identify relevant tokens in the early layers before generating answers to a query. Leveraging this insight, we propose an algorithm that uses early layers of an LLM as filters to select and compress input tokens, significantly reducing the context length for subsequent processing. Our method, GemFilter, demonstrates substantial improvements in both speed and memory efficiency compared to existing techniques, such as standard attention and SnapKV/H2O. Notably, it achieves a 2.4$\times$ speedup and 30\% reduction in GPU memory usage compared to SOTA methods. Evaluation on the Needle in a Haystack task shows that GemFilter significantly outperforms standard attention, SnapKV and demonstrates comparable performance on the LongBench challenge. GemFilter is simple, training-free, and broadly applicable across different LLMs. Crucially, it provides interpretability by allowing humans to inspect the selected input sequence. These findings not only offer practical benefits for LLM deployment, but also enhance our understanding of LLM internal mechanisms, paving the way for further optimizations in LLM design and inference.
[ "Large Language Models", "Long Context", "Inference Acceleration" ]
Reject
https://openreview.net/pdf?id=9iN8p1Xwtg
https://openreview.net/forum?id=9iN8p1Xwtg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x37Zm0isrn", "wpbSmnEbeO", "s0I6CJm5g6", "reXuBbQ78y", "nVb8dZRTac", "jXWMnbH49X", "j2eVVfgOpl", "imaVv1slqW", "h6cSTfoogJ", "gVxb8hAL9w", "gVhtK4GvCQ", "fNhLTqmGip", "fKIJ2W5z7J", "bvAmRna18n", "Y80ieYMlG0", "TjFJsP3pId", "PA54ns2FPI", "Njq2QEv50Y", "NMhL8NhF6U", "M3FimuTXyE", "LumDaAK3Ts", "JgE7o7kyqw", "Hfo9YdD7SJ", "FO31PP6XMf", "DkO9iGppCt", "D13XispofH", "AFFHzVXwBw", "47eNd2zPen", "39UI1cxJzc", "34LYTxAZdM", "2NMJRgzLR3", "1flrJhRUyH", "0EPPYtRvaL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732628096459, 1732272625212, 1732506336152, 1732396641290, 1737523450939, 1732933901396, 1732272511563, 1730566089426, 1732272689531, 1732690766994, 1732917748678, 1733111474888, 1730714808504, 1732532354453, 1732935609993, 1733081487259, 1730654206129, 1732906492837, 1733164649781, 1734712134237, 1732938228944, 1730578688970, 1733299835295, 1733071221230, 1732532382850, 1732532263277, 1732272432945, 1732690840587, 1732691116042, 1732532182542, 1732272401956, 1732586846856, 1732908733633 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_FAs6" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_bddG" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_Np9u" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_FAs6" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_Np9u" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_bddG" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_bddG" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_rvyU" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_Np9u" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_rvyU" ], [ "ICLR.cc/2025/Conference/Submission1405/Area_Chair_LBgU" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_Np9u" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_FAs6" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ], [ "ICLR.cc/2025/Conference/Submission1405/Reviewer_bddG" ], [ "ICLR.cc/2025/Conference/Submission1405/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for the response\", \"comment\": \"I thank the authors for the experiments on larger models and the MInference baseline. Can the authors further explain why GemFilter falls short of MInference specifically in Single-Document QA?\"}", "{\"comment\": \"We extend our gratitude to the reviewer for their meticulous feedback. 
We offer the following elucidations:\\n\\n### W1: multi-hop Q&A task\\nThank you for your question. Note that the LongBench benchmark has multi-hop QA tasks like HotpotQA and 2WikiMQA. Our method beats all other baselines in Table 1, which shows that GemFilter has a good ability to handle questions focusing on different parts of the given long context. We find that GemFilter is able to select key information from multiple documents rather than only focusing on one part. \\n\\n### W2: The robustness of the method still needs to be further demonstrated.\\nThank you for your careful check! We agree that for different tasks the best filter layer may be different. However, we show that some layers achieve competitive performance across all tasks, although not the best. On the other hand, we can adaptively choose the layer $r$ for a given task:\\n- For a fixed $r$, we use the $r$-th layer as the filter to generate the output. Compare the output with filtering and the output without filtering, and compute the similarity as a metric. \\n- Then, we pick the $r$ that leads to the largest similarity. \\n\\nWe also point out that one can use traditional techniques for hyperparameter selection for picking $r$, such as using a validation set from the target task. Note that hyperparameters are typical in machine learning methods. Our method has only one hyperparameter, and there is a simple rule of thumb for setting it and a good value range to get stable performance, which reviewer FAs6 also noted as a strength. Thus, we believe setting the hyperparameter $r$ is not difficult.\\n\\n### W3: For Mistral Nemo and Phi 3.5, the filter layer appears in the middle (or even later) of the model, limiting the efficiency gain that could be achieved with this method.\\nThank you for your comments. 
Our method reduces both the prompt computation and iterative generation time.\\n- Filtering using the middle layer or nearby still reduces the prompt time by nearly half (see Figure 6).\\n- Furthermore, note that the reduction in the generation time is not affected by the filter layer selection. The reduction is significant, i.e., a 2x to 5x speedup.\\n\\nOn the other hand, as our method reduces both the prompt computation and iterative generation time, reviewer FAs6 recognizes this novelty: \\u201cWhile most KV Cache papers focus on the generation phase, this paper demonstrates the possibility of token eviction in the prefilling phase, which to my knowledge is interesting.\\u201d\\n\\n### Q1: Baseline (in Table 1) seems to be lower than expected\\nThank you for your careful reading. We also noted that Llama-3-8B does not perform well on some QA tasks. The reason is that we have not used the chat template specialized for Llama. As we have not used the chat template for Mistral Nemo and Phi 3.5, for a fair comparison, we keep the same protocol across all models. \\n\\n### Q2: Some analysis for the improvements over dense attention\\nThank you for your valuable questions! There are two reasons why our method is better than dense attention:\\n- Our method's filtering already helps filter out most useless tokens. This significantly helps the LLMs that were otherwise lost in the haystack.\\n- In the second run of GemFilter, the input length will be reduced from $n$ to $k$. The RoPE positional distance among the tokens will be much smaller, and the LLMs have better performance in a short context. \\n\\nOn the other hand, we add new ablation studies in Section 4.4 and Appendix D.5 in the revision (Line 483-485 & 903-962) to further verify our second point above. We refer the reviewer to our revision for more details. \\n\\n### Q3: How is the running time of the decoding stage (gen time) computed in Figure 3/6? 
What is the specific setting for the evaluation (i.e., generation length)?\\n\\nThank you for your question! The iterative generation running time and memory consumption are evaluated by generating $50$ tokens. We add the setting for the evaluation of running time and memory consumption in the revision (Line 121-122 & 489-490).\\n\\nWe hope our response addresses your concerns.\"}", "{\"comment\": \"I extend my gratitude to the authors for their comprehensive response, particularly the inclusion of additional baselines and ablation studies of *GemFilter-One-Run*. I have a follow-up question:\\n\\n- In the GemFilter methodology, the initial $r$ layers are employed to identify significant tokens, whereas in GemFilter-One-Run, the entirety of the $m$ layers is utilized. Could you provide insights into the consistency of this approach? Specifically, are the tokens selected by the initial $r$ layers analogous to those chosen by the entire $m$ layers, or perhaps even superior? An analysis of the consistency or divergence of tokens selected by different layers, along with a comparison to tokens selected by SnapKV, would be highly appreciated.\"}", "{\"comment\": \"I would like to thank the authors for the detailed response. And I still have a few follow-up concerns regarding the clarifications provided:\", \"regarding_w1\": \"I would like to see more clarification on the token filtering process in multi-hop question-answering scenarios, particularly in interactive settings. When questions are presented sequentially, particularly in a multi-round dialogue format rather than simultaneously, does the token filtering process need to be repeated for each new query? 
This raises a potential concern about information loss, as tokens that might be crucial for answering future questions could be inadvertently pruned during filtering steps in earlier rounds of conversation.\", \"regarding_w2\": \"While the authors propose adaptively choosing layer $r$ for specific tasks to address accuracy degradation, I believe the robustness concerns persist for several reasons. First, in the context of deploying a general-purpose LLM, it may be impractical to calibrate hyperparameters in advance given the diverse nature of user requests. Second, the hyperparameter search process itself incurs computational overhead. The task-specific nature of these hyperparameters could potentially limit the method's practical applicability in real-world scenarios where task requirements are not known a priori.\", \"regarding_q1\": \"I have concerns about the evaluation methodology when the chat template differs from the model's original template. In cases where the model's baseline performance is already much lower than expected, the evaluation may be less informative / convincing. The accuracy loss caused by the proposed method may not be accurately reflected, since the accuracy of the original model is already impaired due to the template mismatch.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you\", \"comment\": \"We are glad that our response fixes your concerns! If the reviewer has any further concerns, we are willing to address them. We appreciate the reviewer for the time and effort involved in the review.\"}", "{\"comment\": \"We extend our gratitude to the reviewer for their meticulous feedback. We offer the following elucidations:\\n\\n### W1: Public LLMs that are not well-trained to leverage those deeper layers\\nThank you for pointing out these insightful comments. The authors have a different opinion from the reviewer. 
We think the \\\"not well trained\\\" argument might not be justified/true with current LLMs, particularly note that LLaMA3.1 has been trained on 15T tokens. \\n\\nFrom our perspective, the phenomenon that early layers can be used as filters is not because the LLMs are not well-trained. They are well-trained, such that the early layers are more responsible for understanding and analyzing the input while the later layers are more responsible for generating outputs. From this perspective, such property is inherent in well-pretrained language models. While things can change when completely new paradigms are proposed, we believe our techniques are significant contributions to the community.\\n\\n### W2.1: It may not be easy to robustly choose the layer $r$ for different tasks and models.\\nThank you for pointing this out! Due to the limited time of the rebuttal period, we are not able to fully solve this problem by some end-to-end automatic methods. We have some possible adaptive methods to choose the layer $r$ on a given task:\\n- For a fixed $r$, we use the $r$-th layer as the filter to generate the output. Compare the output with filtering and the output without filtering, and compute the similarity as a metric. \\n- Then, we pick the $r$ that leads to the largest similarity. \\n\\nOn the other hand, we can simply pick some layer near the middle layer, which then leads to good performance as shown in Table 2. Thus, the rule of thumb is to set $r$ to be close to half of the number of layers. \\n\\nWe also point out that one can use traditional techniques for hyperparameter selection for picking $r$, such as using a validation set from the target task. Note that hyperparameters are typical in machine learning methods. Our method has only one hyperparameter, and there is a simple rule of thumb for setting it and a good value range to get stable performance, which is also believed as a strength point from reviewer FAs6. 
Thus, we believe setting the hyperparameter $r$ is not difficult.\\n\\n### W2.2: More datasets should be included for evaluation.\\nWe believe that LongBench may already be a good representative set of tasks and datasets, as it is a multi-task benchmark designed to rigorously evaluate long-context understanding capabilities across various datasets, including single- and multi-document Question Answering (QA), summarization, few-shot learning, and synthetic tasks. \\n\\nDue to the limited time of the rebuttal period, we are not able to fully examine our methods on a new benchmark. However, we have run some new ablation studies (Line 476-485 & 860-962) and compared our method with more baselines (Line 378-407 & 419-424 & 430-431). We refer the reviewer to our revision for more details. \\n\\n### Q1: In practice, LLMs are generalists, and therefore need to handle different tasks and requests dynamically. How do you make sure that the proposed method can robustly find a good early-layer signal to get high accuracy for a mixture of tasks and requests?\\n\\nThank you for your questions! On the one hand, in our experiments, we use a fixed filter layer for all experiments, and we can see the results are strong across all tasks. Thus, we believe that our filter layer is robust. On the other hand, our conjecture is that the LLMs use early layers to understand and analyze the input while the later layers are more responsible for generating outputs. Fully verifying the conjecture requires substantial effort and may be beyond the scope of this work. We leave this interesting and important topic as our future direction. 
\\n\\n\\nWe hope our response addresses your concerns.\"}", "{\"summary\": \"Stemming from the observation that only selecting 100 tokens in the pre-filling phase is enough to preserve the performance achieved by kv cache eviction methods, this paper proposes GemFilter that identifies relevant tokens with early layers and pass fewer tokens to the generation phase.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) While most KV Cache papers focus on the generation phase, this paper demonstrates the possibility of token eviction in the prefilling phase, which to my knowledge is interesting. Also, I am not actively working on KV cache eviction and might miss relevant references.\\n\\n(2) The explanation of Algorithm 1 in section 3.2 is very clear. \\n\\n(3) The flexible choice of filter layer is quite useful in practice, avoiding extensive hyperparameter tuning. I suspect this is highly related to the pre-LN used in most LLMs. I am wondering if it is possible to report a similar study of the filter layer for Gemma 2, as it uses a different layer norm.\", \"weaknesses\": \"(1) What is the intuition behind selecting tokens with the most attention specifically from the last query? I guess this will likely focus on the initial tokens and the last tokens that are close to the query. I would like to see more ablation studies here. Instead of the last query, can we choose other queries? Baselines here can be randomly selecting rows of the attention matrix, and selecting rows with the largest l2 norm.\\n\\n(2) A certain level of performance loss is observed for Phi 3.5 Mini. While for larger models like LLaMA 8B and Mixtral 12B, we somehow observe improved average scores. Perhaps, full tokens are still needed for small models.\\n\\n(3) I am not sure how important to reduce the memory requirement for the pre-filling phase, as this phase is more computing-intensive than memory-intensive. 
And the computation here can be done in parallel. In contrast, memory is a big issue in the decoding phase.\\n\\n(4) A very important baseline is missing: MInference (https://github.com/microsoft/MInference), which also aims to reduce pre-filling memory.\", \"questions\": \"1. The flexible choice of filter layer is quite useful in practice, avoiding extensive hyperparameter tuning. I suspect this is highly related to the pre-LN used in most LLMs. I am wondering if it is possible to report a similar study of the filter layer for Gemma 2, as it uses a different layer norm.\\n\\n2. What is the intuition behind selecting tokens with the most attention specifically from the last query? I guess this will likely focus on the initial tokens and the last tokens that are close to the query. I would like to see more ablation studies here. Instead of the last query, can we choose other queries? Baselines here can be randomly selecting rows of the attention matrix, and selecting rows with the largest l2 norm.\\n\\n3. A certain level of performance loss is observed for Phi 3.5 Mini. While for larger models like LLaMA 8B and Mixtral 12B, we somehow observe improved average scores. Perhaps, full tokens are still needed for small models.\\n\\n4. I am not sure how important to reduce the memory requirement for the pre-filling phase, as this phase is more computing-intensive than memory-intensive. And the computation here can be done in parallel. In contrast, memory is a big issue for the decoding phase.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We extend our gratitude to the reviewer for their meticulous feedback. We offer the following elucidations:\\n\\n\\n### W1 & Q2: I would like to see more ablation studies here. Instead of the last query, can we choose other queries?\\nThank you for your valuable suggestions! 
We updated the experiments with the two baselines you suggested in Section 4.4 and Appendix D.4 in revision Line 476-482 & 860-901.\\n\\nIn Figure 9, we introduce two methods: (1) selecting middle rows of the attention matrix and (2) selecting rows with the largest $\\\\ell_2$ norm. Both methods fail in the Needle in a Haystack task, verifying that selecting the last query token is essential. \\n\\n### W2 & Q3: A certain level of performance loss is observed for Phi 3.5 Mini. While for larger models like LLaMA 8B and Mixtral 12B, we somehow observe improved average scores. Perhaps, full tokens are still needed for small models.\\nThank you for your careful check! We can see that, in Table 1, with more tokens preserved, GemFilter achieves better performance on Phi3.5, i.e., GemFilter-4096 is much better than GemFilter-1024. Note that the small model is fast and consumes little memory. Thus, how to balance the trade-off between token number and performance is an interesting and important topic. We leave it as our future work. \\n\\n### W3 & Q4: I am not sure how important to reduce the memory requirement for the pre-filling phase, as this phase is more computing-intensive than memory-intensive. And the computation here can be done in parallel. In contrast, memory is a big issue in the decoding phase.\\n\\nThank you for your comments! We agree with your insightful opinion. \\n\\nOur method can save both memory consumption and running time in both the pre-filling and decoding phases, i.e., we save all four complexities, as shown in Figure 3 and Figure 6. Thus, we believe that this is a strength of our method rather than a weakness. \\n\\n### W4: A very important baseline is missing: MInference (https://github.com/microsoft/MInference), which also aims to reduce pre-filling memory.\\nThank you so much for pointing out these suggestions! We missed this important baseline in the original version. We have added this brilliant work in our revision Line 157-160. 
We also provide the comparison with MInference on LongBench [2] in Table 1, where our method is comparable to MInference in accuracy but faster than MInference in the prompt computation phase. We refer the reviewer to the **Missing baseline** section of the Global Response for more details. \\n\\nWe only evaluate MInference on LLaMA 3.1 8B Instruct, as Mistral Nemo 12B Instruct and Phi 3.5 Mini 3.8B Instruct are not currently supported. \\n\\n### Q1: I am wondering if it is possible to report a similar study of the filter layer for Gemma 2, as it uses a different layer norm.\\nThank you for your question! We checked the official Gemma 2, which only has a context window of 8k, while our paper focuses on long-context settings. We will evaluate Gemma 2 in our next revision if the official Gemma 2 model supports a long context. \\n\\nWe hope our new experiments address your concerns. \\n\\n### Reference\\n\\n[1] Jiang, H., Li, Y., Zhang, C., Wu, Q., Luo, X., Ahn, S., ... & Qiu, L. (2024). Minference 1.0: Accelerating pre-filling for long-context llms via dynamic sparse attention. NeurIPS\\u201924.\"}", "{\"title\": \"Third version revision update\", \"comment\": \"We made a third revision, where the updates are in purple (we also keep our old updates). In Line 1049-1179, we show the index selection difference between GemFilter and SnapKV on the LLaMA 3.1 and Phi 3.5 models.\"}", "{\"comment\": \"Thank you for the clarification! Now I understand that the total prefilling time of GemFilter would still be shorter than the dense baseline when dealing with multi-round dialogues. I raised my score to 5.\"}", "{\"title\": \"Looking forward to receiving your feedback\", \"comment\": \"Dear Reviewer rvyU,\\n\\nWe hope we have adequately addressed your concerns. We would be very grateful if you could provide feedback on our rebuttal, as only one day remains before the discussion deadline. 
If you require further clarification or have any additional concerns, please do not hesitate to contact us. We are more than willing to continue communicating with you.\\n\\nWarmest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper introduces GemFilter, which leverages early layers of large language models (LLMs) to compress input tokens and accelerate inference. GemFilter identifies relevant tokens in the early layers before generating answers to a query, significantly reducing the context length for subsequent processing. The proposed method achieves a 2.4 $\\\\times$ speedup and 30% reduction in GPU memory usage compared to SOTA methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper is targeting a critical issue in large language models (LLMs) by proposing a novel approach to accelerate inference and reduce GPU memory consumption. The proposed method, GemFilter, demonstrates substantial improvements in both speed and memory efficiency compared to existing techniques, such as standard attention and SnapKV/H2O. The paper is well-written and easy to follow, and the experimental results are convincing. The evaluation on the Needle in a Haystack task shows that GemFilter significantly outperforms standard attention, SnapKV and demonstrates comparable performance on the LongBench challenge. The proposed method is simple, training-free, and broadly applicable across different LLMs. The paper also provides interpretability by allowing humans to inspect the selected input sequence.\", \"weaknesses\": \"The method presented in the paper lies between prompt compression and KV cache compression. Its token selection approach aligns more closely with KV cache compression techniques like SnapKV and H2O, while its re-computation of selected tokens resembles prompt compression methods such as LLMLingua [1] and LongLLMLingua [2]. 
Although the authors compare their method with SnapKV and H2O, including a comparison with prompt compression methods, particularly LongLLMLingua, would enhance the analysis.\\n\\nThe performance improvement of GemFilter may stem from two factors: (1) the selection of important tokens, and (2) the re-computation of these tokens, which might mitigate issues like \\\"lost-in-the-middle.\\\" An ablation study to isolate the contribution of each factor would be beneficial.\\n\\n[1] Jiang, Huiqiang, et al. \\\"LLMLingua: Compressing prompts for accelerated inference of large language models.\\\" arXiv preprint arXiv:2310.05736 (2023).\\n\\n[2] Jiang, Huiqiang, et al. \\\"LongLLMLingua: Accelerating and enhancing LLMs in long context scenarios via prompt compression.\\\" arXiv preprint arXiv:2310.06839 (2023).\", \"questions\": \"Please refer to the weaknesses section for questions. I am willing to adjust the score if the concerns are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you and further reply to new concerns (Part 1)\", \"comment\": \"We are glad that our reply has addressed some concerns from the reviewer! We sincerely thank you for your insightful response. We appreciate your time and we would like to address your follow-up concerns.\\n\\n### W1.1: In a multi-round dialogue format rather than simultaneously, does the token filtering process need to be repeated for each new query?\\nThank you so much for your insightful questions! Here is the way to use GemFilter for multi-round dialogue. \\n\\nWe use LLaMA 3.1 8B Instruct as an example. After each round of dialogue, we keep the full KV Cache of the whole history for the first 13 layers. For a newly arriving query, we use the first 13 layers to get the index set and then conduct the second run of GemFilter. Finally, we update the first 13 layers\\u2019 KV Cache for the new round. 
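The multi-round procedure above can be sketched as a small, self-contained toy. The class and its score inputs are hypothetical illustrations: a real implementation would obtain the per-token scores from layer 13's last attention row and run the full model over the selected tokens.

```python
class ToyGemFilterChat:
    """Toy sketch of GemFilter across dialogue rounds: a persistent early-layer
    KV cache of the whole history, plus a per-round top-k second run."""

    def __init__(self, r=13, total_layers=32, k=2):
        self.r, self.total_layers, self.k = r, total_layers, k
        self.early_cache = []  # tokens whose KV cache is kept for the first r layers only

    def round(self, new_tokens, scores):
        # First run: extend the early-layer KV cache with this round's tokens,
        # then rank ALL cached tokens using the (toy) layer-r attention scores.
        self.early_cache.extend(new_tokens)
        ranked = sorted(range(len(self.early_cache)),
                        key=lambda i: scores[i], reverse=True)
        selected = sorted(ranked[:self.k])
        # Second run: a full forward pass over only the k selected tokens.
        return [self.early_cache[i] for i in selected]
```

Note that `early_cache` keeps the entire history across rounds (so tokens pruned in one round can still be re-selected later), while only `k` tokens reach the full model in any given round.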
\\n\\nThus, when $n \\\\gg k$, compared to the standard way:\\n- The KV Cache memory consumption of GemFilter is only 13/32 of that of the standard way. \\n- The running time is only 13/32 of that of the standard way, as we only compute the new query over the full KV Cache of 13 layers rather than 32 layers. \\n \\n### W1.2: This raises a potential concern about information loss, as tokens that might be crucial for answering future questions could be inadvertently pruned during filtering steps in earlier rounds of conversation.\\nThank you for pointing this out! We agree that this is a genuine concern in the KV-cache compression community. Indeed, most static KV-cache compression methods in this line, e.g., H2O [1], SnapKV [2], and MInference [3], suffer from this information loss problem. \\n\\nThus, some very recent preprint work [4,5], which was released a few days before the ICLR 2025 submission deadline, tried to solve the problem by using dynamic KV-cache compression. They need to save all layers' full KV-cache in memory and use approximate nearest neighbor search methods, e.g., IVF indexes in [4] and Locality-Sensitive Hashing in [5], for dynamic queries. However, there are several concerns.\\n- There is no efficient GPU implementation of the approximate nearest neighbor search method. Thus, they need to move part of the computation to the CPU, which may introduce additional I/O and communication overhead. \\n- They are not memory-efficient as they need to save all layers' full KV-cache. GemFilter only needs to store early layers\\u2019 full KV-cache. \\n- Dynamic KV-cache compression is a very new direction and the effectiveness of many methods has not been fully verified by the community. \\n\\nExtending GemFilter to a dynamic KV-cache compression setting is very interesting. It may be beyond the scope of this paper, and its implementation is clearly beyond what the rebuttal period allows. 
We are leaving it on our TODO list. We are willing to discuss more per the reviewer's request. Thank you again for your constructive feedback!\\n\\n### W2: The task-specific nature of these hyperparameters could potentially limit the method's practical applicability in real-world scenarios where task requirements are not known a priori.\\nThank you so much for your questions. We try to address your question below:\\n- If we know the downstream task, e.g., a chatbot for a specific use, we can tune this hyperparameter according to our previous response. \\n- If we do not know the downstream task, we can choose some layer that is generally good, e.g., layer 13 for LLaMA 3.1 8B Instruct and layer 19 for Mistral Nemo 12B Instruct. Overall, their performance is solid across all tasks, as shown in LongBench. \\n\\nOn the other hand, [1,2,3] also have hyper-parameters and may face the same issue. We also would like to point out that knowledge-free acceleration or adaptation may be inherently hard, as suggested by the no-free-lunch theorem. For example, instruction finetuning may make the model lose some abilities. \\n\\nWe hope our proposed solution and discussion may relieve your concerns. \\n\\n### Q1: The accuracy loss caused by the proposed method may not be accurately reflected, since the accuracy of the original model is already impaired due to the template mismatch.\\nThank you so much for your suggestion. \\n\\nWe report the performance of different methods on the LongBench QA task using LLaMA 3.1 8B Instruct and its official LLaMA Chat template. We refer the reviewer to Line 1020-1047 of the second version revision for more details. \\n\\nIn Table 3 of the revision, we can see that, after applying the template, all methods gain a large improvement in performance compared to Table 1. Also, we can see that GemFilter has a performance comparable to that of other state-of-the-art methods. 
\\nIt is interesting to understand the difference between the attention mechanisms with and without using a chat template. We leave it as our future work. \\n\\nWe hope our answer may address your new concerns.\"}", "{\"comment\": \"I would like to express my appreciation for the authors' efforts in addressing my concerns, which have significantly improved the clarity and addressed the concerns I raised in my previous review. I am satisfied with the authors' responses and the latest results. Thanks for the hard work and dedication to this research.\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for the valuable comments and positive score. We appreciate your time and response!\"}", "{\"summary\": \"This paper looks into KV cache compression for efficient LLM inference under long context. The key insight is that LLMs can identify important tokens in early layers before generating answers. As such, the paper proposes to use early layers to select important tokens, effectively reducing the context length for subsequent generation. Evaluation shows that the proposed method outperforms SnapKV and H2O with additional memory reduction and speedups.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Interesting observation that the last row of attention matrices in early layers may serve as a signal for locating important tokens.\", \"Promising results with good accuracy and memory savings.\"], \"weaknesses\": [\"Limited technical novelty. Prior work has identified that top layers of LLMs are not very effective, https://arxiv.org/abs/2403.17887, e.g., by pruning up to half of the layers, the model remains accurate. This seems to be more of an issue of the public LLMs that are not well-trained to leverage those deeper layers. 
From that perspective, the proposed method is more of a hack that tackles these issues, which may disappear as the pre-training recipe evolves.\", \"It may not be easy to robustly choose the layer r for different tasks and models. While using early layers to identify a fixed set of tokens, the method may lose adaptiveness in KV cache compression. As a result, it may suffer from poor generalizability on different tasks, e.g., those that are not tested in the paper. Therefore, for this particular work, more datasets should be included for evaluation.\"], \"questions\": \"In practice, LLMs are generalists, and therefore need to handle different tasks and requests dynamically. How do you make sure that the proposed method can robustly find a good early-layer signal to get high accuracy for a mixture of tasks and requests.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the response!\\n\\nFor W1, I agree with the authors that one can run GemFilter for multiple times when dealing with multi-round dialogues. Therefore, if the questions are provided in different conversation rounds, GemFilter may still be able to locate the related contexts. However, assume that GemFilter will use the first 13 of 32 layers to filter the important tokens, when there is more than 3 rounds of dialogue, the total prefilling time would be (n * 13/32) times slower than the dense baseline, where n is the number of conversation rounds. \\n\\nFor Q1, I would like to thank the authors for providing new evaluation results with official LLaMA Chat template. I think the results in Line 1020-1047 can more accurately capture the GemFilter's actual performance now (compared to Table 1).\"}", "{\"title\": \"Post-rebuttal response\", \"comment\": \"Dear authors,\\n\\nThank you for providing the response! 
The work has good potential, but the current state of the draft leaves multiple questions related to the robustness and generalizability of the proposed method on the table, such as the selection of r across different tasks and the benefit it brings to different models. Also, regardless of whether LLMs are well-trained or not, it would be better if the authors include simple baselines such as pruning of top layers like in https://arxiv.org/abs/2403.17887 with varying pruning ratio. This may make readers better appreciate the contribution of the work.\"}", "{\"metareview\": \"The paper proposes a novel approach, GemFilter, designed to accelerate inference and reduce memory consumption in large language models (LLMs) by utilizing early layers of the model to identify and compress relevant input tokens. This method aims to address the long-context bottleneck by filtering out unnecessary tokens early in the process, significantly decreasing the context size for subsequent layers. The authors present empirical results showing that GemFilter achieves a 2.4x speedup and a 30% reduction in GPU memory usage compared to state-of-the-art (SOTA) methods, such as standard attention and SnapKV/H2O. The method is claimed to be training-free, broadly applicable to various LLMs, and interpretable, with the ability to inspect the selected input tokens.\\n\\nThe strengths of the paper lie in its novel approach to tackling the long-context problem, a critical issue for LLMs. The proposed method demonstrates substantial empirical improvements, especially in terms of speed and memory efficiency, which are key for practical deployment. Additionally, the simplicity of GemFilter and its broad applicability across different LLM architectures make it a promising solution. The interpretability aspect is also noteworthy, as it allows users to inspect the token selection process, which is often opaque in large models. 
Overall, the paper is well-written and presents a clear and convincing argument for the effectiveness of the proposed method.\\n\\nDespite these strengths, there are a number of unresolved issues. Reviewers raised concerns about the lack of comparison to other prompt compression methods, particularly LongLLMLingua, which would help place the contribution in a broader context. Furthermore, the authors did not provide an ablation study to isolate the contributions of token selection versus re-computation, leaving some ambiguity around the specific factors driving performance improvements. While the method shows promise, the lack of such detailed analysis limits the overall impact and understanding of the approach. Despite the authors' attempts to address the concerns raised by the reviewers in their rebuttal, the key issues remained unresolved. \\n\\nGiven the highly competitive nature of the ICLR 2025 submission process, the paper\\u2019s overall contribution does not stand out enough to justify acceptance. While the proposed method is interesting and offers practical benefits, the lack of sufficient comparison, theoretical insight, and detailed analysis makes it less competitive compared to other submissions. Consequently, the decision is to reject the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the reviewers raised several points, primarily focusing on the lack of comparison with prompt compression methods and the unclear attribution of performance improvements. One reviewer highlighted the need for a more detailed comparison with LongLLMLingua to better position GemFilter within existing optimization techniques. Another reviewer pointed out the absence of a thorough ablation study isolating the effects of token selection versus re-computation.\\n\\nThe authors addressed these concerns by providing additional comparisons, including a more in-depth discussion of LongLLMLingua. 
They also conducted an ablation study to clarify the contributions of token selection and re-computation. While the additional comparisons and the ablation study were helpful, they were not considered sufficient by the reviewers. The reviewers still felt the paper lacked the necessary depth and clarity, particularly in terms of theoretical insights and the broader applicability of the method. As a result, despite the authors\\u2019 efforts, the paper did not fully satisfy the reviewers' expectations, which contributed to the final decision to reject.\"}", "{\"title\": \"Thank you\", \"comment\": \"We are glad that our response fully solves all your previous concerns. We sincerely thank the reviewer for constructive feedback and for helping us improve the quality of the draft. On the other hand, we would appreciate knowing what further improvements we could make to earn a positive score from the reviewer. We appreciate your time and suggestions!\"}", "{\"summary\": \"This paper introduces GemFilter for accelerating large language models (LLMs) when processing long-context inputs. The key insight is that LLMs can identify important tokens in their early layers before generating answers, which allows for significant input compression. The proposed GemFilter algorithm uses early layers of an LLM as filters to select and compress input tokens, reducing the context length for subsequent processing. The method achieves a 2.4\\u00d7 speedup and 30% reduction in GPU memory usage compared to state-of-the-art methods like SnapKV/H2O. 
Results show that GemFilter outperforms standard attention and SnapKV in the Needle in a Haystack test, while maintaining comparable performance on the LongBench benchmark.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) This paper focuses on a timely and important problem: Accelerating Long-context LLM inference.\\n\\n2) The method introduced in GemFilter is simple-yet-effective, according to the experimental results provided in the paper (better accuracy than the dense baseline on the NIAH task, comparable performance on LongBench evaluation).\\n\\n3) GemFilter achieves a 2.4\\u00d7 speedup and 30% reduction in GPU memory usage compared to state-of-the-art methods like SnapKV/H2O.\", \"weaknesses\": \"1) GemFilter only uses the attention score of the last token during the context/prefilling stage to determine the important tokens to keep. And then uses the selected tokens to run the full inference process. The process may lose some information in the context. For instance, for multi-hop Q&A tasks/multi-needle in a haystack evaluation, there will be questions focusing on different parts of the given long context. I am wondering if GemFilter will still be able to keep the model's performance on such tasks.\\n\\n2) The selection of the filter layer seems to be results-oriented. And there seems to be no specific method to efficiently identify the filter layer. In Table 2, it seems that there is prominent performance loss on some benchmarks when choosing layer-13 as the filter layer (e.g., NrtvQA, Qasper, MF-en). The robustness of the method still needs to be further demonstrated.\\n\\n3) For Mistral Nemo and Phi 3.5, the filter layer appears in the middle (or even later) of the model, limiting the efficiency gain that could be achieved with this method.\", \"questions\": \"1) The LongBench evaluation scores for the dense attention baseline (in Table 1) seem to be lower than expected. 
For example, the llama-3-8B model only achieves a score of 13.04 on qasper with full KV.\\n\\n2) In Figure 4, GemFilter achieves a much higher score than the dense baseline on the NIAH test. I would appreciate it if the authors could provide some analysis for the improvements over the dense attention.\\n\\n3) How is the running time of decoding stage (gen time) computed in Figure 3/6? What is the specific setting for the evaluation (i.e., generation length)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your reply. We appreciate your valuable time.\\n\\nFor the robustness problem, we do not consider it an issue, as stated in our response to **W2.1**. For the new experiments on the pruning ratio, as the rebuttal period is nearly over, we could not finish them. We sincerely thank you for the valuable suggestions, and we will try to add them to the next version.\"}", "{\"title\": \"Thanks for the response.\", \"comment\": \"I sincerely thank the authors for their reply. I am happy to maintain my original decision of weak acceptance.\"}", "{\"title\": \"Thank you and further reply to new concerns (Part 2)\", \"comment\": \"### Reference\\n\\n[1] Zhang, Z., Sheng, Y., Zhou, T., Chen, T., Zheng, L., Cai, R., ... & Chen, B. (2023). H2o: Heavy-hitter oracle for efficient generative inference of large language models. NeurIPS\\u201923.\\n\\n[2] Li, Y., Huang, Y., Yang, B., Venkitesh, B., Locatelli, A., Ye, H., ... & Chen, D. (2024). Snapkv: Llm knows what you are looking for before generation. NeurIPS\\u201924.\\n\\n[3] Jiang, H., Li, Y., Zhang, C., Wu, Q., Luo, X., Ahn, S., ... & Qiu, L. (2024). Minference 1.0: Accelerating pre-filling for long-context llms via dynamic sparse attention. NeurIPS\\u201924.\\n\\n[4] Liu, D., Chen, M., Lu, B., Jiang, H., Han, Z., Zhang, Q., ... & Qiu, L. (2024). 
Retrievalattention: Accelerating long-context llm inference via vector retrieval. arXiv preprint arXiv:2409.10516.\\n\\n[5] Chen, Z., Sadhukhan, R., Ye, Z., Zhou, Y., Zhang, J., Nolte, N., ... & Chen, B. (2024). Magicpig: Lsh sampling for efficient llm generation. arXiv preprint arXiv:2410.16179.\"}", "{\"title\": \"Thank you and further reply to new concerns\", \"comment\": \"We are glad that the reviewer likes our response! We sincerely thank you for your insightful feedback. We appreciate your time and we would like to answer your follow-up questions.\\n\\n### Q1.1: GemFilter vs GemFilter-One-Run\\n\\nWe would like to highlight that both GemFilter and GemFilter-One-Run use exactly the same index set for the input, where the index set is generated from layer 19 of Mistral Nemo. We updated Line 910 of the revision to highlight this. \\n\\nIn detail, \\n- In the first run, GemFilter stops at layer 19 for long prompt computation and uses the selected index set for the second run of short prompt computation. \\n- GemFilter-One-Run gets the index set at layer 19 but does not stop its long prompt computation, i.e., it finishes the long prompt computation for all layers. Then, it uses the selected index set to evict the KV-Cache and starts iterative generation. There is no second run of short prompt computation. \\n\\nThus, the only difference between them is that their RoPE embedding distances in the KV cache differ, i.e., GemFilter-One-Run may have a large RoPE distance between its tokens. We refer the reviewer to Line 908-917 in the second version revision for more details. \\n \\n### Q1.2: An analysis of the consistency or divergence of tokens selected by different layers, along with a comparison to tokens selected by SnapKV, would be highly appreciated.\\nThank you for your brilliant suggestion! We have updated the corresponding experiment in Figure 11 and Line 964-1019 in the second version revision. 
\\n\\nIn short, in Figure 11, we visualize the top-$k$ indices of each attention layer in GemFilter and SnapKV when using the Mistral Nemo 12B Instruct model and evaluating on Needle in a Haystack. Figure 11 shows that GemFilter focuses only on the needle information and recent information, while SnapKV focuses on a wide range of tokens, which may distract its attention. We can also see that GemFilter and SnapKV have very different selection mechanisms. We refer the reviewer to the revision for more details. \\n\\nWe hope our answer may address your new concerns.\"}", "{\"comment\": \"We extend our gratitude to the reviewer for their meticulous feedback. We offer the following elucidations:\\n\\n### W1: Including a comparison with prompt compression methods, particularly LongLLMLingua, would enhance the analysis.\\nThank you so much for pointing out these suggestions! We missed this important line of work [1,2,3] in the original version. We have added these brilliant works in our revision Line 174-179. We also provide a comparison with LLMLingua on LongBench [2] in Table 1, where our method outperforms LLMLingua [1]. We refer the reviewer to the **Missing baseline** section of the Global Response for more details. \\n\\nWe skip LongLLMLingua for a fair comparison, as LongLLMLingua requires explicitly separating the input context into text information and questions, while other methods do not require that. \\n \\n\\n### W2: An ablation study to isolate the contribution of each factor would be beneficial.\\nThank you so much for your valuable suggestions! We have added the corresponding ablation studies in Section 4.4 and Appendix D.5 in revision Line 483-485 & 903-962. \\n\\nIn Figure 10, we introduce GemFilter-One-Run, which, unlike GemFilter, does not have the second run. In detail, after obtaining the indices in the same way as GemFilter, it directly uses this index set to evict the KV cache for all attention heads and attention layers and then conducts the iterative generation phase. 
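This shared-index eviction, versus SnapKV-style per-head selection, can be sketched on toy per-head scores. The values are hypothetical, and SnapKV's real scoring additionally pools attention over an observation window, which is omitted here:

```python
def evict_shared(head_scores, k):
    """GemFilter-One-Run style: one index set, derived from a single designated
    filter signal (here, head 0 as a stand-in), shared by every head/layer."""
    filter_scores = head_scores[0]
    ranked = sorted(range(len(filter_scores)),
                    key=lambda i: filter_scores[i], reverse=True)
    shared = sorted(ranked[:k])
    return [shared for _ in head_scores]

def evict_per_head(head_scores, k):
    """SnapKV style (simplified): each head keeps its own top-k indices."""
    out = []
    for scores in head_scores:
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        out.append(sorted(ranked[:k]))
    return out

# Two heads that disagree about which of 4 tokens matter.
scores = [[0.9, 0.1, 0.5, 0.2], [0.1, 0.9, 0.2, 0.5]]
shared = evict_shared(scores, 2)      # every head keeps [0, 2]
per_head = evict_per_head(scores, 2)  # head 0 keeps [0, 2], head 1 keeps [1, 3]
```

The toy run makes the structural difference explicit: with a shared index set, heads that disagree with the filter signal still keep the same tokens, whereas per-head selection lets each head keep its own.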
\\n\\n**Difference from GemFilter and SnapKV.** It is different from GemFilter as (1) it requires computing full attention matrices for all layers for the KV cache eviction, so it does not save prompt computation phase complexity; (2) it does not have the second run, so the RoPE positional distance is not updated as in GemFilter, and the distance between the `needle' and the query can be very large. \\n\\nIt is different from SnapKV as all attention heads and attention layers share the same index set, while SnapKV has different index sets for different attention heads and different attention layers. \\n\\n**Results.** As we can see in Figure 10, GemFilter-One-Run has comparable performance with GemFilter, but is worse when the distance between the query and the `needle' is large. This is expected, as the RoPE positional distance is not updated in GemFilter-One-Run. On the other hand, GemFilter-One-Run incurs higher running time complexity and memory consumption than GemFilter, as it requires computing full attention matrices for all layers, while GemFilter only needs to compute the first few layers. \\n\\nWe hope our new experiments address your concerns. \\n\\n### Reference\\n\\n[1] Jiang, H., Wu, Q., Lin, C. Y., Yang, Y., & Qiu, L. (2023). Llmlingua: Compressing prompts for accelerated inference of large language models. EMNLP\\u201923.\\n\\n[2] Jiang, H., Wu, Q., Luo, X., Li, D., Lin, C. Y., Yang, Y., & Qiu, L. (2023). Longllmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. ACL\\u201924.\\n\\n[3] Pan, Z., Wu, Q., Jiang, H., Xia, M., Luo, X., Zhang, J., ... & Zhang, D. (2024). Llmlingua-2: Data distillation for efficient and faithful task-agnostic prompt compression. 
ACL\\u201924.\"}", "{\"title\": \"Thank you and further reply to new concerns\", \"comment\": \"We have plotted similar figures of index selection for the LLaMA 3.1 and Phi 3.5 models in the third version of the revision, in Line 1049-1179.\\n\\nAs we can see, for the LLaMA 3.1 and Phi 3.5 models, GemFilter has a wide range for layer index selection. However, we agree that GemFilter may be sensitive to the index selection on Mistral Nemo. All these results are also consistent with Figure 5, where Mistral Nemo has fewer 0 points than the other two models in Figure 5.\\n\\nOn the other hand, in Needle in a Haystack and LongBench, we always use layer 19 for Mistral Nemo and achieve good performance across all tasks. This filter layer is quite robust across all tasks in Table 1. \\n\\nWe are also interested in the mechanism of the filter layer. We will continue to analyze and understand it in our follow-up works. Thank you again for your valuable suggestions!\"}", "{\"title\": \"Thank you and reply to new question\", \"comment\": \"Thank you so much for your insightful question!\\n\\nIn Table 1, we can see that GemFilter has good performance on both Single-Document QA and Multi-Document QA when using Mistral Nemo or Phi 3.5, while it shows drops in Single-Document QA compared to MInference. \\n\\nNote that MInference uses three types of attention sparsity, i.e., $\\\\land$-shape, vertical-slash, and block-sparse attention heads. Our conjecture is that, for Single-Document QA, the block-sparse attention head may have better performance, particularly for LLaMA, as the \\u201crelated information\\u201d may be concentrated in continuous paragraphs. We will study the different attention pattern mechanisms in our follow-up works. Thanks for your valuable comments.\\n\\n[1] Jiang, H., Li, Y., Zhang, C., Wu, Q., Luo, X., Ahn, S., ... & Qiu, L. (2024). Minference 1.0: Accelerating pre-filling for long-context llms via dynamic sparse attention. 
NeurIPS\\u201924.\"}", "{\"title\": \"Further update in revision\", \"comment\": [\"Per reviewers bddG's and Np9u's valuable suggestions, we have updated the revision (second version) with two new experiments. We summarize the major updates (in blue color) we made.\", \"Line 910: We clarify that GemFilter-One-Run and GemFilter share exactly the same index set.\", \"Line 964-1019: We show the index selection difference between GemFilter and SnapKV.\", \"Line 1020-1047: We report the performance of different methods on the LongBench QA task using LLaMA 3.1 8B Instruct and its official LLaMA Chat template.\", \"We thank all reviewers for their constructive feedback and for helping us improve the draft.\"]}", "{\"title\": \"Global Response\", \"comment\": \"We gratefully thank all reviewers for their valuable and constructive feedback.\\n\\nWe appreciate that all reviewers agree that our paper is interesting and focuses on a timely and critical problem. We are encouraged that reviewers bddG, rvyU, and Np9u recognize that our method is simple yet effective and has promising results with good accuracy and memory/running time savings compared to state-of-the-art methods. We are glad that reviewers bddG and FAs6 find our draft clear and easy to follow. Reviewers bddG and FAs6 also value the novelty of our methods, and reviewer FAs6 highlights that the flexible choice of filter layer in our method is quite useful in practice, avoiding extensive hyperparameter tuning.\\n\\nWe have uploaded a **revision** of our draft. We summarize all the major updates (in brown color) we made. All line numbers in the rebuttal correspond to the revised version.\\n- Line 157-160: We add a brief introduction of MInference [1], an important baseline. \\n- Line 174-179: We add a brief introduction of LLMLingua [2], an important baseline.\\n- Line 378-407 & 419-424 & 430-431: We update Table 1 with the above two baselines and add related discussion. \\n- Line 476-485 & 860-962: We add two more ablation studies. 
\\n- Line 121-122 & 489-490: We add the setting for the evaluation of running time and memory consumption.\\n\\nNext, we address some questions that reviewers commonly asked. \\n\\n### Missing baselines\\nPer the reviewers\\u2019 request, we add two more baselines, MInference [1] and LLMLingua [2], in Table 1 and the corresponding discussion in the revision Line 378-407 & 419-424 & 430-431. We can see that MInference has comparable performance with SnapKV, while it requires an offline step to determine the best attention pattern and cannot save running time in the prompt computation phase. We can also see that although LLMLingua achieves a good compression rate, its performance may not be satisfactory. Thus, GemFilter outperforms LLMLingua and performs comparably to MInference. Also, GemFilter is faster than MInference in the prompt computation phase. \\n\\nWe directly follow the official GitHub implementations of MInference and LLMLingua. We only evaluate MInference on LLaMA 3.1 8B Instruct, as Mistral Nemo 12B Instruct and Phi 3.5 Mini 3.8B Instruct are not currently supported. \\n\\n### Reference\\n\\n[1] Jiang, H., Li, Y., Zhang, C., Wu, Q., Luo, X., Ahn, S., ... & Qiu, L. (2024). Minference 1.0: Accelerating pre-filling for long-context llms via dynamic sparse attention. NeurIPS\\u201924.\\n\\n[2] Jiang, H., Wu, Q., Lin, C. Y., Yang, Y., & Qiu, L. (2023). Llmlingua: Compressing prompts for accelerated inference of large language models. EMNLP\\u201923.\"}", "{\"comment\": \"I appreciate the authors' efforts in addressing the concerns raised in the reviews, particularly the clarification about GemFilter-One-Run and the addition of Figure 11, which is indeed informative and helpful for understanding the methodology. One phenomenon that I notice in Figure 11 is that the right window of $r$ is much sharper than expected (i.e., only 2 out of 40). 
This means that the selection of $r$ is more sensitive than claimed in Section 4.3, which states that performance remains robust over a much wider range. As pointed out by reviewers Np9u and rvyU, I also suspect that the selection of $r$ may be input-dependent. This is evident from the results in Table 2, which show that no layer is consistently superior to the others across all tasks.\"}", "{\"title\": \"Thank you, and more clarification for W1\", \"comment\": \"We are glad that our response addresses some of the reviewer's concerns. We sincerely thank you for your time and effort. We would like to clarify the new concerns.\\n\\nNote that when the context is long, the running time bottleneck is the full attention matrix computation and the memory bottleneck is the full KV cache size. \\n\\nFor multi-round dialogues, where the round number is $r$ and the context length is $n$:\\n- The dense baseline needs to run 32-layer full attention matrix computation and save the 32-layer full KV cache, which is **independent of $r$**. \\n- GemFilter needs to run 13-layer full attention matrix computation and save the 13-layer full KV cache, which is **independent of $r$** as well. Note that we do not need to re-prefill the first 13 layers, as we have already saved the results in the KV cache, the same as the dense baseline. Also, note that the last row of the attention matrix can be directly computed from the KV cache and the new query. \\n\\nBelow, we give a detailed example. Assume the length-$n$ context $T$ is very long and $Q_1$ is the query. We can use GemFilter to get the answer $A_1$, where the first 13 layers of the KV cache, KV($[T,Q_1]$), are saved in GPU memory. Then, we compute the KV cache KV([$T,Q_1, A_1$]), which is efficient given KV($[T,Q_1]$) and only takes $O(n)$ time, the same as the dense baseline.\\n- Note that the dense baseline needs to take $O(n)$ during generation decoding, while GemFilter takes $O(n)$ after $A_1$ is generated. 
Thus, this $O(n)$ time can be run offline while the user reads $A_1$ and types $Q_2$. The generation decoding complexity of GemFilter is $O(k)$, e.g., $k=1024$. \\nThen, after the user types $Q_2$, we can use the same strategy to get KV([$T, Q_1, A_1, Q_2$]) based on KV([$T,Q_1, A_1$]) efficiently, e.g., in $O(n)$. Then, GemFilter can generate the answer $A_2$. \\n\\nNote that during the whole pipeline above, the full attention matrix computation of the first 13 layers happens only once. Thus, compared with the dense baseline, the time and memory complexity of GemFilter is only 13/32 for prefilling and $k/n$ for generation decoding, which is independent of the round number $r$. \\n\\nWe hope that our clarification can relieve your concerns. We are willing to discuss more if the reviewer has further follow-up questions.\"}" ] }
9htTvHkUhh
Beyond Sequence: Impact of Geometric Context for RNA Property Prediction
[ "Junjie Xu", "Artem Moskalev", "Tommaso Mansi", "Mangal Prakash", "Rui Liao" ]
Accurate prediction of RNA properties, such as stability and interactions, is crucial for advancing our understanding of biological processes and developing RNA-based therapeutics. RNA structures can be represented as 1D sequences, 2D topological graphs, or 3D all-atom models, each offering different insights into its function. Existing works predominantly focus on 1D sequence-based models, which overlook the geometric context provided by 2D and 3D geometries. This study presents the first systematic evaluation of incorporating explicit 2D and 3D geometric information into RNA property prediction, considering not only performance but also real-world challenges such as limited data availability, partial labeling, sequencing noise, and computational efficiency. To this end, we introduce a newly curated set of RNA datasets with enhanced 2D and 3D structural annotations, providing a resource for model evaluation on RNA data. Our findings reveal that models with explicit geometry encoding generally outperform sequence-based models, with an average prediction RMSE reduction of around 12% across all various RNA tasks and excelling in low-data and partial labeling regimes, underscoring the value of explicitly incorporating geometric context. On the other hand, geometry-unaware sequence-based models are more robust under sequencing noise but often require around 2-5x training data to match the performance of geometry-aware models. Our study offers further insights into the trade-offs between different RNA representations in practical applications and addresses a significant gap in evaluating deep learning models for RNA tasks.
[ "Geometric deep learning", "Graph Neural Networks", "GNNs", "RNA property prediction", "Datasets and Benchmarks" ]
Accept (Poster)
https://openreview.net/pdf?id=9htTvHkUhh
https://openreview.net/forum?id=9htTvHkUhh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vPD9EMICBV", "uaOV4W4Eo1", "segZCqWleR", "qJ0lAd6GJJ", "ojJ7ft6Dd2", "mN46aJaWpY", "la83CjRcgB", "iM8UQKCIP0", "gd9MQUtM9G", "ep3GJN0Crm", "e7WgtNO8E7", "aai5CuUwxw", "WxHsNTsx8w", "P1qR4mYHxb", "OuG3xbULYv", "N8lOwNzNHT", "LbTUkEXypC", "HEeb7iBwia", "GxU3OQIWxL", "Ghil5r5gpT", "EZtCwaFVan", "DKeGfxKU4h", "B7m7ry47XJ", "B6cR7TkZ6a", "ARXehKiG67", "9bKdTKQMMS", "7t8w6EMsGV", "7Ls6HhhfLs", "7DTMvXPZxD", "6vDfpeydNK", "4g1MALTUoi", "1Y186a9EBE" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730650234771, 1733208441253, 1732279396823, 1732206243056, 1733192586139, 1732255516540, 1732432509268, 1737523393635, 1732206840130, 1733213813979, 1733196491486, 1733306831590, 1732206514176, 1732533992075, 1733079143390, 1733271765952, 1733192657228, 1732531626055, 1732597184077, 1732975112375, 1732531712035, 1733207257270, 1730280294779, 1732209080611, 1732973967964, 1732206705194, 1732207898816, 1729826043094, 1734485448977, 1732530306548, 1733192617200, 1732207074719 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission395/Reviewer_JSSw" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_HBUE" ], [ 
"ICLR.cc/2025/Conference/Submission395/Reviewer_HBUE" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_JSSw" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_HBUE" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_HBUE" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_k6pU" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_JSSw" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Reviewer_HBUE" ], [ "ICLR.cc/2025/Conference/Submission395/Area_Chair_pXt6" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ], [ "ICLR.cc/2025/Conference/Submission395/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduce a newly curated set of RNA datasets with enhanced 2D and 3D structural annotations, providing a resource for model evaluation on RNA data. The paper reveals that models with explicit geometry encoding generally outperform sequence-based models, and geometry-unaware sequence-based models are more robust under sequencing noise but often require around 2 \\u2212 5\\u00d7 training data to match the performance of geometry-aware models. 
The authors conducted thorough and detailed experiments to support their proposed arguments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors investigated the enhancement of RNA property prediction through the utilization of both 2D and 3D data, and explored the performance degradation of corresponding models under various influencing factors, including noisy data and partial label.\\n\\n2. The authors collected a substantial amount of RNA sequence data across a wide range of nucleotide count intervals, ensuring comprehensive coverage.\\n\\n3. The authors conducted extensive experiments using various models on the dataset to support their proposed arguments.\\n\\nOriginality, quality, clarity, and significance:\\n\\nThe paper is original and provides a comprehensive exploration of the impact of different types of input data on RNA property prediction capabilities for the first time. The writing is clear, and the overall quality is marginally acceptable. This research makes a contribution to the exploration of RNA property prediction.\", \"weaknesses\": \"1. The authors utilized a limited number of 3D models for geometric structure modeling, most of which are relatively early models, and neither of the two models (EGNN and SchNet) is specifically designed for 3D RNA structure modeling. Therefore, I believe their performance does not fully reflect the potential improvements offered by geometric information across various datasets. The authors are supposed to validate models specifically designed for RNA 3D structure modeling, such as ARES [1] and PaxNet [2], as well as some classic models in the protein domain (like GVP [3], GearNet [4], and MEAN [5]).\\n\\n2. The pooling strategy is difficult to classify as an innovative contribution from the authors, as it is relatively simple. Thus, the models employed in the paper are essentially existing models, and the authors have not proposed their own model. 
Since 3D data encompasses more information than 1D and 2D data, the authors should reflect on how to better utilize 3D information to enhance model performance.\\n\\n3. As noted in the paper, \\\"all-atom SchNET and EGNN rely on a limited local neighborhood of adjacent atoms, limiting their receptive fields and preventing them from capturing long-range dependencies\\\", full-atom modeling indeed incurs a significant computational burden. The authors could consider adopting the approach from methods like MEAN [5], treating each nucleotide as a node in a graph, which would allow for the expansion of local neighborhood relationships. Alternatively, the strategy used in PaxNet [2] could be employed to model the long-range and short-range interactions.\\n\\n4. The analysis in section 4.2 lacks insights. Some conclusions are quite obvious, such as the enhancement of performance due to increased training data, which is even more pronounced in transformer-like structures. Moreover, the inferior performance of 3D models compared to 2D models can be attributed to the significantly lower prediction accuracy of existing methods for 3D structures compared to 2D structures.\\n\\n5. The analysis in section 4.3 presents similar issues. Since the authors introduced sequencing noise, this error accumulates greater noise in both 2D and 3D data as a result of using 2D and 3D predictive tools. Therefore, it is expected that the transformer1D, which directly models 1D sequence data, would exhibit stronger performance.\\n\\n[1] Geometric deep learning of RNA structure, Science 2021.\\n\\n[2] Physics-aware Graph Neural Network for Accurate RNA 3D Structure Prediction, NIPS 2022 workshop.\\n\\n[3] Learning from Protein Structure with Geometric Vector Perceptrons, ICLR 2021.\\n\\n[4] Protein representation learning by geometric structure pretraining, ICLR 2023.\\n\\n[5] Conditional Antibody Design as 3D Equivariant Graph Translation, ICLR 2023.\", \"questions\": \"1. 
Predicting the 3D structure of models directly from 1D sequence data can indeed introduce noise, affecting model performance. Have the authors considered finding a 3D RNA dataset, such as RNAsolo [1]?\\n\\n2. This paper primarily focuses on predicting RNA properties and utilizes the MCRMSE metric. I would like to know what specific properties are being referred to, and what are the units for the predicted values?\\n\\n[1] Rnasolo: a repository of cleaned pdb-derived rna 3d structures, Bioinformatics 2022.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"The copyright of the collected datasets.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"About Ethics Concerns and Request To Reconsider Score\", \"comment\": \"In this paper, we collect RNA sequences from publicly available datasets and generate the 2D and 3D structures with publicly available tools. We cite all the sources in our paper; please refer to Sections 2.1 and 2.2. Therefore, we do not see any potential ethical issues with the data used in this paper.\\n\\nWe have also mentioned this in the \\\"Ethics Statement\\\" part of our paper. If all your concerns are addressed, could you please consider raising your score, as only a few hours of the discussion period remain?\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for your comments. Below are our responses:\\n\\n**RD1. Original improvements to the baseline models.** \\nWe thank the reviewer for acknowledging the contribution of our proposed datasets and the utmost importance of benchmarking for RNA property prediction. However, we must disagree with the reviewer's claim that \\\"datasets and benchmarks\\\" papers require contributions to improving the existing baselines. 
We kindly refer the reviewer to the ICLR \\\"call for papers\\\" (https://iclr.cc/Conferences/2025/CallForPapers) where \\\"datasets and benchmarks\\\" unambiguously stands as a separate category. To emphasize this further, please also see highly impactful and well-cited recent papers in the biology domain which were accepted at top conferences (including ICLR) under the \\\"datasets and benchmarks\\\" subject area [1, 2, 3] and focused purely on new datasets and solid benchmarks, an essential, first-and-foremost requirement for novel methodology development and evaluation in future research. \\n\\n\\n\\n**RD2. About FastEGNN settings.** \\nFollowing Table 1 of the original FastEGNN paper [4], we fixed the number of virtual nodes to 3. Edges were not removed; instead, we used a threshold to infer edge existence across all baselines. This threshold was treated as a tunable hyperparameter. \\n\\nWe agree that improving FastEGNN for RNA is indeed a promising direction for future work. However, we would like to point out that this is outside the scope of this work for the following reasons. \\n\\n1) This paper aims to establish high-quality datasets and benchmarks, which, as the reviewer notes, are very rare for the RNA domain. We believe that establishing the benchmark and providing experimental baselines is the critical prerequisite before developing new models or modifying existing ones, and thus is a strong contribution in itself. \\n\\n2) The original FastEGNN paper limits the use of virtual nodes to a maximum of 10, with these virtual nodes fully connected to each other. Assigning a virtual node to each nucleotide would introduce $n^2$ additional edges for an RNA sequence with $n$ nucleotides, significantly increasing the model\\u2019s computational requirements. Significantly modifying FastEGNN to overcome these limitations is itself a non-trivial task and would warrant a separate paper. \\n\\n\\n**RD3. 
More baselines.** \\nHigh-degree steerable methods are indeed an important category within 3D equivariant models. However, they typically rely on spherical harmonics, which are computationally complex and resource-intensive. While we considered these steerable methods, they ran out of memory due to the length of RNA sequences. Specifically, we used the TFN code provided in the link you mentioned, but **it encountered out-of-memory (OOM) errors**. Similarly, MACE also faced OOM issues. This is also an important insight enabled by our paper: spherical models such as TFN and MACE, which show promise in small-molecule modeling domains, are not applicable out of the box to larger molecules such as RNA. This insight will benefit practitioners in real-world applications, and we are happy to include it in our paper if you suggest. \\n\\nAlso, we would like to point out that in prior work [5] (Figure 1, which is also mentioned in the GitHub link you provided), GVP-GNN is the most complex model that still allows modeling larger molecules such as RNA sequences, and we have already included it in our revised version. \\n\\n&nbsp; \\n\\nPlease let us know if you have other questions. We hope the reviewer will consider a more positive evaluation of our work. \\n\\n&nbsp; \\n\\nReferences. \\n[1] FLIP: Benchmark tasks in fitness landscape inference for proteins, NeurIPS 2021. \\n[2] BEND: Benchmarking DNA Language Models on Biologically Meaningful Tasks, ICLR 2024. \\n[3] GeneDisco: A Benchmark for Experimental Design in Drug Discovery, ICLR 2022. \\n[4] Improving Equivariant Graph Neural Networks on Large Geometric Graphs via Virtual Nodes Learning \\n[5] On the Expressive Power of Geometric Graph Neural Networks.\"}", "{\"title\": \"Author Responses (1/2)\", \"comment\": \"We thank the reviewer for their feedback and for appreciating the originality, thoroughness, and comprehensiveness of our experiments that highlight the value of our newly annotated RNA datasets. 
We also appreciate the reviewer's observation that our work presents a first study of its kind for RNA property prediction in a range of real-world scenarios.\\n\\nNow we address each weakness and question raised by the reviewer. \\n\\n**RW1. Experiments with more 3D models:** \\nFollowing the reviewer's suggestion, we have now added a comparison of four additional 3D models (GVP, DimeNet, and recent FAENet and FastEGNN) in Table 1 and Figs. 2, 3, 5, 6 in the main text (highlighted in blue). However, we still find that all these models show similar performance on RNA property prediction tasks, with 3D models failing to outperform 2D models. We further choose FastEGNN for all subsequent experiments and observe a similar trend to that reported earlier with EGNN and SchNet. \\n\\n\\n| Model | COVID | Ribonanza | Tc-Ribo | Fungal |\\n|---------------------------|---------------|-----------------|----------------|----------------|\\n| **1D model** | | | | |\\n| Transformer1D | 0.361\\u00b10.017 | 0.705\\u00b10.015 | 0.705\\u00b10.019 | 1.417\\u00b10.005 |\\n| **2D model** | | | | |\\n| Transformer1D2D | 0.305\\u00b10.012 | 0.514\\u00b10.004 | 0.633\\u00b10.001 | OOM |\\n| GCN | 0.359\\u00b10.009 | 0.509\\u00b10.004 | 0.640\\u00b10.005 | 1.192\\u00b10.077 |\\n| GAT | 0.315\\u00b10.006 | 0.534\\u00b10.006 | 0.603\\u00b10.004 | 1.112\\u00b10.035 |\\n| ChebNet | 0.279\\u00b10.015 | 0.499\\u00b10.005 | 0.599\\u00b10.001 | 1.018\\u00b10.023 |\\n| Graph Transformer | 0.318\\u00b10.008 | 0.500\\u00b10.005 | 0.604\\u00b10.001 | 1.317\\u00b10.002 |\\n| GraphGPS | 0.332\\u00b10.013 | 0.523\\u00b10.003 | 0.610\\u00b10.012 | 1.025\\u00b10.081 |\\n| **3D model (w/o pooling)** | | | | |\\n| EGNN (w/o pooling) | 0.480\\u00b10.025 | 0.808\\u00b10.023 | 0.725\\u00b10.002 | OOM |\\n| SchNet (w/o pooling) | 0.499\\u00b10.030 | 0.843\\u00b10.004 | 0.704\\u00b10.001 | OOM |\\n| FAENet (w/o pooling) | 0.486\\u00b10.010 | 0.834\\u00b10.006 | 0.703\\u00b10.004 | OOM |\\n| DimeNet (w/o pooling) | 
0.467\\u00b10.010 | 0.797\\u00b10.012 | 0.712\\u00b10.004 | OOM |\\n| GVP (w/o pooling) | 0.467\\u00b10.010 | 0.797\\u00b10.012 | 0.744\\u00b10.004 | OOM |\\n| FastEGNN (w/o pooling) | 0.477\\u00b10.005 | 0.816\\u00b10.014 | 0.753\\u00b10.001 | OOM |\\n| **3D model (with nuc. pooling)** | | | | |\\n| EGNN (nuc. pooling) | 0.364\\u00b10.003 | 0.619\\u00b10.007 | 0.663\\u00b10.010 | OOM |\\n| SchNet (nuc. pooling) | 0.390\\u00b10.006 | 0.685\\u00b10.006 | 0.655\\u00b10.038 | OOM |\\n| FastEGNN (nuc. pooling) | 0.444\\u00b10.003 | 0.753\\u00b10.015 | 0.710\\u00b10.011 | OOM |\\n\\n\\n\\nRegarding 3D models such as ARES and PaxNet, we note that they are meant for RNA 3D structure prediction and ranking and do not support RNA property prediction out of the box. Similarly, methods such as MEAN are highly specialized by design for modeling antibody domains, and adapting them for RNA property prediction would be a non-trivial contribution in itself. \\n\\n**RW2 \\\\& RW3: Pooling strategy and dealing with long-range dependencies:** \\nWe appreciate the reviewer's observation about the simplicity of the proposed framework. We would like to emphasize that our paper sits under **\\u201cdatasets and benchmarks\\u201d**, a subject area highlighted in the ICLR call for papers. Our main contributions are: 1) introducing first-of-its-kind RNA datasets with all 1D, 2D and 3D structures and property labels; 2) providing a modular unified testing environment for benchmarking 1D, 2D and 3D property prediction models; 3) studying existing 1D, 2D and 3D models to assess the impact of geometric information in a range of real-world scenarios and establishing baselines for future research in this direction. No such study exists to date despite the importance of RNA as a therapeutic modality. \\n\\nWith this, substantially modifying existing or developing novel 3D methods is outside the scope of this work. 
At the same time, we agree with the reviewer that better utilization of 3D information is important, and our work will serve to foster future research in this direction.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWith several hours remaining until the rebuttal deadline, we hope we have successfully addressed your concerns and questions. If you find our responses satisfactory, we kindly ask you to consider raising your score. \\n\\nWe truly appreciate your time, effort, and valuable contributions to improving our work. \\n\\nBest regards, \\nThe Authors.\"}", "{\"comment\": \"Thank you very much for the authors' replies. I still have some questions and hope to get their explanations.\\n> **D1. Need to add original improvements to the model.**\\n\\nThe facts stated in the authors' response are indeed the core issues in this field. High-quality RNA datasets are very rare, so the benchmark compilation contribution of this article is indeed very important.\\n\\nHowever, I and several other reviewers are very concerned about the fact that ICLR, as a conference in the field of machine learning, may not contribute enough to just propose benchmarks and test existing baselines (although this article belongs to the field of \\\"datasets and benchmarks\\\"). Although this article compares a large number of baselines, it lacks the authors' own innovation in model architecture. Generally, AI for Science benchmarks are more or less designed for special scenarios (e.g. GVP-GNN and GNS). Can the authors take advantage of the particularity of RNA to add some more. For example, introduce some small sample learning techniques (such as active learning, etc.) based on the lack of data? Or can they improve FastEGNN based on the sequence structure of RNA?\\n\\n> **D2. About FastEGNN settings.**\\n\\nCan the authors list the specific parameters of FastEGNN? For example, how many virtual nodes are used? 
Is there any additional edge deletion?\\n\\nAs far as I know, it seems that the virtual nodes of FastEGNN are introduced without prior knowledge, and the point set distribution of the experimental data set in the original paper does not show obvious serialization characteristics like RNA. Can the poor performance of FastEGNN on RNA data be considered to be due to the contribution of virtual nodes to irrelevant real nodes? Can the authors further analyze such phenomena?\\n\\nIn response to this problem, can the authors modify the initialization of virtual nodes (for example, set one for each RNA substructure) and the way virtual nodes are connected to real nodes (delete edges with a long distance or introduce radial basis functions) as their customized model on the benchmark? I think that designing a methodology that uses RNA prior knowledge to modify the general model into a customized model can be a good contribution to match the benchmark to make the paper more acceptable.\\n\\n> **D3. More baselines.**\\n\\nThe method of using high-degree steerable features is also very important for 3D geometric graph networks. As a benchmark, I think it would be more beneficial to the quality of the article to choose one from TFN, NequIP, MACE, EquiformerV2 for testing. I recommend using TFN and using the code in https://github.com/chaitjo/geometric-gnn-dojo/blob/main/models/tfn.py, which is easier to modify.\\nIn addition, it may not be particularly easy to migrate special models such as MEAN and dyMEAN. I can also understand that the authors do not have enough time to design them specifically, so I will not ask for them.\"}", "{\"comment\": \"I have carefully read the author's feedback and think that the author's explanation makes sense. I have combined the opinions of both parties and put forward the following suggestions for improvement. If the author can do this, I will consider recommending that this article be accepted.\\n\\n> **D4. 
About models using high-degree steerable features**\\n\\nI still think it is necessary to use these models based on high-degree steerable features as baselines. After all, they are also an important part of equivariant models. Authors can avoid OOM problems by adjusting batch_size or reducing channels. Alternatively, authors can consider using the recently emerged SO3krates [a] or HEGNN [b] to introduce high-degree steerable features while avoiding the complexity of tensor products. I understand that the remaining time in the current discussion stage may not be enough to complete the experiment, so in the discussion stage, I only require the authors briefly **discuss the possible benefits of high-degree steerable features** in the article. Presumably, there is still enough time from acceptance to submission of camera ready for the authors to supplement the experimental results (and also the results of models like SaVeNet, LEFTNet and MEAN).\\n\\n[a] Frank J T, Unke O T, M\\u00fcller K R, et al. A Euclidean transformer for fast and stable machine learned force fields[J]. Nature Communications, 2024, 15(1): 6539.\\n\\n[b] Cen J, Li A, Lin N, et al. Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?[J]. arXiv preprint arXiv:2410.11443, 2024.\\n\\n> **D5. Discussion on the inapplicability of FastEGNN**\\n\\nI think the authors' response makes a lot of sense, and I hope to include this **discussion** in the manuscript. I think FastEGNN is an excellent model, although its CoM virtual node initialization may not be suitable for this problem that clearly has a sequence prior. It would be great if the authors could combine this to explain why FastEGNN performs poorly and give suggestions for improvement. I believe this will not only improve the value of this article, but also be a respect for the original author of the FastEGNN model.\\n\\n> **D6. 
Discussion on considering noise in model design**\\n\\nSince most studies in this field do not explicitly address noise when designing model architectures, could the authors **discuss how to effectively account for noise during model design** and what methods might be employed to minimize its impact?\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author Responses (2/2)\", \"comment\": \"**RQ1: Results with RNA language models:**\\nGood suggestion! We have now **benchmarked 2 SOTA RNA language models**, **RNA-FM** [1] and **SpliceBERT** [2], and added the results in Table 1 in the main text (highlighted in blue). We find that for 2 out of 4 datasets (Covid and Ribonanza-2k), RNA foundation models perform worse than the supervised transformer baseline, whereas for the other two datasets (Tc-Riboswitches and Fungal), they achieve similar performance (results reported in MCRMSE, lower is better). This is consistent with recent papers in multiple biology domains demonstrating that generalized foundation models are yet to surpass specialized supervised baselines (see [3,4,5]). Thus, all our presented conclusions hold. \\n\\n| Model | COVID | Ribonanza | Tc-Ribo | Fungal |\\n|---------------------------|---------------|-----------------|----------------|----------------|\\n| **1D model** | | | | |\\n| Transformer1D | ***0.361\\u00b10.017*** | ***0.705\\u00b10.015*** | 0.705\\u00b10.019 | ***1.417\\u00b10.005*** |\\n| RNA-FM | 0.591\\u00b10.081 | 0.909\\u00b10.144 | ***0.693\\u00b10.001*** | 1.420\\u00b10.028 |\\n| SpliceBERT | 0.588\\u00b10.077 | 1.022\\u00b10.144 | 0.708\\u00b10.003 | 1.435\\u00b10.059 |\\n\\n\\n&nbsp;\\n\\n\\n**RQ2: Criteria for selecting secondary structure prediction tools:** \\nWe chose EternaFold as the tool for secondary structure prediction because of its reported SOTA performance in recent works ([6,7,8]), as explained in lines 143-146 of the main text. 
\\n\\n**RQ3: Explain the motivation behind the five task settings:** \\nWe appreciate the reviewer's feedback and recognize the importance of clearly explaining the motivation for the selected tasks. These tasks were specifically chosen to simulate real-world challenges encountered in RNA property prediction, focusing on diverse practically relevant aspects of model evaluation. The tasks aim to address critical questions such as: \\n**Task 1:** how effectively models leverage structural information of RNA which so far has not been explored in literature in the context of RNA property prediction; \\n**Tasks 2 and 3:** how well they perform with limited training data or partial labels, a common constraint in experimental settings due to the costs of obtaining large experimental databases; \\n**Tasks 4 and 5:** the robustness of the models under sequencing noise, which mirrors variations and errors produced by RNA sequencing platforms and methods. \\nIn Sec 3 (pages 4 and 5), we have cited relevant works which highlight exactly these scenarios for RNA data. We have now also added **detailed descriptions and motivations for these tasks in the Appendix D (pages 19-20 and highlighted in blue)** and referred to it in main text (lines 248-249). \\n\\n&nbsp;\\n\\nPlease let us know if there are any further questions. Thanks a lot!\\n\\n&nbsp;\", \"references\": \"[1] Interpretable RNA Foundation Model from Unannotated Data for Highly Accurate RNA Structure and Function Predictions, Chen et al., arXiv 2022. \\n[2] Self-supervised learning on millions of primary RNA sequences from 72 vertebrates improves sequence-based RNA splicing prediction, Chen et al., Briefings in Bioinformatics, 2023. 
\\n[3] Specialized Foundation Models Struggle to Beat Supervised Baselines, Xu, Gupta et al., FM4Science@NeurIPS 2024 \\n[4] Convolutions are competitive with transformers for protein sequence pretraining, Yang et al., cell Systems, 2024 \\n[5] Assessing the limits of zero-shot foundation models in single-cell biology, Kedzierska et al., bioRxiv, 2023 \\n[6] RNA secondary structure packages evaluated and improved by high-throughput experiments, Wayment-Steele et al., Nature Methods, 2022. \\n[7] Deep learning models for predicting RNA degradation via dual crowdsourcing, Wayment-Steele et al., Nature Machine Intelligence, 2022 \\n[8] Ribonanza: deep learning of RNA structure through dual crowdsourcing, He et al., bioRxiv, 2024\"}", "{\"comment\": \"Thank you for addressing the ethics concerns. Most of my concerns have been addressed, and I raise the score to 6.\"}", "{\"comment\": \"I carefully compared this article with other benchmark articles according to the requirements of the benchmark and reviewed it again. In fact, the amount of work in this article is sufficient, and as far as I know, it is the first to study the properties of RNA in a real-world scenario. In particular, large-scale long-chain RNA datasets are rare, and the author's integrated representation modalities and combed research directions are valuable. By reading 'Author Responses to Reviewer JSSw', I was further convinced of the value of this article, so I upgraded the rating to 'Clear Accepted'.\"}", "{\"title\": \"General summary of revisions and rebuttal responses\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nWe sincerely thank all reviewers for their valuable feedback and thoughtful comments on our submission. We are pleased to see recognition of the key strengths of our work and would like to summarize the major advantages of our work highlighted by the reviewers and discussed in our rebuttal responses:\\n1. 
***Introduction of New Datasets:*** A key contribution of our work is the creation and curation of first-of-its-kind diverse datasets of RNA sequences (1D) annotated with 2D and 3D structures for various property prediction tasks. These datasets cover multiple species, sequence lengths, and application areas, and provide an essential foundation for evaluating future models in an underexplored field of RNA modeling with immense therapeutic potential. No prior work offers datasets with such comprehensive structural annotations (1D sequence, 2D secondary structure, 3D all-atom) and property labels.\\n\\n2. ***Extensive Evaluation of Structural Information for RNA Property Prediction:*** We establish a unified modular testing environment encompassing 15 representative and state-of-the-art models for 1D, 2D, and 3D RNA property prediction. During the rebuttal, we expanded our analysis by including additional state-of-the-art 3D baselines and 1D RNA language models, further validating our conclusions and providing a useful resource for the community to benchmark and develop future RNA modeling approaches. This framework not only facilitates direct comparison of diverse methods but also highlights key performance trade-offs across different structural representations and modeling scenarios.\\n\\n3. ***Addressing Important Real-World RNA Modeling Challenges:*** Beyond prediction accuracy, we tackled real-world challenges practitioners routinely face, such as noise robustness, OOD generalization, data/label efficiency, computational efficiency, and model scalability. 
Key insights include: (1) 2D spectral methods outperform 1D and 3D in low-to-moderate noise, (2) 3D models excel with limited data, even under structural noise, when scaled up using RNA-specific biological priors, (3) 1D methods require 2x-5x more data to match 2D/3D but perform best in high noise, while 2D models outperform other models at low noise levels, and (4) adding soft structural priors to 1D models (e.g., Transformer1D2D) outperforms more complex models across noise regimes. These findings address previously unexplored scenarios and reflect practical challenges in working with diverse RNA sequences, offering actionable insights into the interplay of geometric features and advancing the field.\\n\\nWe appreciate the reviewers\\u2019 engagement and constructive feedback, which allowed us to refine our work further. We believe these strengths, combined with the revisions and additional experiments provided, demonstrate the significance and impact of our contributions.\\n\\nThank you for your time and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Author Responses (2/2)\", \"comment\": \"**RW4: Conclusions from data efficiency experiment in Section 4.2:**\\nThe experiments in Section 4.2, investigating models' data efficiency, not only reveal that performance improves with increasing training data, but also uncover novel insights: 2D spectral models excel in low-data and partial-label regimes, and 3D models outperform 1D models in the limited-data regime despite the structural noise. To the best of our knowledge, these insights are novel. 
We consider these insights significant in real-world scenarios where large fully-labeled datasets are unavailable.\\n\\n**RW4: lower prediction accuracy of existing methods for 3D structures:** \\nTo the best of our knowledge, no datasets exist containing triplets of 1D RNA sequences, experimentally determined 2D/3D structures, and corresponding property labels, as discussed in Appendix A. This is due to the high cost and technical difficulty of experimentally determining RNA structures and measuring their properties. Consequently, practitioners must rely on predicted structures or fall back to 1D sequence models. Thus, the structural inaccuracies are often inherent and inevitable in practice. At the same time, as the reviewer rightly noted, 3D data contains more information than 1D and 2D data. However, before our work, it was unclear whether this extra, potentially noisy information would add value for property prediction compared to 1D or 2D data. No other prior work has investigated this trade-off between richer information content and potentially more uncertain 3D structures.\\n\\n\\n**RW5: Conclusions from sequencing noise experiment in Section 4.3:** \\nWe want to highlight that the goal of the experiments in Section 4.3 was to assess the models under *realistic* sequencing noise. To this end, we sampled the noise profiles practically observed in real-world sequencing platforms (lines 430-434). Since no prior work has studied the impact of realistic sequencing noise on the quality of predicted structures, it was not clear how different classes of property prediction models perform in this realistic scenario and to what degree their performance deteriorates under realistic noise. Our experiment not only reveals that Transformer1D is more robust under high sequencing noise, but also that 2D models still perform the best under low-to-moderate noise regimes (now highlighted in blue in lines 479-480). 
Additionally, we reveal that the simple Transformer1D2D with a soft structural prior outperforms the rest of the models even in high noise regimes while still utilizing structural information (lines 473-476). We consider these insights significant since they cover real-world deployment scenarios and no prior work has investigated the impact of *realistic* sequencing noise on 1D, 2D and 3D property prediction methods. \\n\\n\\n**RQ1. Have the authors considered finding a 3D RNA dataset, such as the RNAsolo?** \\nWe thank the reviewer for the suggestion. However, note that the RNAsolo dataset does not include any property labels associated with these structures and hence cannot be used for our purpose of studying the impact of sequence and structural information for property prediction tasks. Currently, we are not aware of any dataset containing triplets of 1D RNA sequences, experimentally determined 2D and 3D structures, and corresponding experimentally measured property labels per data point. This also highlights the importance of our work in a real-world setting where both experimentally determined geometric structures and property labels are unavailable. \\n\\n\\n**RQ2. What specific properties are being referred to, and what are the units for the predicted values?** \\nWe have described the properties in Subsection 2.1 of the main text. To summarize, the properties we model are Riboswitch switching behavior for the Tc-Riboswitches dataset, nucleotide degradation for the COVID-19 dataset, reactivity for the Ribonanza-2k dataset, and expression for the Fungal dataset. \\n\\nFor Tc-Riboswitches, the labels are percentages reflecting switching behaviors. For the COVID-19 and Ribonanza-2k datasets, the degradation and reactivity labels are normalized intensities (hence without units), and for the Fungal dataset, the expression labels are measured in transcripts per million (TPM) per kilobase million (RPKM). 
\\n\\n&nbsp;\\n\\nPlease let us know if there are any further questions. Thanks a lot!\"}", "{\"comment\": \"Dear authors, thank you for your responses. Apart from the misplaced references of SO3krates and HEGNN in Line 1088, I think your summary is quite accurate. I raise my rating to \\\"Weakly Accepted\\\".\"}", "{\"comment\": \"As we approach the end of the discussion period, we kindly request your review of our detailed responses and revisions, which aim to comprehensively address all the concerns you raised. We respectfully ask whether you would be willing to increase your score if you are happy with how we have addressed your suggestions. Thank you!\"}", "{\"title\": \"Thank you for raising your score!\", \"comment\": \"We thank the reviewer for raising their score and acknowledging our rebuttal responses! We appreciate your valuable suggestions during the discussion process, which helped improve our work.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWith several hours remaining until the rebuttal deadline, we hope we have successfully addressed your concerns and questions. If you find our responses satisfactory, we kindly ask you to consider raising your score. \\n\\nWe truly appreciate your time, effort, and valuable contributions to improving our work. \\n\\nBest regards, \\nThe Authors.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback. We have carefully responded to all your concerns and would be grateful if you could consider raising your score.\\n\\nAs the rebuttal deadline approaches, we would greatly appreciate your feedback at your earliest convenience. 
Thank you again for your time and thoughtful suggestions!\\n\\nBest regards, \\nThe Authors\", \"title\": \"Friendly request for feedback on responses to your questions...\"}", "{\"title\": \"Thank you for updating your score\", \"comment\": \"Thank you very much for your useful feedback, kind understanding, and raising your score!\"}", "{\"comment\": \"We thank the reviewer for raising their score based on our response. We would appreciate it if the reviewer could point us to the concerns that still remain, so that we can address them from our side to help raise their score even further. Thank you!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your valuable feedback. We have carefully responded to all your concerns. Could you kindly review your rating and consider our rebuttal? We would be grateful if you could consider raising your score.\\n\\nAs the rebuttal deadline approaches, we would greatly appreciate your feedback at your earliest convenience. Thank you again for your time and thoughtful suggestions!\\n\\nBest regards, \\nThe Authors\", \"title\": \"Friendly request for feedback on responses to your questions...\"}", "{\"title\": \"Thank you for strongly advocating for our work!\", \"comment\": \"We thank the reviewer for raising their score and championing our work, highlighting our contributions as novel and meriting publication owing to the utility of the real-world datasets and experiments for the field!\"}", "{\"summary\": \"The paper provides a curated set of RNA datasets with annotated 2D and 3D structures and investigates RNA property prediction using various deep learning models, focusing on 1D (nucleotide sequences), 2D (graph representations), and 3D (atomic structures) approaches. Key findings reveal that 2D models generally outperform 1D models, while 3D models excel in noise-free scenarios but are sensitive to noise. In contrast, 1D models show greater robustness in noisy and OOD conditions. 
The authors emphasize the trade-offs of each approach and advocate for future research to integrate the strengths of all models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This study provides a thorough comparison of 1D, 2D, and 3D models, showcasing their respective strengths and weaknesses in handling RNA data.\\n2. This study provides a comprehensive analysis of various deep learning models, assessing their performance under different conditions, including limited data and labels, different types of sequencing errors, and out-of-distribution scenarios, which is crucial for real-world applications.\\n3. The commitment to transparency and reproducibility by making methodologies, datasets, and code publicly available promotes collaborative progress in the field.\", \"weaknesses\": \"1. The article lacks methodological innovation, missing deep improvements on existing technologies and novel algorithm designs.\\n2. The article merely compares various metrics, and the key points are not sufficiently emphasized.\\n3. Both the secondary and tertiary structures are predicted using software, particularly the tertiary structure, which is not very accurate. This can lead to significant uncertainties in further property predictions.\", \"questions\": \"1. In recent years, many RNA language models have been proposed, applicable to various downstream tasks. Compared to these models, what are the advantages and disadvantages of the methods mentioned in the article?\\n2. What are the criteria for selecting secondary structure prediction tools, and is there a detailed analysis and further experimentation?\\n3. 
Please explain the motivation behind the five task settings, as these experiments may make the article appear cumbersome and the key points unclear.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for Clarification on Score-Comment Alignment\", \"comment\": \"Dear Reviewer JSSw,\\n\\nWe noticed that your review mentions, \\u201cthe overall quality is marginally acceptable,\\u201d yet the final score assigned is \\u201creject.\\u201d We kindly ask if you could check the score to ensure alignment with your comments, and consider our rebuttal at the same time. \\n\\nThank you for your time and consideration. \\n\\nBest regards, \\nThe Authors\"}", "{\"comment\": \"Thank you for your response. Some of my concerns have been addressed, so I have decided to raise my score.\"}", "{\"title\": \"Author Responses (1/2)\", \"comment\": \"We thank the reviewer for their feedback, for appreciating the thoroughness and comprehensiveness of our experiments, and for highlighting the value of our newly annotated RNA data. We address the questions and comments raised by the reviewer point-by-point:\\n\\n**RW1: Lacking methodological innovation:** \\nWe appreciate the reviewer's observation about the simplicity of the proposed framework. We would like to emphasize that our paper sits under \\u201cdatasets and benchmarks\\u201d, a subject area highlighted in the ICLR call for papers. Our main contributions are: 1) introducing first-of-their-kind RNA datasets with all 1D, 2D and 3D structures and property labels; 2) providing a modular unified testing environment for benchmarking 1D, 2D and 3D property prediction models; 3) studying existing 1D, 2D and 3D models to assess the impact of geometric information in a range of real-world scenarios and establish baselines for future research in this direction. No such study exists to date despite the importance of RNA as a therapeutic modality. 
\\n\\n\\n**RW2: The article merely compares various metrics, and the key points are not sufficiently emphasized.** \\nIn this work, we provide a comprehensive comparison beyond just RNA property prediction accuracy, which includes a range of real-world challenges of modeling RNA: noise robustness, OOD generalization, data and label efficiency. We derived several key insights including: 2D spectral methods outperforming 1D and 3D methods in low-to-moderate noise regimes (highlighted in bold in line 259 and line 373); 3D models outperforming 1D models under limited data regime even despite the structural noise (highlighted in bold in line 383); 1D sequence methods require 2x-5x more training data to match the performance of 2D and 3D methods (line 379), but excel in high noise regime (highlighted in bold in line 462); simple Transformer1D2D with soft structural prior outperforms the rest of the models even in high noise regimes while still utilizing structural information (lines 472-476). We consider these insights significant since they cover a range of real-world scenarios that have not been investigated in prior work. We would appreciate the suggestions from the reviewer on which key points require further elaboration and more emphasis. \\n\\n**RW3: Impact of uncertainties in secondary and tertiary structures for property predictions:** \\nWe wish to clarify that this is intentional and corresponds to the real-world setting reflecting the challenges practitioners face when modeling RNA properties. We are not aware of any dataset containing triplets of 1D RNA sequences, experimentally determined 2D and 3D structures, and corresponding experimentally measured property labels. This is due to the high cost and technical difficulty of experimentally determining RNA structures and measuring their properties. Consequently, practitioners must rely on predicted structures or fall back to 1D sequence models. 
Thus, the structural uncertainties are inherent and inevitable in practice and including them into analysis better reflects practical reality.\"}", "{\"title\": \"Author Responses (2/2)\", \"comment\": \"**RW2: Evaluation with more recent 3D methods:**.\\n\\nFollowing the reviewer's suggestion, we have now added a comparison of four additional 3D models (GVP, DimeNet, and recent FAENet and FastEGNN) in Table 1 and Figs. 2, 3, 5, 6 in the main text (highlighted in blue). However, we still find that all these models show similar performance on RNA property prediction tasks with 3D models failing to outperform 2D models. We further chose FastEGNN for all subsequent experiments and observe the similar trend as reported earlier with EGNN and SchNet. \\n\\n| Model | COVID | Ribonanza | Tc-Ribo | Fungal |\\n|---------------------------|---------------|-----------------|----------------|----------------|\\n| **1D model** | | | | |\\n| Transformer1D | 0.361\\u00b10.017 | 0.705\\u00b10.015 | 0.705\\u00b10.019 | 1.417\\u00b10.005 |\\n| **2D model** | | | | |\\n| Transformer1D2D | 0.305\\u00b10.012 | 0.514\\u00b10.004 | 0.633\\u00b10.001 | OOM |\\n| GCN | 0.359\\u00b10.009 | 0.509\\u00b10.004 | 0.640\\u00b10.005 | 1.192\\u00b10.077 |\\n| GAT | 0.315\\u00b10.006 | 0.534\\u00b10.006 | 0.603\\u00b10.004 | 1.112\\u00b10.035 |\\n| ChebNet | 0.279\\u00b10.015 | 0.499\\u00b10.005 | 0.599\\u00b10.001 | 1.018\\u00b10.023 |\\n| Graph Transformer | 0.318\\u00b10.008 | 0.500\\u00b10.005 | 0.604\\u00b10.001 | 1.317\\u00b10.002 |\\n| GraphGPS | 0.332\\u00b10.013 | 0.523\\u00b10.003 | 0.610\\u00b10.012 | 1.025\\u00b10.081 |\\n| **3D model (w/o pooling)** | | | | |\\n| EGNN (w/o pooling) | 0.480\\u00b10.025 | 0.808\\u00b10.023 | 0.725\\u00b10.002 | OOM |\\n| SchNet (w/o pooling) | 0.499\\u00b10.030 | 0.843\\u00b10.004 | 0.704\\u00b10.001 | OOM |\\n| FAENet (w/o pooling) | 0.486\\u00b10.010 | 0.834\\u00b10.006 | 0.703\\u00b10.004 | OOM |\\n| DimeNet (w/o pooling) | 0.467\\u00b10.010 | 
0.797\\u00b10.012 | 0.712\\u00b10.004 | OOM |\\n| GVP (w/o pooling) | 0.467\\u00b10.010 | 0.797\\u00b10.012 | 0.744\\u00b10.004 | OOM |\\n| FastEGNN (w/o pooling) | 0.477\\u00b10.005 | 0.816\\u00b10.014 | 0.753\\u00b10.001 | OOM |\\n| **3D model (with nuc. pooling)** | | | | |\\n| EGNN (nuc. pooling) | 0.364\\u00b10.003 | 0.619\\u00b10.007 | 0.663\\u00b10.010 | OOM |\\n| SchNet (nuc. pooling) | 0.390\\u00b10.006 | 0.685\\u00b10.006 | 0.655\\u00b10.038 | OOM |\\n| FastEGNN (nuc. pooling) | 0.444\\u00b10.003 | 0.753\\u00b10.015 | 0.710\\u00b10.011 | OOM |\\n\\n\\n&nbsp;\\n\\nPlease let us know if there are any further questions. Thanks a lot!\"}", "{\"summary\": \"This paper presents a systematic evaluation of incorporating explicit geometric information into RNA property prediction, considering not only performance but also real-world challenges such as limited data availability, partial labeling, sequencing noise, and computational efficiency. To this end, authors introduce a newly curated set of RNA datasets with enhanced 2D and 3D structural annotations, providing a resource for model evaluation on RNA data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"For the field of AI for Science, high-quality datasets are very important assistants. This article integrates four datasets, which contain a large number of data samples of various types.\", \"weaknesses\": \"> **W1. Lack of explanation for RNA's uniqueness.**\\n\\nIn section 3, it seems that a task of general sequence molecules is defined, and it is not specifically for RNA. Is there any fundamental difference in methodology between them and other sequenced molecules (such as proteins and DNA), except that the molecular composition may be slightly different?\\n\\n> **W2. The method used is relatively old.**\\n\\nThere are many new works for 1D sequences and 2D topological graphs, which are not elaborated here. 
As for 3D geometric graph neural networks, many latest works are not included in the comparison. In fact, the EGNN used has been pointed out by some literature to have form capacity limitations [a], which makes the conclusions drawn in the article unreliable. More and stronger baselines should be added. You can refer to these surveys [b,c,d], and add baselines such as SaVeNet [e], LEFTNET [f], FAENet [g], TFN [h], NequIP [i], MACE [j], EquiformerV2 [k], EPT [l], MEAN [m], dyMEAN [n], etc.\\n\\n[a] On the expressive power of geometric graph neural networks\\n\\n[b] A Survey of Geometric Graph Neural Networks: Data Structures, Models and Applications\\n\\n[c] A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems\\n\\n[d] Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems\\n\\n[e] SaVeNet: A Scalable Vector Network for Enhanced Molecular Representation Learning\\n\\n[f] A new perspective on building efficient and expressive 3D equivariant graph neural networks\\n\\n[g] FAENet: Frame Averaging Equivariant GNN for Materials Modeling\\n\\n[h] Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds\\n\\n[i] E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials\\n\\n[j] Mace: Higher order equivariant message passing neural networks for fast and accurate force fields\\n\\n[k] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations\\n\\n[l] Equivariant Pretrained Transformer for Unified Geometric Learning on Multi-Domain 3D Molecules\\n\\n[m] Conditional Antibody Design as 3D Equivariant Graph Translation\\n\\n[n] End-to-End Full-Atom Antibody Design\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces a curated set of RNA datasets with enhanced 2D and 3D 
structural annotations, providing a valuable resource for evaluating RNA property prediction models. It systematically investigates the impact of incorporating geometric information, showing that models with explicit geometry encoding generally outperform sequence-based models, while the latter are more robust under noise but require more training data. The paper emphasizes the trade-offs between 1D, 2D, and 3D models, with 2D models typically outperforming 1D ones, and 3D models excelling in noise-free conditions but being sensitive to noise.\", \"strengths\": \"1. The study compiles a large and diverse set of RNA sequence data across a broad range of nucleotide count intervals, ensuring comprehensive coverage.\\n\\n2. This work provides an in-depth comparison of 1D, 2D, and 3D models, conducting a thorough analysis of various deep learning approaches and evaluating their performance under different conditions, which is essential for real-world applications.\\n\\n3. The authors demonstrate a strong commitment to transparency and reproducibility by making their methodologies, datasets, and code publicly available, fostering collaborative progress in the field.\", \"weaknesses\": \"1. The methods employed are relatively outdated, with recent advances in 1D sequence models and 2D topological graph representations not fully addressed. Additionally, newer works on 3D geometric graph neural networks are not included in the comparison.\\n\\n2. The models used in the study are essentially existing approaches, with no novel model proposed by the authors.\\n\\n3. The analysis of the experimental results remains somewhat limited and could benefit from further depth.\\n\\nSince this paper primarily focuses on datasets and benchmarks, the novelty of the methods is not as critical. The authors have also provided additional experiments with more recent models and detailed explanations of these results. 
Following the rebuttal, all reviewers reached a consensus and expressed positive feedback about the submission. Therefore, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors addressed the following points:\\n\\nIn response to concerns from Reviewers JSSw and HBUE, the authors provided additional experiments with recent models, which satisfied both reviewers.\\n\\nRegarding the limited methodological innovation, as pointed out by Reviewers JSSw and k6pU, the authors clarified that the paper focuses on \\\"datasets and benchmarks,\\\" which still offers valuable contributions to the field. Both reviewers expressed satisfaction with these explanations.\\n\\nReviewer HBUE raised additional concerns about including more baselines (high-degree models) and FastEGNN settings, which the authors have adequately addressed.\\n\\nOverall, Reviewers JSSw and HBUE were actively engaged during the author-reviewer discussion period, while Reviewer k6pU did not participate but agreed to raise the score during the AC-reviewer discussion phase. In summary, all concerns were addressed, and all reviewers gave positive feedback on the revised submission.\"}", "{\"comment\": \"**RD4. About models using high-degree steerable features**\\n\\nThanks for your suggestions. We actually tried to use a smaller batch size. In the paper, we used a batch size of 96 consistently for all baselines. But for methods using spherical harmonics, even batch sizes of 16 or 8 face OOM issues. \\n\\nThank you for the nice suggestion about HEGNN and SO3krates. We have now added Appendix E.1 to the paper and referred to it in the main text (lines 537-538) to discuss the possible benefits of high-degree steerable features. We will continue to run the additional baselines mentioned by the reviewer to make the benchmark more solid and will update it in the camera-ready version. \\n\\n**RD5. 
Discussion on the inapplicability of FastEGNN** \\n\\nThanks for your understanding. We added the discussion about FastEGNN in our paper (see lines 295 - 299). We appreciate FastEGNN\\u2019s significant contributions to this field and believe it will be a future direction to explore. \\n\\n\\n**RD6. Discussion on considering noise in model design** \\n\\nBased on the analysis in our paper, one of the most effective ways to deal with noise data may be the ensemble method. Our experiments show that though the 1D method does not perform well on clean datasets, the 1D method is more robust to noise. An effective way to handle noise can be through ensemble methods. For instance, combining 1D, 2D, and 3D models by independently learning representations and integrating them via attention mechanisms that dynamically weigh each modality based on task relevance and noise can leverage the robustness of 1D methods while benefiting from the strengths of 2D and 3D approaches and be a good direction for future research. \\n\\nWe included this part in the Appendix E.2 in our paper and referred to it main text (lines 535-537). \\n\\n&nbsp; \\n\\nWe hope this addresses your concerns and helps enhance our paper.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWith several hours remaining until the rebuttal deadline, we hope we have successfully addressed your concerns and questions. If you find our responses satisfactory, we kindly ask you to consider raising your score. \\n\\nWe truly appreciate your time, effort, and valuable contributions to improving our work. \\n\\nBest regards, \\nThe Authors.\"}", "{\"title\": \"Author Responses (1/2)\", \"comment\": \"We thank the reviewer for their feedback and appreciating the value and importance of our newly contributed RNA datasets for advancing the field of RNA property prediction. 
We address the questions and comments raised by the reviewer point-by-point:\\n\\n**RW1: Lack of explanation for RNA's uniqueness:** \\nWe thank the reviewer for raising this point. RNA differs fundamentally from proteins in both the availability of data and the nature of practical challenges in modeling. Unlike proteins, which benefit from extensive high-quality databases such as PDB [1] containing sequences, experimental structures, and experimentally measured property labels in benchmarks such as FLIP [2], RNA datasets are far more limited [4,5]. Existing RNA resources, like RNAsolo [3], provide 3D structures but lack property annotations, leaving a significant gap in benchmark datasets for RNA property prediction. Moreover, RNA is particularly susceptible to sequencing noise due to variability in platforms and quality [6, 7], a challenge less prominent in protein studies where experimental characterization techniques are more advanced. These distinctions mean that while the role of structural information is well-understood for proteins, it remains under-explored for RNA. Thus, the evaluation setups for protein and RNA cannot be compared directly. With regards to DNA, similar limitations hold as for RNA and hence we are not aware of high-quality structural datasets with property labels for DNA either and recent literature only chooses to model DNA sequences alone.\\n\\n\\n&nbsp;\", \"references\": \"[1] Protein Data Bank: the single global archive for 3D macromolecular structure data, Nucleic acids research, 2019. 
\\n[2] FLIP: Benchmark tasks in fitness landscape inference for proteins, Dallago et al., NeurIPS 2021 \\n[3] RNAsolo: a repository of cleaned PDB-derived RNA 3D structures, Adamczyk et al., Bioinformatics 2022 \\n[4] Translating rna sequencing into clinical diagnostics: opportunities and challenges, Byron et al., Nature Reviews Genetics, 2016 \\n[5] Reducing costs for dna and rna sequencing by sample pooling using a metagenomic approach, Teufel \\\\& Sobetzko, BMC genomics, 2022. \\n[6] Rna sequencing: advances, challenges and opportunities, Ozsolak \\\\& Milos, Nature reviews genetics, 2011. \\n[7] Accuracy of next generation sequencing platforms, Fox et al., Next generation, sequencing \\\\& applications, 2014.\"}" ] }
9hpcTgztk8
Document-Level In-Context Few-Shot Relation Extraction via Pre-Trained Language Models
[ "Yilmazcan Ozyurt", "Stefan Feuerriegel", "Ce Zhang" ]
Document-level relation extraction aims at inferring structured human knowledge from textual documents. State-of-the-art methods for this task use pre-trained language models (LMs) via fine-tuning, yet fine-tuning is computationally expensive and cannot adapt to new relation types or new LMs. As a remedy, we leverage the generalization capabilities of pre-trained LMs and present a novel framework for document-level in-context few-shot relation extraction. Our framework has three strengths: it eliminates the need (1) for named entity recognition and (2) for human annotations of documents, and (3) it can be updated to new LMs without re-training. We evaluate our framework using DocRED, the largest publicly available dataset for document-level relation extraction, and demonstrate that our framework achieves state-of-the-art performance. We further show that our framework actually performs much better than the original labels from the development set of DocRED. Finally, we conduct an extensive benchmark demonstrating the effectiveness of our framework, achieving state-of-the-art results across six relation extraction datasets and outperforming more than 30 baseline methods. Unlike our framework, the baseline methods have large computational overhead (e.g., from fine-tuning). To the best of our knowledge, we are the first to reformulate the document-level relation extraction task as a tailored in-context few-shot learning paradigm.
[ "relation extraction", "document", "in-context few-shot learning", "knowledge base", "large language models", "natural language processing" ]
Reject
https://openreview.net/pdf?id=9hpcTgztk8
https://openreview.net/forum?id=9hpcTgztk8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xOYBn9DDqi", "uzWeWn8Mf7", "tjXfb3LOt1", "lQCnDsseW9", "UH8MvITpg3", "2tJLhXShH0" ], "note_type": [ "official_review", "official_review", "official_review", "decision", "official_review", "meta_review" ], "note_created": [ 1729613001467, 1730453722400, 1730632277827, 1737523990164, 1730710238774, 1734837521503 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9545/Reviewer_iEps" ], [ "ICLR.cc/2025/Conference/Submission9545/Reviewer_GY8a" ], [ "ICLR.cc/2025/Conference/Submission9545/Reviewer_3STQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9545/Reviewer_98mo" ], [ "ICLR.cc/2025/Conference/Submission9545/Area_Chair_WHdS" ] ], "structured_content_str": [ "{\"summary\": [\"The study proposes a method for few-shot document-level relation extraction based on prompting large language models.\", \"The approach first selects and ranks K exemplars from reference documents that are annotated using triples in knowledge bases (distant annotation). Documents are ranked through embedding similarity to the test document. This avoids the requirement for human-annotated few-shot data. These exemplars are aggregated in sets and used to prompt the language model multiple times, providing a weighted probability distribution for candidate entity pairs.\", \"The method is aimed at simplifying document-level relation extraction by employing pretrained language models without subsequent fine-tuning. The authors claim that this method enables adaptation to new relation types and data domains. 
Moreover, the suggested approach does not rely on entity recognition systems.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The benchmark includes document-level and sentence-level relation extraction datasets as well as various decoder-only language models as backbones, showing promising performance results.\", \"The proposed REPLM method does not require entity recognition or human annotated training data and facilitates the adoption of different LM backbones.\", \"The authors provide ablation studies evaluating whether their LM backbones learn the relation extraction task or recall entity relations from pretraining data. The results suggest that LMs do not rely on named entities encountered during pretraining.\"], \"weaknesses\": [\"Parameter comparisons with baseline approaches are missing. REBEL-large is based on BART-large with 0.4 billion parameters, while the smallest REPLM backbone GPT-JT contains 6 billion parameters.\", \"The results in Table 4 show that approaches involving significantly smaller LM backbones, such as ATLOP or DREEAM based on RoBERTa-large (0.4 billion parameters), outperform REPLM models employing GPT-JT (6 billion parameters), Llama 3.1 (8 billion parameters), and GPT-3.5.\", \"Model parameters and inference resources are not transparent. The proposed REPLM method requires multiple inference steps for each relation type and test document. While the authors highlight fine-tuning as a limitation of related methods, the computational requirements for retrieval and generation are not reported.\", \"GPT-JT model is trained on Natural Instructions dataset, which includes various tasks involving entity and relation extraction based on Wikidata and Wikipedia documents. 
The authors should assess the level of data contamination and the impact on model performance.\"], \"questions\": [\"How important is document similarity for REPLM performance?\", \"Did you evaluate REPLM for relation types that are included in the reference documents (few-shot exemplars), but not in the test documents? How do you deal with hallucinations?\", \"How many few-shot exemplars (K) were used for (i) the baselines, and (ii) REPLM in Table 4?\", \"I suggest providing a color scale for the F1 Scores in Figure 2\", \"Tables 6-24 exceed the page margins\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents REPLM, a novel framework for document-level in-context few-shot relation extraction using pre-trained language models (LMs) without fine-tuning. The framework addresses key limitations of existing relation extraction methods by eliminating the need for named entity recognition and human-annotated data, and by supporting adaptation to new relation types or language models without retraining. Evaluated on DocRED and five other datasets, REPLM demonstrates state-of-the-art performance, showing robust generalization across relation extraction tasks without large computational overhead.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Innovative Contribution: The authors successfully reformulate document-level relation extraction as an in-context few-shot learning problem, a fresh perspective in this domain.\\n2. Computational Efficiency: By avoiding fine-tuning, the REPLM framework provides a scalable alternative to traditional methods that require extensive resources, enabling adaptability across diverse relation extraction tasks.\\n3. 
Empirical Validation: Extensive benchmarking on DocRED and multiple datasets shows REPLM\\u2019s superior performance compared to over 30 baseline methods, demonstrating both efficacy and generalizability.\", \"weaknesses\": \"1. Limited Error Analysis: The paper does not offer a detailed error analysis to identify specific instances where REPLM might underperform or fail, which would help understand its limitations.\\n2. Lack of Comparison with Entity-Based Approaches: Although REPLM outperforms entity-based baselines, the paper could benefit from a clearer discussion contrasting its performance with entity-recognition pipelines to highlight the advantages of an in-context few-shot approach.\", \"questions\": \"1. Could the authors provide further insight into potential applications or adjustments needed for REPLM in real-world deployment scenarios?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents REPLM, designed to enhance document-level relation extraction by leveraging in-context few-shot learning with pre-trained language models (LMs). This approach circumvents the traditional reliance on named entity recognition and extensive human annotations, significantly reducing computational costs. REPLM employs in-context few-shot learning using LMs like GPT-J, enabling it to adapt to new datasets and LMs without retraining. 
The framework achieves state-of-the-art results on the DocRED dataset and outperforms over 30 baseline methods across multiple datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"REPLM successfully reduces the need for named entity recognition and human annotations, making it less resource-intensive and potentially more scalable and adaptable to new datasets and LMs without retraining.\", \"The framework is shown to achieve state-of-the-art results on the DocRED dataset as well as across six other relation extraction datasets, outperforming over 30 baseline methods which demonstrates its effectiveness.\"], \"weaknesses\": [\"The performance of the REPLM framework heavily depends on the relevance and quality of the in-context examples used. This could potentially limit its effectiveness if the available examples are not of high quality or if they are not well-aligned with the specific relations being extracted. Moreover, The method might inherit biases from the in-context examples. If these examples are biased or not sufficiently diverse, the extracted relations might also reflect these biases.\", \"Based on the experimental results (e.g., Table 4), it raises questions about whether the performance improvement is genuinely driven by the proposed in-context few-shot learning paradigm or primarily attributed to the use of larger-parameter LMs. Moreover, as shown in Table 5, performance of Llama 70B is much higher than that of Llama 8B.\", \"Given that this paper focuses on the task of few-shot document-level relation extraction, it is noteworthy that several relevant baselines, such as [1, 2], are absent from the discussion.\", \"This paper highlights that REPLM successfully generates more relations than REBEL (author, Chaosmosis, F\\u00e9lix Guattari), despite it not being annotated. 
This raises doubts about whether this achievement can be attributed to the effectiveness of the proposed method or simply the power of the large language model (LLM) utilized in the implementation.\", \"[1] Meng, Shiao, Xuming Hu, Aiwei Liu, Fukun Ma, Yawen Yang, and Lijie Wen. \\\"RAPL: A Relation-Aware Prototype Learning Approach for Few-Shot Document-Level Relation Extraction.\\\" In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pp. 5208-5226. 2023.\", \"[2] Popovic, Nicholas, and Michael F\\u00e4rber. \\\"Few-Shot Document-Level Relation Extraction.\\\" In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 5733-5746. 2022.\"], \"questions\": \"Please see the weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents a new framework called REPLM for document-level few-shot relation extraction using pre-trained language models (LMs). The key idea is to reformulate relation extraction as a tailored in-context few-shot learning paradigm without requiring named entity recognition, human annotations, or re-training when adding new relations or adopting new LMs. Specifically, for a given document and relation type, REPLM retrieves sets of most relevant in-context examples and aggregates their outputs in a probabilistic manner to extract the relational triplets. The authors evaluate REPLM on the DocRED dataset and demonstrate state-of-the-art performance.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors provide strong motivation for studying document-level relation extraction in Section 1. 
They highlight important challenges (e.g., expensive annotations, inflexibility to new relations and LMs) and explain how REPLM addresses them.\\n\\n2. REPLM tackles document-level RE from a fresh perspective of in-context few-shot learning (Section 3). This eliminates the need for per-document human annotations and de-couples RE from NER, making it robust to NER errors. The new formulation also enables adapting to new relations/LMs without re-training.\\n\\n3. Extensive experiments on the large-scale DocRED benchmark (Section 6.1) show that REPLM achieves state-of-the-art F1 scores across all metrics, with gains of 1.2% - 4.3% over fine-tuned LMs (Table 3). Compared to recent in-context learning methods (Section 6.2), REPLM attains 10+ F1 improvements while being much more efficient.\", \"weaknesses\": \"1. Though impressive, the empirical results are limited to only one dataset DocRED. Given the strong claims made (e.g., \\\"significantly outperforms SOTA methods\\\", \\\"our framework can generalize to different relation types and domains\\\"), it is crucial to evaluate REPLM on diverse document-level RE datasets, such as SciREX, CDR, GDA. You could propose evaluating performance on unseen relation types or testing zero-shot transfer between datasets.\\n\\n2. Two critical hyperparameters in REPLM, namely the candidate pool size N (Section 4.1) and the number of sampled subsets L (Section 4.2), are not systematically studied. How do these design choices impact the performance and computational costs? Experiments with varying N and L should be conducted to investigate the sensitivity. I suggest plotting performance vs. N/L to visualize the tradeoffs. What is computational complexity as a function of these parameters?\\n\\n3. While Section 6.3 analyzes REPLM against DocRED human labels, it is quite shallow and lacks specific examples. To better understand the behaviors of REPLM, more in-depth analysis is needed: Where does it make mistakes? 
Any limitations compared to human? A few representative success & failure cases would help strengthen the discussion in Section 6.4.\\n\\n4. Algorithm 1 in Section 4.1 is not clearly explained. Need to define all notations (e.g., kq, x, Cj) and use consistent formatting. Also specify what similarity function f is used.\", \"questions\": \"Section 6.4: Discuss the potential noise introduced by distant supervision when building the candidate pool. How might it impact REPLM and are there any ways to mitigate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper presents REPLM, a novel framework for document-level in-context few-shot relation extraction using pre-trained language models (LMs) without fine-tuning. It aims to address limitations of existing methods by eliminating the need for named entity recognition and human-annotated data, and enabling adaptation to new relation types or language models. Reviewers generally agree that the paper provides strong motivation for studying document-level relation extraction. The empirical results are mostly limited to the DocRED dataset, and reviewers questioned the generalizability claims given the lack of extensive testing on a diverse range of document-level RE datasets. There is a lack of detailed error analysis to understand where REPLM might underperform or fail. The paper currently has several significant weaknesses that need to be addressed before it can be considered for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"No Discussion.\"}" ] }
9hmDl8fFDs
Deep Complex Spatio-Spectral Networks with Complex Visual Inputs
[ "Saurabh Yadav", "Koteswar Rao Jerripothula" ]
Complex-valued neural networks have attracted growing attention for their ability to handle complex-valued data with enhanced representational capacity. However, their potential in computer vision remains relatively untapped. In this paper, we introduce Deep Complex Spatio-Spectral Network (DCSNet), a fully complex-valued token-based, end-to-end neural network designed for binary segmentation tasks. Additionally, our DCSNet encoder can be used for image classification in the complex domain. We also propose an invertible real-to-complex (R2C) transform, which generates two complex-valued input channels, complex intensity and complex hue, while producing complex-valued images with distinct real and imaginary components. DCSNet operates in both spatial and spectral domains by leveraging complex-valued inputs and complex Fourier transform. As a result, the complex-valued representation is maintained throughout DCSNet, and we avoid the information loss typically associated with Real$\leftrightarrow$Complex transformations. Extensive experiments show that DCSNet surpasses existing complex-valued methods across various tasks on both real and complex-valued data and achieves competitive performance compared to existing real-valued methods, establishing a robust framework for handling both data types effectively.
[ "Deep Complex Newtworks", "Complex-valued color transformation" ]
Reject
https://openreview.net/pdf?id=9hmDl8fFDs
https://openreview.net/forum?id=9hmDl8fFDs
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yQojBKTNfg", "vW1mKNrkIW", "vHhSxsHHZH", "uv17UNXzv2", "ugBPo7BOA6", "tcCEo24jc3", "rXI7GWRZ3B", "rNluNSkjsD", "pKu08mNrMT", "jch0kSGctG", "jQrEy59tki", "hxc8DdvfVG", "fyu1wXHGTA", "eaHGca4CSk", "dc2Ajyhg02", "dRYxGD36qF", "c03jsl6RYW", "Z4cQe7hhDg", "VhLTymzZzz", "R4biDb5VbL", "P6WRmX51tR", "NZd4dsRwiw", "KlMFlMOYe9", "IncQc5aw67", "IIca8Jawge", "HKlRcq4Uiq", "Gy2g9MPxxQ", "AkwaEOYWVz", "9bxvmLrpJP", "8GfV5XpQlN", "776MFZqgVP", "5mPSlH6sQa", "5Gg7XuqIt6", "4ubQyP0Sbp", "4nilEgoqUB", "1jcRzoaHXI" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1733226458110, 1732470342450, 1732282420240, 1733178181983, 1733213011462, 1730543839409, 1732292406869, 1733313447168, 1732695669314, 1733194622117, 1732282285388, 1732293099815, 1732291270732, 1732530976701, 1733073197275, 1733128292809, 1730527140121, 1732455355151, 1733127727221, 1733041741974, 1733170555224, 1732476744222, 1734108841262, 1732993377570, 1733038881425, 1732553704841, 1737523444865, 1732480181030, 1733174487122, 1730292223401, 1733227894446, 1733085689221, 1733073255253, 1732554894964, 1730213020086, 1733159383943 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_BUvX" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_F89z" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Area_Chair_nR2G" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_QqAh" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_F89z" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Area_Chair_nR2G" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_QqAh" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_F89z" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_QqAh" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_BUvX" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_F89z" ], [ "ICLR.cc/2025/Conference/Submission1276/Reviewer_B2bV" ], [ "ICLR.cc/2025/Conference/Submission1276/Authors" ] ], "structured_content_str": [ "{\"title\": \"Discussion Follow-up (for Reviewer F89z)\", \"comment\": \"Dear Reviewer F89z,\\n\\nThank you very much 
for your thoughtful and constructive review of our paper. We truly appreciate the time and effort you have invested in providing us with valuable feedback. Your insights have been instrumental in helping us explain our work better. It was an absolute pleasure to have you as a reviewer.\\n\\nBased on our discussion, we believe we have addressed the concerns you raised and clarified any points of confusion. We hope these clarifications have resolved the key issues and align with your expectations.\\n\\nGiven that the other reviewers have rated the paper positively, we kindly request you to reconsider your current rating in light of these discussions. We believe that a consensus among reviewers could be reached and would be grateful for your updated evaluation.\\n\\nThank you once again for your time and for contributing to this review process.\\n\\n&nbsp;\\n\\nWarm regards,\\n\\nAuthors of #1276\"}", "{\"title\": \"Author response to reviewer F89z (part 2)\", \"comment\": \"## Weakness 2, Weakness 3 & Question 3: Real to Complex Conversion\\n\\nAs Reviewer BUvX also pointed out, complex-valued neural networks can indeed outperform their real-valued counterparts, but we believe this largely depends on the nature of the input data. For instance, the seminal work DCN [1] achieved similar results for complex-valued and real-valued networks on image classification tasks. One plausible reason for this is the use of the RGB color space for input, which is inherently real-valued. In contrast, for audio tasks\\u2014where complex-valued data is naturally available\\u2014complex-valued networks performed better, as reported in [1]. This underscores the importance of the input domain: for complex-valued networks to fully exploit their potential, the input data should ideally reside in the complex-valued domain.\\n\\nThis idea has been well-recognized in subsequent works, including FCCN [2], which proposed the iHSV color space to address this limitation. 
Complex-valued color models are critical for learning rich complex-valued representations, and we argue that the input data must also be complex-valued to facilitate this process effectively.\\n\\nIt is important to note that this goes beyond transformations like Fourier transforms, which can indeed produce complex representations but do not allow for intuitive visualization like we experience via images with discernible corners, edges, and shapes. A complex-valued color model bridges this gap: it not only provides complex representations but also enables visualization. This visualization capability is particularly vital for binary segmentation tasks, where spatial and structural information is paramount.\\n\\n\\n**Why a new complex-valued color model?**\\n\\nWhile iHSV has demonstrated success, ColorNet [3] shows that the choice of color model can significantly impact network accuracy. It also showed that using multiple color models simultaneously can further enhance accuracy while reducing the number of parameters. Therefore, having more complex-valued color models can certainly help research in the domain of complex-valued networks. Inspired by these insights, we explored RGB color model and searched for argand planes to develop a novel complex-valued color model, iRGB. Our experiments demonstrate that iRGB outperforms iHSV, as evidenced in Table 7 of our paper and the table below, which shows results on CIFAR10:\\n\\n| L+i0 & a+ib [4] &nbsp; | R+iG & G +iB [4] &nbsp;| Fourier Transform &nbsp;| iHSV [2] &nbsp; | RGB+i0 &nbsp; | iRGB (Ours) |\\n|------|-----|------|-----|--------|--------|\\n| 91.8 | 92.7 | 85.8 | 93.5 | 89.1 | **94.3**|\\n\\n\\n**Comparison with other existing/trivial transformations**\\n\\n\\nThe literature includes various real-to-complex transformations, such as the two encodings proposed in [4]: (i) L+i0 & a+ib (using L*ab space) and (ii) R+iG & G+iB (using RGB space). We also tried the trivial transformation RGB+i0 you suggested in Q3. 
Previous works, such as DCN [1] (Section 3.7) and FCCN [2] (Table 6), have also empirically shown that having both real and imaginary parts contributes to better performance. Our results reaffirm this, highlighting the superior performance of iRGB. As demonstrated in the table above, our iRGB surpasses all other methods, including iHSV, in terms of accuracy.\\n\\n\\n**Invertibility of the transformation**\\n\\n\\nRegarding invertibility, it is crucial to ensure no information loss occurs during transformations. Otherwise, the network would operate on incomplete data. We have validated this property for iRGB and included additional experiments and an enhanced demo in the supplementary material, as suggested by Reviewer BUvX.\\n\\n&nbsp;\\n&nbsp;\\n\\n[1] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks\\\", ICLR 2018.\\n\\n[2] Saurabh Yadav; Koteswar Rao Jerripothula, FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function. ICCV 2023.\\n\\n[3] Gowda, S.N., Yuan, C. (2019). ColorNet: Investigating the Importance of Color Spaces for Image Classification. In: Jawahar, C., Li, H., Mori, G., Schindler, K. (eds) Computer Vision \\u2013 ACCV 2018. \\n\\n[4] Utkarsh Singhal, Yifei Xing, Stella X. Yu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 681-690\"}", "{\"title\": \"Author response to reviewer BuvX (part 2)\", \"comment\": \"## Weakness-1 [part 2 of 2]: Average SOD performance\\nBelow, we compare the average performance across the five SOD datasets presented in Table 2. 
While our method achieves the best performance in terms of the maxF metric, it consistently ranks at least second-best across all metrics\\u2014a distinction no other method in the comparison achieves.\\n\\n| Method | $S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max} \\\\uparrow$ | $MAE\\\\downarrow$ |\\n|--------|--------|--------|--------|--------|\\n|PiCANet | 0.871 | 0.854 | 0.913 | 0.046 |\\n|BASNet | 0.872 | 0.856 | 0.909 | 0.051 |\\n|PoolNet | 0.879 | 0.864 | 0.914 | 0.049 |\\n|EGNet-R | 0.886 | 0.868 | 0.918 | 0.048 |\\n|MINet-R | 0.885 | 0.868 | 0.919 | 0.046 |\\n|LDF-R | 0.892 | 0.876 | 0.921 | `0.043` |\\n|CSF-R2 | 0.892 | 0.874 | 0.911 | 0.049 |\\n|GateNet-R | 0.888 | 0.874 | 0.922 | 0.047 |\\n|VST | `0.904` | **0.894** | `0.932` | 0.045 |\\n|FCCN | 0.824 | 0.813 | 0.878 | 0.078 |\\n|SCVUNet | 0.831 | 0.827 | 0.893 | 0.069 |\\n|DCSNet (Ours) | **0.893** | `0.895` | **0.930** | **0.044**| \\n\\n*Note: `Red` indicates best result, and **bold** indicates second best result.*\"}", "{\"title\": \"Incorporating Missing Results in Camera-ready Version\", \"comment\": \"Thank you for your thoughtful review and for raising the score. We deeply appreciate your recognition of the additional results and discussions we provided in our response, including those on invertibility, comparisons, and computational load metrics. We apologize for not including these in the main paper; this omission was unintentional and due to unforeseen circumstances related to the lead author's health.\\n\\nOur priority during the rebuttal phase was to address reviewers' concerns as thoughtfully and comprehensively as possible, which left us unable to integrate these results into the manuscript in time. However, we fully agree that these findings significantly strengthen our claims and results. 
We will ensure they are thoroughly incorporated into the camera-ready version, along with additional context and discussion, to maximize the paper's clarity and impact.\\n\\nThank you again for your constructive feedback and for recognizing the value of these results.\"}", "{\"title\": \"Author Response to further comments of Reviewer F89z\", \"comment\": \"`Github repository of DCN`\\n\\nThanks for pointing it out. However, as we have already stated, they perform complex-to-complex mapping in the audio domain, so it must have been used there. To maintain focus, it will be nice if we restrict the discussion to the computer vision domain, as emphasized earlier.\\n\\n&nbsp;\\n\\n`Real to Complex Conversion`\\n\\n*Additional Discussion:* \\n\\nThank you for finding the discussion valuable. We will make sure to include it in the camera-ready version.\\n\\n&nbsp; \\n\\n*Independence:*\\n\\nThank you for pointing this out and helping us refine our explanation. We acknowledge that we overlooked considering the polar representation of our transform. However, it is important to highlight an intriguing aspect: while magnitudes are related, the phases are entirely independent, which has significant implications. Prior works [1] and [2] have demonstrated how phase information can be effectively leveraged for object discovery and image classification tasks, respectively.\\n\\nThis phase independence could potentially explain why we observe improved performance with our iRGB color model. Accordingly, we would like to revise our earlier statement on independence to the following: our transform produces components that are phase-wise independent.\\n\\n&nbsp;\\n\\n`Binary Segmentation`\\n\\n*Mapping:*\\n\\nAs mentioned earlier, our goal was to design an end-to-end complex-valued neural network, so it obviously performs complex-to-complex mapping. 
To enable other types of mappings, we introduced the R2C transform at the input and the (1, i) encoding at the output, as pre-processing and post-processing steps, respectively.\\n\\nOur primary motivation for developing this end-to-end complex-valued neural network was to unlock the full potential of complex-valued architectures in the computer vision domain.\\n\\n&nbsp;\\n\\n*Background as $i$:*\\n\\nThis was intended as an analogy. Just as the foreground and background in an image are physically separate and distinct, we can treat them as mutually independent by representing one as the real part and the other as the imaginary part. Therefore, if $y$ denotes the usual binary segmentation map (i.e. foreground=1 & background=0) highlighting the foreground, then $1-y$ becomes the map highlighting the background. Representing these as the real and imaginary parts results in a complex-valued target map, $y\\u2032=y+i(1\\u2212y)$.\\n\\nIn this formulation, since $y=1$ for foreground pixels, all foreground pixels in $y\\u2032$ are represented by $1$. Similarly, since $y=0$ for background pixels, all background pixels in $y\\u2032$ are represented by $i$. As far as inference is concerned, if the output is a complex map $a+ib$, we compute the final output map in the real domain by taking the average of $a$ and $1\\u2212b$ maps. We hope this explanation clarifies why our $(1, i)$ encoding of the segmentation map is intuitive.\\n\\n\\n\\n&nbsp;\\n\\n*References:*\\n \\n[1] L\\u00f6we, S., Lippe, P., Rudolph, M., & Welling, M. (2022). Complex-valued autoencoders for object discovery. Transactions on Machine Learning Research.\\n\\n[2] Chen, G., Peng, P., Ma, L., Li, J., Du, L., & Tian, Y. (2021). Amplitude-phase recombination: Rethinking robustness of convolutional neural networks in frequency domain. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 
458-467).\"}", "{\"summary\": \"This paper presents a transformation to map RGB images into complex domain and an associate network comprising a loss function to handle such complex inputs.\\n\\nOverall, the contributions are significant and may be of interest to the community, but the paper organization could be improved and more focus should be placed on the transformation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1) The proposed transformation is novel and may be an important contribution for the community working in complex and hypercomplex domains.\\n2) Although not novel, using the fourier filter module is good to handle complex inputs.\", \"weaknesses\": \"1) While the method sounds, the results are not impressive and surely they are not statistically significant. As of my experience, complex, quaternion, and in general hypercomplex models clearly outperform real-valued counterparts when they are able to catch some underlying physical process intrinsic into data. Maybe, the tasks chosen by the authors do not highlight the effectiveness of their method. Maybe, the authors could stress more the parameters saving of using a complex model with respect to a real-valued one, which can help reducing the computational load while obtaining comparable results.\\n2) The authors should have focused more on the transformation, which is a novel contribution, and better show its properties (see questions).\\n3) Some key references to related works are missing, the authors should at least give credit to them, or better try to compare their method with them. Some of them follow, but I encourage the authors to better explore previous literature on complex, quaternion and hypercomplex networks.\\n\\n[1] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks\\\", ICLR 2017.\\n\\n[2] E. Grassucci, A. Zhang, D. 
Comminiello, \\\"PHNNs: Lightweight Neural Networks via Parameterized Hypercomplex Convolutions\\\", IEEE Transactions on Neural Networks and Learning Systems, (Volume: 35, Issue: 6, June 2024).\", \"minor_comments\": \"The Saxon Genitive should be avoided in scientific writing, although I know that both ChatGPT and Grammarly insert it. I suggest the authors to remove all the Saxon genitives in the paper.\", \"questions\": \"1) I am very curious about the invertibility of the proposed transform. Given the Algorithm 2 in Appendix A, would it be possible to have some experiments to prove its effectiveness? I think that this transformation is the real contribution of the paper, as it allows a direct mapping between greyscale and RGB images, which was lacking in complex and quaternion papers that often struggle to do so.\\n2) Which is the computational load in terms of FLOPs, runtime memory, and time of the proposed model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response to reviewer BUvX (part 4)\", \"comment\": \"## Weakness 3: Missed references and comparisons\\n\\nThank you for suggesting the missing references. We agree that these works should have been discussed. Our focus was on binary segmentation, and we inadvertently overlooked them as they primarily address image classification. To address this, we have included a comparison of our encoder with these and other complex-valued works on the ImageNet and CIFAR-10 datasets in the table below. 
The results clearly demonstrate that our method performs better.\\n\\nOn ImageNet:\\n\\n| Model | PHNN (ResNet50) | DCN | FCCN | DCSNet encoder|\\n|-----|------|-------|--------|--------|\\n| Acc (\\\\%) | 68.6 | 72.64 | 77.27 | **78.83**|\\n\\nOn CIFAR-10:\\n\\n| Model | PHNN (ResNet152) | DCN | FCCN | DCSNet encoder |\\n|-------|------|------|------|----------------|\\n| Acc (\\\\%)| 90.5 | 89.6 | 93.6 | **94.3** |\\n\\n*Note: **Bold** indicates best result.*\\n\\n\\nBelow, we briefly summarize these works:\\n\\n- DCN [1] introduced the foundational elements of complex-valued neural networks, demonstrating their utility in image classification tasks. FCCN [3] extended this by designing fully complex-valued networks through the use of complex convolutions across all layers. More recently, PHNN [2] proposed a generalized framework for hypercomplex networks, introducing parameterized hypercomplex convolutional layers that learn convolution rules from data using the Kronecker product. This approach provides flexibility for handling complex, hypercomplex, and quaternion representations.\\n\\n- While these prior works predominantly address image classification in the complex domain, our work uniquely focuses on binary segmentation in the complex domain. Furthermore, unlike FCCN's convolution-based design for developing end-to-end complex-valued networks, our approach leverages a token-based methodology to achieve the same goal. \\n\\n\\n[1] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks\\\", ICLR 2018.\\n\\n[2] E. Grassucci, A. Zhang, D. Comminiello, \\\"PHNNs: Lightweight Neural Networks via Parameterized Hypercomplex Convolutions\\\", IEEE Transactions on Neural Networks and Learning Systems, (Volume: 35, Issue: 6, June 2024).\\n\\n[3] Saurabh Yadav; Koteswar Rao Jerripothula, FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function. 
ICCV 2023.\"}", "{\"title\": \"Summary and Common Response\", \"comment\": \"We sincerely thank the reviewers for their valuable feedback and constructive discussions. We are encouraged by the acknowledgment of the novelty and contributions of our work.\\n\\n&nbsp;\\n\\n## Paper Summary:\\n\\nThis paper addresses a long-standing limitation in applying complex-valued neural networks to computer vision tasks while solving binary image segmentation entirely within the complex domain. Our work builds on the foundational principles of the FCCN [2] paper but extends the field significantly by introducing:\\n\\n- **iRGB Color Model:** A novel, invertible complex-valued color model, enabling complex-valued processing of RGB images. Our experiments show iRGB surpasses the iHSV color model (proposed by FCCN [2]), providing a robust foundation for future research.\\n\\n- **DCSNet:** A novel token-based complex-valued architecture exploring the frequency domain using complex Fourier transform and a multi-scale encoder-decoder structure. This approach learns rich complex-valued representations and achieves SOTA performance (while using complex-valued neural networks) across multiple binary image segmentation tasks. \\n\\n- **Complex Encoding for Segmentation Targets:** A unique (1, i) encoding method for binary segmentation maps, inspired by the physical independence of foreground and background elements, aligning seamlessly with the mathematical properties of complex numbers.\\n\\nBeyond achieving state-of-the-art results on real-valued and complex-valued datasets, our work broadens the application of complex-valued neural networks to more computer vision tasks. 
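As a concrete illustration of the (1, i) target encoding listed above, here is a minimal NumPy sketch of the encode/decode rules (illustrative only, not our training code):

```python
import numpy as np

def encode_targets(y):
    """Encode a binary mask y (foreground=1, background=0) as y' = y + i(1 - y):
    foreground pixels become 1 and background pixels become i."""
    y = y.astype(np.complex64)
    return y + 1j * (1.0 - y)

def decode_prediction(z):
    """Map a complex prediction z = a + ib back to a real-valued map by
    averaging the foreground evidence a and the background evidence 1 - b."""
    return 0.5 * (z.real + (1.0 - z.imag))

y = np.array([[1, 0], [0, 1]])               # toy 2x2 ground-truth mask
z = encode_targets(y)                        # foreground -> 1+0i, background -> 0+1i
assert np.allclose(decode_prediction(z), y)  # a perfect prediction decodes exactly
```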
For instance, our invertible R2C transform further opens opportunities in image synthesis and generation tasks, demonstrating the transformative potential of complex-valued representations.\\n\\n\\n&nbsp;\\n\\n\\n## Review & Discussion Summary:\\n\\n**Reviewer BUvX:**\\nBUvX acknowledged the potential of complex-valued networks but raised concerns about comparisons and additional analyses. We provided detailed comparisons with real-valued counterparts, computational efficiency results for DCSNet, and further insights into our R2C transform. While our responses appeared satisfactory, the reviewer was disappointed these updates were not included in the revised manuscript. We acknowledge this oversight and will include them in the camera-ready version.\\n\\n**Reviewer F89z:**\\nF89z recognized the novelty of our R2C transform but sought clarification on its advantages over existing real-to-complex transforms, including simpler alternatives. We provided detailed explanations and quantitative comparisons to highlight its benefits, addressing these points thoroughly. F89z was further interested in the advantages offered over the trivial [R+Gi,G+Bi] transform, and we explained them too. Upon F89z\\u2019s suggestion, we revisited one of the advantages and revised it too. \\n\\nAdditionally, F89z requested insights into the improvements our DCSNet offers over DCN [1]. We clarified that our approach remains fully within the complex domain, unlike DCN [1], which does not solve computer vision tasks entirely in the complex domain. F89z also raised questions about local feature learning and our (1,i) encoding, which we addressed in detail. \\n\\nWhile we have responded to all concerns of F89z, we have not yet received further communication regarding re-evaluation of the score.\\n\\n**Reviewer QqAh:**\\nQqAh sought clarity on the differences between our work and FCCN [2] and requested further details on R2C and architectural choices. 
We addressed all queries, providing detailed explanations and additional experimental results. The reviewer was satisfied but suggested exploring wavelet filters in future work.\\n\\n**Reviewer B2bV:**\\nB2bV was highly positive, awarding a score of 8, and requested clarification on result interpretation. We highlighted how our method surpassed 90% performance and achieved a 2.86% improvement over prior complex-valued approaches.\\n\\n\\n&nbsp;\\n\\n`FINAL REMARK:` We believe this paper is a significant step toward realizing the full potential of complex-valued neural networks in computer vision. We hope the advancements made, their implications and the review discussions will be carefully considered while making the final decision.\\n\\n&nbsp;\\n\\nWarm Regards,\\n\\nAuthors of #1276\"}", "{\"title\": \"Author response to reviewer QqAh (part 1)\", \"comment\": \"## Weakness 1: DCSNet vs FCCN\\n\\nWe thank the reviewer for the comment and appreciate the opportunity to clarify the differences between our proposed DCSNet and the FCCN. While both models maintain complex-valued information throughout, their design, application, and computational properties significantly differ. Below, we highlight these distinctions:\\n\\n**Application Domain**: FCCNs were primarily designed for image classification, whereas our DCSNets are specifically tailored for binary image segmentation. 
This fundamental difference in objectives influences both architectural and operational choices.\\n\\n**Processing Mechanism**: FCCNs utilize a sliding-window concept during convolution, while DCSNets leverage a tokenization approach inspired by transformer architecture, enabling efficient feature extraction and processing.\\n\\n**Complexity**: FCCNs exhibit quadratic complexity due to the convolution operation involved, whereas DCSNets achieve log-linear complexity due to the incorporation of the Fast Fourier Transform (FFT), making DCSNets computationally more efficient.\\n\\n**Operational Domains**: FCCNs operate entirely in the spatial domain, while DCSNets uniquely combine operations in both the spatial and frequency domains, facilitating a richer representation of complex-valued data.\\n\\n**Computational Efficiency and Performance**: DCSNet outperforms FCCNs both in terms of computational requirements and performance, as shown in the comparison table below:\\n\\n| | FCCN (Resnet152) | DCSNet-encoder|\\n|---|------|-------|\\n|gFLOPS $\\\\downarrow$ | 28 | **6** |\\n|GPU memory (MB) $\\\\downarrow$| 330 | **132**|\\n| Time (ms) $\\\\downarrow$| 61 | **13** |\\n| Params (M) $\\\\downarrow$| 59.8 |**16.9**|\\n| Top-1 Accuracy $\\\\uparrow$| 77.3 |**78.8**|\\n\\n\\nThese results demonstrate that our DCSNet-encoder not only achieves better accuracy but also significantly reduces computational costs, making it a superior choice, especially for resource-constrained environments.\\nWe hope this clarification addresses the reviewer\\u2019s concerns and illustrates the unique contributions and advantages of our DCSNet over FCCN. Thank you for your feedback, and we are happy to provide additional details if needed.\"}", "{\"comment\": \"Thanks for the clarifications,\\nThanks for the discussion on DCN. 
Even though the authors of DCN did not use a complex dense layer for the image classification tasks, they do provide the recipe for it [github](https://github.com/ChihebTrabelsi/deep_complex_networks/blob/master/complexnn/dense.py).\\n \\n`Regarding the real to complex conversion`\\n\\nPlease add this discussion in the main text, as this 'real to complex' transformation is the paper's core contribution.\\n\\n**Independence of Color Components** The authors argued that in $ [R+iG, G+iB] $, the components are not independent. In the proposed method, aren't the magnitudes of the vectors $v$ and $u$ related? If they are related, then the complex components are not completely independent.\\n\\n`Binary Segmentation`\\n\\n\\\"To compare our complex output against the ground truth...\\\" Does this mean the proposed model is unsuitable for mapping complex input to real output? Otherwise, this choice seems forced.\\n\\nAlso, I am unsure if I follow the reasoning of assigning $i$ instead of $0$ to the background to make it independent of the foreground.\"}
While we have clearly outperformed the existing works on complex-valued neural networks in each of these tasks (see Tables 2-5; as acknowledged by reviewer B2bV as well), our results are currently comparable to state-of-the-art methods that operate exclusively in the real domain.\", \"As you rightly pointed out, complex-valued networks are usually efficient on computational front, we tried to collect the number of parameters of as many methods as possible and have reported them now in the manuscript. As expected, we can clearly notice that our number of parameters is on the lower side while obtaining comparable results. We appreciate your valuable feedback, which helped highlight this important aspect of our work.\", \"We would like to emphasize that the competing real-valued methods cannot be considered direct counterparts to our approach, as their network architectures differ significantly. To ensure a fair comparison, we adapted our models by converting their parameters to real-valued ones. 
Our results are indeed better than the real-valued counterparts, as can be seen in the tables below:\"], \"for_salient_object_detection\": \"| Dataset | | DUTS | | | | HKU-IS | | | | ECSSD | | |\\n|---------------|---------|--------|-----------|---------|---------|--------|-----------|---------|---------|--------|-----------|---------|\\n| | $S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ | $S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |$S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |\\n| Real-DCSNet | 0.864 | 0.832 | 0.912 | 0.043 | 0.908 | 0.920 | 0.948 | 0.041 | 0.907 | 0.912 | 0.956 | 0.034 |\\n| DCSNet | **0.894** | **0.874** | **0.941** | **0.039** | **0.927** | **0.945** | **0.960** | **0.034** | **0.917** | **0.924** | **0.967** | **0.029** |\\n\\n\\n\\n\\n| Dataset | | PASCAL-S | | | | DUT-O | | |\\n|---------|---------|:-----------:|-----------|---------|---------|--------|-----------|---------|\\n| |$S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |$S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |\\n|Real-DCSNet| 0.847 | 0.841 | 0.896 | 0.069 | 0.817 | 0.747 | 0.851 | 0.062 |\\n|DCSNet| **0.866** | **0.850** | **0.903** | **0.062** | **0.839** | **0.776** | **0.880** | **0.056** |\\n\\n&nbsp;\", \"for_defocus_blur_detection\": \"| Method | DUT | | CUHK | |\\n|------|-----|-----|-----|-----|\\n| | $\\\\mathcal{F}_{\\\\beta} \\\\uparrow$ | MAE$\\\\downarrow$ | $\\\\mathcal{F}_{\\\\beta} \\\\uparrow$ | MAE$\\\\downarrow$ |\\n| Real-DCSNet | 0.841| 0.121 | 0. 
899 | 0.064 |\\n| DCSNet | **0.894**| **0.058** | **0.907** | **0.045**|\\n\\n&nbsp;\", \"for_shadow_detection\": \"| Method | ISTD | SBU |\\n|-----|-------|-------|\\n| | BER $\\\\downarrow$ | BER $\\\\downarrow$ |\\n| Real-DCSNet | 1.83 | 3.45 |\\n| DCSNet | **1.49** | **3.05** |\\n\\n&nbsp;\", \"for_insar_foreground_extraction\": \"| Method | mIoU $\\\\uparrow$|\\n|-------|---------|\\n|Real-DCSNet | 0.85 |\\n|DCSNet| **0.89** |\\n\\n*Note: **Bold** indicates better result.*\\n\\n\\n- In order to develop complex-valued binary segmentation networks, we required a backbone (encoder) trained on a large dataset such as ImageNet, as no pre-trained token-based complex-valued network currently exists. Therefore, we trained such a network from scratch and reported the results in Table 1(a). Interestingly, our network outperformed existing complex-valued networks, motivating us to evaluate it further on complex-valued datasets. As shown in Table 1(b), our method demonstrated superior performance in those evaluations as well.\\n\\n- In Table 1(a), we compared with some relevant and well-known real-valued networks too as baselines. While we acknowledge the existence of more advanced image classification networks, our goal was not to achieve state-of-the-art performance in image classification. We just needed a decent token-based complex-valued network to serve as backbone to our main network, and we believe we were able to achieve this, as the results were comparable with the baselines chosen.\"}", "{\"title\": \"Author response to reviewer BUvX (part 5)\", \"comment\": \"## Question 2 & Weakness 4: Computational load & Saxon Genitive\\n\\n### 1. Computational load\\n\\nThank you for the suggestion. 
In the table below, we present a comparison of FLOPs, GPU memory usage, and inference time (for a batch size of 1) between our proposed DCSNet and FCCN (ResNet152), the previous state-of-the-art complex-valued network on ImageNet.\", \"for_classification\": \"| | FCCN | DCSNet|\\n|---|------|-------|\\n|gFLOPS $\\\\downarrow$ | 28 | **6** |\\n|GPU memory (MB) $\\\\downarrow$| 330 | **132**|\\n| Time (ms) $\\\\downarrow$| 61 | **13** |\\n\\n&nbsp;\\n\", \"for_binary_segmentation\": \"| | FCCN | DCSNet|\\n|---|------|-------|\\n| gFLOPS $\\\\downarrow$ | 59 | **22** |\\n|GPU memory (MB) $\\\\downarrow$| 573 | **289** |\\n| Time (ms) $\\\\downarrow$| 84 | **30** |\\n\\n*Note: **Bold** indicates better result.*\\n\\n&nbsp;\\n### 2. Saxon Genitive\\n\\nWe are thankful for your valuable suggestion and feedback. We have removed all Saxon genitives from the revised manuscript.\"}", "{\"title\": \"Author response to reviewer BUvX (part 3)\", \"comment\": \"## Weakness-2 & Question-1: Further analysis of R2C transform and its invertibility\\n\\n\\n- We appreciate the acknowledgment of the novelty and advantages of our RGB-to-iRGB (R2C) transform, including its invertibility and the ability to visualize its components as grayscale images. To demonstrate its effectiveness, we originally included a demo in the supplementary material at the time of submission. This demo showcased two sample images, which were converted into our complex color domain (iRGB) using the R2C transform and then reconstructed back to the original images using the inverse-R2C transform.\\n\\n- Following your suggestion for further analysis, we have now provided additional examples and included SSIM scores in a PDF (R2C_demo.pdf) to quantitatively measure the similarity between the original and reconstructed images. 
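This round-trip validation can be reproduced for any candidate real-to-complex map. As an illustration of the protocol only (our actual R2C/iRGB transform follows Algorithm 2 in Appendix A and is not reproduced here), the simple [R+iG, G+iB] encoding discussed in this thread passes the same lossless-reconstruction check:

```python
import numpy as np

def r2c_trivial(img):
    """Map an HxWx3 real RGB image to two complex channels [R+iG, G+iB].
    This simple encoding from the review thread is used only to illustrate
    the round-trip check; it is NOT our iRGB/R2C transform."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.stack([r + 1j * g, g + 1j * b], axis=-1)

def c2r_trivial(z):
    """Invert r2c_trivial: recover R, G, and B from the two complex channels."""
    return np.stack([z[..., 0].real, z[..., 0].imag, z[..., 1].imag], axis=-1)

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))       # toy RGB image with values in [0, 1]
rec = c2r_trivial(r2c_trivial(img))
assert np.array_equal(rec, img)   # lossless round trip (SSIM between img and rec is 1)
```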
These results, presented in the updated supplementary material, consistently show an SSIM score of 1, thereby validating our claim that the R2C transform is perfectly invertible.\\n\\n\\n- Our R2C transform is not only novel but also provides some additional benefits compared to the existing complex-valued color models and encodings. For example, quaternion neural networks [1] have RGB image inputs in the form of $0 + iR + jG + kB$ to create a quaternion representation, which artificially forces the real part to be zero. Similarly, CDS [3] proposes to uses Lab color space to generate complex input as: $L+i0$ and $a+ib$. Here, the imaginary component of the first complex channel is also artificially set to zero. In contrast, in our case, we managed to derive complex channels naturally by locating colors on the two kinds of argand planes we discovered in the RGB color space. \\n\\n- In a slightly different approach, DCN [2] generated complex input $I + i f(I)$ from an image $I$ by using a convolutional block f. The idea is to learn imaginary part from the real part itself. However, this introduces dependency between the real and imaginary components, which should not be the case. In our case, however, the real and imaginary components are derived from the orthogonal components of a vector, which ensures they are independent of each other. \\n \\n\\n[1] Zhu, X., Xu, Y., Xu, H., & Chen, C. (2018). Quaternion convolutional neural networks. In Proceedings of the European conference on computer vision (ECCV)\\n\\n[2] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks\\\", ICLR 2017.\\n\\n[3] Singhal, U., Xing, Y., & Yu, S. X. (2022). Co-domain symmetry for complex-valued deep learning. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.\"}", "{\"comment\": \"Dear reviewers,\\n\\nA reminder that **November, 26** is the last day to interact with the authors, before the private discussion with the area chairs. At the very least, please acknowledge having read the rebuttal (if present). If the rebuttal was satisfying, please improve your score accordingly. Finally, if you have concerns that might be solved shortly, please exploit the remaining time.\\n\\nThanks,\\nThe AC\"}", "{\"title\": \"Author response to reviewer QqAh (part 4)\", \"comment\": \"## Weakness 3 [part 1 of 2]: DCSNet Architecture\\n\\n1. In DCSNet, we maintain a complex-valued representation throughout the network, which requires careful design to balance computational efficiency and representational power. Below, we explain why Fourier filters were chosen and how they act globally:\\n\\n Fourier filters are inherently global because their operation spans all frequencies simultaneously through element-wise multiplication in Fourier domain, which inherently incorporates information across the entire image due to the global nature of the Fourier transform.\\n\\n Using complex-valued self-attention to handle the complex-valued inputs in DCSNet would require additional multiplication operations, which significantly increases computational complexity. Following [1], we explored alternatives and presented a comparison in Appendix D. Fourier filters offer a computationally efficient solution ([2]), as they leverage the properties of the Fourier domain to handle global information with lower computational overhead compared to self-attention mechanisms.\\n \\n \\n\\n&nbsp;\\n&nbsp;\\n\\n2. We appreciate the reviewer\\u2019s observation regarding the use of dense tokens and their purpose in our method. 
Below, we clarify the role of dense tokens and the rationale for their inclusion in our design:\\n\\n \\n    Dense tokens are additional learnable tokens specifically designed for dense prediction tasks, such as predicting a binary mask at different resolutions. Rather than altering or replacing the image patch tokens, dense tokens interact with these patch tokens during the forward pass to learn task-specific, image-dependent embeddings. This mechanism is inspired by transformer-based methods ([3], [4]) that utilize such tokens for specific downstream tasks.\\n    By maintaining the dense tokens as separate embeddings, we ensure that the image information remains intact while enabling the network to learn representations tailored for dense output predictions.\\n \\n \\n    We agree that high-resolution Fourier filters could potentially capture localized features, and that is why we have incorporated a multi-scale architecture, with initial layers of encoder and final layers of decoder having high-resolution Fourier filters. We would like to clarify that we called Fourier filters global only due to the kind of operation they perform. Fourier filters can capture local information as well. \\n\\n    For example, even a small filter of size 3\\u00d73 in the spatial domain, capable of capturing local features, must first be zero-padded to match the image dimensions before being transformed into the Fourier domain to produce a filter of the same size for multiplication. We hypothesize that the network can learn to construct such Fourier filters that, when interpreted in the spatial domain, effectively have a smaller receptive field, enabling the extraction of local features when needed.\\n    On the other hand, dense tokens are explicitly trained to learn spatially-aware embeddings required for downstream dense prediction tasks, using features (be it global or local) learned using Fourier filters. 
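The element-wise frequency-domain filtering and the zero-padded 3x3 example described above can be sketched as follows (shapes and names are illustrative, not our exact DCSNet layer):

```python
import numpy as np

def fourier_filter(feat, weight):
    """Global filtering step: FFT an HxWxC complex feature map, multiply it
    element-wise by a learnable complex filter of the same shape, and transform
    back. Costs O(HW log HW) per channel, versus quadratic for convolution."""
    return np.fft.ifft2(np.fft.fft2(feat, axes=(0, 1)) * weight, axes=(0, 1))

# A filter that is the FFT of a zero-padded 3x3 spatial kernel acts locally,
# showing that a "global" Fourier filter can still realize a small receptive field.
H, W, C = 16, 16, 4
kernel = np.zeros((H, W, C), dtype=np.complex64)
kernel[:3, :3, :] = 1.0 / 9.0                    # 3x3 box filter, zero-padded to HxW
weight = np.fft.fft2(kernel, axes=(0, 1))
out = fourier_filter(np.ones((H, W, C), dtype=np.complex64), weight)
assert np.allclose(out.real, 1.0)                # box-filtering a constant map returns it
```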
\\n\\n In summary, we would like to clarify that Fourier filters are used for feature learning, and dense tokens help in executing downstream tasks. \\n \\n We conducted an ablation study (Table 6 in the manuscript) to evaluate the impact of removing dense tokens. This study demonstrated a performance degradation when dense tokens were not used, highlighting their importance for accurate dense prediction.\\n\\n&nbsp;\\n&nbsp;\\n\\n\\n[1] Eilers, F., & Jiang, X. (2023, June). Building Blocks for a Complex-Valued Transformer Architecture. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)\\n\\n[2] Rao, Y., Zhao, W., Zhu, Z., Lu, J., & Zhou, J. (2021). Global filter networks for image classification. Advances in neural information processing systems\\n\\n[3] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. ICLR.\\n\\n[4] Liu, N., Zhang, N., Wan, K., Shao, L., & Han, J. (2021). Visual saliency transformer. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4722-4732).\"}", "{\"comment\": \"Thank you for the clarification. The authors have addressed most of my concerns. However, I believe that multi-scale wavelet filters could offer potential improvements over Fourier filters, which could be a worthwhile direction for future exploration. I am raising my score to 6, albeit with reduced confidence.\"}", "{\"summary\": \"The work proposes a complex-value deep neural network for computing vision tasks. First, the authors propose an inevitable real-to-complex transformation. 
Then, the work proposes an architecture comprising spectral convolution and a complex T2T module.\\nThe authors evaluated their model on image classification, smooth object detection, and defocus blur detection.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The work proposes a novel invertible real-to-complex conversion for RGB images in complex-valued neural networks. The procedures are clearly stated using pseudocode and figures.\", \"weaknesses\": \"1. The authors seem to miss the seminal work on complex values networks and didn\\u2019t compare/discuss with the techniques discussed in [1]\\n2. The work is directed at using complex-valued networks for real-valued images. The majority of the paper involves devising an invertible conversion from real to complex representation. However, the paper fails to demonstrate its utility. For example, for classification on ImageNets, the authors did not consider state-of-the-art models, such as Vit, Swin-v2, etc., which achieve above 90% accuracy. \\n\\n3. The paper does not discuss the motivation of the specific real to complex conversion. There are many invertible conversions between real and complex.\\n\\n[1] DEEP COMPLEX NETWORKS\", \"questions\": \"1. Line 317: Citation link broken for T2T-ViTYuan et al. (2021)\\n\\n2. Line 269: Do you use complex-valued \\u201c normalization\\u201d as discussed in [1]\\n\\n3. How well does the model perform if we consider trivial real to complex conversion that considers the real numbers as complex numbers with $0$ imaginary part? This is also a very crucial ablation that the authors should perform.\\n\\n4. 
Does using spectral convolution make it challenging to capture local features as it performs global convolution?\\n\\n\\n[1] DEEP COMPLEX NETWORKS\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author response to reviewer F89z (part 1)\", \"comment\": \"## Weakness 1 & Question 2: Discussion and comparison with seminal work\\n\\nThank you for pointing out that we missed discussing DCN [1], which is indeed a seminal work in this domain and should have been included in our discussion. DCN introduced foundational concepts for complex-valued neural networks, including complex-valued convolution, activation functions, and normalization. However, we note that DCN essentially operates in the real domain for both input and output. Specifically, it assumes images as real-valued input and then derives the imaginary part from that very real-valued input (refer to our response to reviewer BUvX [in part 4] for further clarification). Additionally, the complex-valued nature of DCN is restricted to the convolutional base. The real and imaginary components are concatenated and passed to real-valued fully connected layers. Furthermore, its evaluation is limited to image classification tasks (as far as the computer vision domain is concerned), primarily on small-scale datasets.\\n\\nIn contrast, our work addresses the challenge of building a fully complex-valued pipeline, where all components\\u2014from input to output, including the loss function\\u2014are in the complex domain. While FCCN [2] also addresses these by taking a fully convolutional approach, proposing the iHSV color space and a complex-valued loss function, its scope is again limited to image classification. Our work moves beyond this by adopting a token-based architecture inspired by transformers, utilizing Fourier filters as fundamental building blocks in both the encoder and decoder. 
Crucially, we focus on binary segmentation problems, which are more challenging.\\n\\nAs we needed a backbone trained on a large dataset for building our binary segmentation network, we trained a robust complex-valued encoder on ImageNet, achieving the best results to date for any complex-valued network. The comparative results are summarized in the table below:\\n\\n| DCN [1] | FCCN [2] | DCSNet (Ours) |\\n|------|--------|--------|\\n| 72.6 | 77.3 | **78.8**|\\n\\nNote that the ImageNet results for DCN [1] were taken from FCCN [2], which we missed in the original version. This oversight has been corrected now in the paper as well. Finally, we would like to confirm that we used complex-valued normalization, as described in DCN [1].\", \"references\": \"[1] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks,\\\" ICLR 2018.\\n\\n[2] Saurabh Yadav, Koteswar Rao Jerripothula, \\\"FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function,\\\" ICCV 2023.\"}", "{\"title\": \"Author response to reviewer B2bV\", \"comment\": \"Thanks a lot for your encouraging comments and the rating.\\n\\n## Weakness: Interpretation of Benchmark Results\\n\\n**Comparison with Other Complex-Valued Neural Networks:** To address your concern, we computed the average of the best accuracies achieved by existing complex-valued neural networks and compared them with our results. The averages were 87.55% for existing methods and 90.39% for our approach. This represents a significant improvement of 2.84 percentage points. 
Importantly, we also surpassed the 90% threshold, underscoring the potential of complex-valued neural networks in the computer vision domain and highlighting the significance of our contributions.\\n\\n**Comparison with Real-Valued Counterparts:** We would like to clarify that the real-valued networks mentioned in the paper as competitors are not exact real-valued counterparts due to differences in network size, network-type, pre-training methods, and other factors. Following Reviewer BUvX's suggestion, we have now conducted a direct comparison with actual real-valued counterparts, ensuring parity in configuration. As shown in the tables below, our approach demonstrates superior performance compared to these actual real-valued counterparts.\", \"for_salient_object_detection\": \"| Dataset | | DUTS | | | | HKU-IS | | | | ECSSD | | |\\n|---------------|---------|--------|-----------|---------|---------|--------|-----------|---------|---------|--------|-----------|---------|\\n| | $S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ | $S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |$S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |\\n| Real-DCSNet | 0.864 | 0.832 | 0.912 | 0.043 | 0.908 | 0.920 | 0.948 | 0.041 | 0.907 | 0.912 | 0.956 | 0.034 |\\n| DCSNet | **0.894** | **0.874** | **0.941** | **0.039** | **0.927** | **0.945** | **0.960** | **0.034** | **0.917** | **0.924** | **0.967** | **0.029** |\\n\\n\\n\\n\\n| Dataset | | PASCAL-S | | | | DUT-O | | |\\n|---------|---------|:-----------:|-----------|---------|---------|--------|-----------|---------|\\n| |$S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |$S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | $MAE\\\\downarrow$ |\\n|Real-DCSNet| 0.847 | 0.841 | 0.896 | 0.069 | 0.817 | 0.747 | 0.851 | 0.062 |\\n|DCSNet| **0.866** | 
**0.850** | **0.903** | **0.062** | **0.839** | **0.776** | **0.880** | **0.056** |\\n\\n&nbsp;\", \"for_defocus_blur_detection\": \"| Method | DUT | | CUHK | |\\n|------|-----|-----|-----|-----|\\n| | $\\\\mathcal{F}_{\\\\beta} \\\\uparrow$ | MAE$\\\\downarrow$ | $\\\\mathcal{F}_{\\\\beta} \\\\uparrow$ | MAE$\\\\downarrow$ |\\n| Real-DCSNet | 0.841 | 0.121 | 0.899 | 0.064 |\\n| DCSNet | **0.894** | **0.058** | **0.907** | **0.045** |\\n\\n&nbsp;\", \"for_shadow_detection\": \"| Method | ISTD | SBU |\\n|-----|-------|-------|\\n| | BER $\\\\downarrow$ | BER $\\\\downarrow$ |\\n| Real-DCSNet | 1.83 | 3.45 |\\n| DCSNet | **1.49** | **3.05** |\\n\\n&nbsp;\", \"for_insar_foreground_extraction\": \"| Method | mIoU $\\\\uparrow$|\\n|-------|---------|\\n|Real-DCSNet | 0.85 |\\n|DCSNet| **0.89** |\\n\\n\\nWe have also updated our Table 2 with the average results (shown below) obtained over the 5 datasets used. While our method achieves the best performance in terms of the maxF metric, it consistently ranks at least second-best across all metrics\\u2014a distinction no other method in the comparison achieves.\\n\\n| Method | $S_m \\\\uparrow$ | $maxF\\\\uparrow$ | $E_{\\\\xi}^{max} \\\\uparrow$ | $MAE\\\\downarrow$ |\\n|--------|--------|--------|--------|--------|\\n|PiCANet | 0.871 | 0.854 | 0.913 | 0.046 |\\n|BASNet | 0.872 | 0.856 | 0.909 | 0.051 |\\n|PoolNet | 0.879 | 0.864 | 0.914 | 0.049 |\\n|EGNet-R | 0.886 | 0.868 | 0.918 | 0.048 |\\n|MINet-R | 0.885 | 0.868 | 0.919 | 0.046 |\\n|LDF-R | 0.892 | 0.876 | 0.921 | `0.043` |\\n|CSF-R2 | 0.892 | 0.874 | 0.911 | 0.049 |\\n|GateNet-R | 0.888 | 0.874 | 0.922 | 0.047 |\\n|VST | `0.904` | **0.894** | `0.932` | 0.045 |\\n|FCCN | 0.824 | 0.813 | 0.878 | 0.078 |\\n|SCVUNet | 0.831 | 0.827 | 0.893 | 0.069 |\\n|DCSNet (Ours) | **0.893** | `0.895` | **0.930** | **0.044**| \\n\\n*Note: `Red` indicates best result, and **bold** indicates second best result.*\"}
\"## Weakness 2: R2C vs existing alternatives\\n\\n**Apologies for the delayed responses; the lead author was not doing well. We will respond to the question on DCSNet architecture as well in few hours.** \\n \\nWe appreciate the comment on comparison with well-established methods of complex-valued image generation. While techniques such as quaternion representation, complex logarithmic transformation, and the Hilbert transform are indeed powerful tools, our proposed R2C method offers some key advantages, which we discuss below. \\n\\n**Intuitive Mapping of Color to Complex Plane:** Our R2C method provides a more intuitive and geometrically grounded mapping of colors to the complex domain. By identifying two distinct Argand planes in the RGB color space itself, our method captures both the color relationship with respect to the grayscale (grayline) and the spatial positioning of the color within the RGB cube, resulting in a complex-valued color model called iRGB. We hardly have any complex-valued color models except iHSV (proposed in FCCNs paper [1]), and our experiments have shown better performance of our iRGB over iHSV (see Table 7). \\n\\n**Richer & Dual Complex Representations:** This approach enables us to represent each color as two complex numbers, which provides a richer representation than typical methods which only focus on one color channel (e.g., Hilbert transform) at a time or a single transformation (e.g., using quaternions, where a 4D complex number [$0 + iR + jG + kB$] is employed without considering the inherent color space geometry). The two complex numbers derived from the Argand planes allow our color model (iRGB) to preserve more nuanced information about color differences and the relationship between color and luminance. Through this dual complex representation, our method captures the perceptual relationship between luminance and chrominance in a way that is not explicitly addressed by the quaternion or Hilbert transform approaches. 
This makes our R2C method applicable to a broader range of image processing algorithms.\\n\\n**Computational Efficiency:** The R2C method transforms real-valued images to complex-valued images by processing each pixel independently, resulting in a computational complexity of $O(n)$ and making it highly parallelizable. In contrast, methods like the Hilbert transform have higher computational costs, at least $O(n\\\\log n)$. This efficiency makes R2C well-suited for large-scale image processing tasks.\\n\\n**Flexibility in Applications:** While methods like quaternion representation or complex logarithmic transformations typically focus on specific kinds of data (e.g., 3D rotations or magnitude-phase representations), our method is more flexible in mapping any color into a pair of complex numbers, making it a general framework for complex-valued image generation.\\n\\nIn summary, while quaternion representations and other well-established transformations have their place in the broader context of complex-valued image processing, the R2C method offers a unique advantage by providing an intuitive, dual-complex, efficient representation of color that better captures the geometry of the RGB color space, while lending itself to a wider range of application domains and image processing algorithms. We believe this contribution significantly advances the ability to generate complex-valued images from real-valued ones.\\n\\nWe hope this clarifies the distinction between our method and existing alternatives and demonstrates its utility in the context of complex-valued image generation. We will get back to you on our architecture soon.\\n\\n[1] Saurabh Yadav; Koteswar Rao Jerripothula, FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function. 
ICCV 2023.\"}", "{\"title\": \"Author Response to additional comments of Reviewer F89z (part 2)\", \"comment\": \"## (Part 2) Advantages of iRGB over others\\nAs Reviewer QqAh also raised the same point regarding iRGB's advantages over other representations, we have now listed them below: \\n\\n**Intuitive Mapping of Color to Complex Plane:** Our R2C method provides a more intuitive and geometrically grounded mapping of colors to the complex domain. By identifying two distinct Argand planes in the RGB color space itself, our method captures both the color relationship with respect to the grayscale (grayline) and the spatial positioning of the color within the RGB cube, resulting in a complex-valued color model called iRGB. We hardly have any complex-valued color models except iHSV (proposed in FCCNs paper [1]), and our experiments have shown better performance of our iRGB over iHSV. \\n\\n**Richer & Dual Complex Representations:** This approach enables us to represent each color as two complex numbers, which provides a richer representation than typical methods which only focus on one color channel (e.g., Hilbert transform) at a time or a single transformation (e.g., using quaternions, where a 4D complex number [$0 + iR + jG + kB$] is employed without considering the inherent color space geometry). The two complex numbers derived from the Argand planes allow our color model (iRGB) to preserve more nuanced information about color differences and the relationship between color and luminance. Through this dual complex representation, our method captures the perceptual relationship between luminance and chrominance in a way that is not explicitly addressed by the quaternion or Hilbert transform approaches. 
This makes our R2C method applicable to a broader range of image processing algorithms.\\n\\n**Computational Efficiency:** The R2C method transforms real-valued images to complex-valued images by processing each pixel independently, resulting in a computational complexity of $O(n)$ and making it highly parallelizable. In contrast, methods like the Hilbert transform have higher computational costs, at least $O(n\\\\log n)$. This efficiency makes R2C well-suited for large-scale image processing tasks.\\n\\n**Flexibility in Applications:** While methods like quaternion representation or complex logarithmic transformations typically focus on specific kinds of data (e.g., 3D rotations or magnitude-phase representations), our method is more flexible in mapping any color into a pair of complex numbers, making it a general framework for complex-valued image generation.\\n\\nIn summary, while quaternion representations and other well-established transformations have their place in the broader context of complex-valued image processing, the R2C method offers a unique advantage by providing an intuitive, dual-complex, efficient representation of color that better captures the geometry of the RGB color space, while leading to wider application domains and applications in various image processing algorithms. We believe this contribution significantly advances the ability to generate complex-valued images from real-valued ones.\\n\\n\\n**Advantages of iRGB over simple [R+iG,G+iB] representation**\\n\\nThe representation $[R+iG,G+iB]$ is a relatively naive approach and disregards key principles of color model design. Below, we highlight three specific advantages of our iRGB model:\\n\\n*Independence of Color Components*: In $[R+iG,G+iB]$, the imaginary part of the first component ($G$) is tied to the real part of the second component ($G$). This dependency inhibits the independent processing of color components. 
In contrast, our iRGB representation utilizes two fully independent components, $||v||e^{i\\\\theta}$ and $||u||e^{i\\\\phi}$, allowing greater flexibility and more effective color representation.\\n\\n*Unbiased Representation*: The $[R+iG,G+iB]$ approach inherently biases the green ($G$) color channel due to its repetition, potentially leading to suboptimal results. In comparison, iRGB avoids such biases, providing a balanced and unbiased representation across color channels.\\n\\n*Intuitive Meaning*: The $[R+iG,G+iB]$ representation lacks a clear and intuitive physical or conceptual interpretation. On the other hand, iRGB directly encodes both intensity and color information, offering a more meaningful and interpretable representation as detailed earlier.\\n\\nWe hope this clarifies various advantages our iRGB offers over others. \\n\\n[1] Saurabh Yadav; Koteswar Rao Jerripothula, FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function. ICCV 2023.\"}", "{\"title\": \"Author response to reviewer F89z (part 3)\", \"comment\": \"## Weakness 2: Comparison with SOTA Image Classification Results\\n\\nAs stated earlier, the primary objective of our work is to address **binary segmentation in the complex domain**. Achieving this requires a decent complex-valued backbone trained on a sufficiently large dataset, such as ImageNet. To this end, we trained our DCSNet encoder on ImageNet-1k, which demonstrated superior performance compared to the existing state-of-the-art complex-valued method for image classification, FCCN [2], as shown in Table 1 of our paper. Importantly, this improvement was achieved with fewer parameters.\\n\\nWhile we included some baselines from the real domain for reference in Table 1, our goal was never to achieve state-of-the-art (SOTA) image classification results overall. 
Rather, our aim was to obtain the **best binary segmentation results in the complex domain**, which we have consistently achieved (see Tables 2-5), as acknowledged by reviewer B2bV as well. Notably, our approach **excels on both real-valued and complex-valued datasets**, underscoring the versatility of our proposed method\\u2014a capability that real-valued networks cannot achieve without compromising the inherent complex-valued nature of the data. Additionally, while our primary focus was binary segmentation, in the process, our work also produced the best results in the complex domain for image classification tasks across both real-valued and complex-valued datasets (refer to Table 1).\\n\\n\\n\\nMoreover, modern state-of-the-art methods like ViT and Swin-V2 are designed and optimized specifically for image classification. These networks are trained on significantly larger datasets, such as JFT-3B and ImageNet-22k, and leverage over a billion parameters [1]. They are then fine-tuned on ImageNet-1k, achieving accuracies close to 90%. In contrast, our model was trained directly on ImageNet-1k, without leveraging such large-scale datasets or model sizes. Given this difference in objectives and resources, we feel it is not a fair comparison to benchmark our model against such methods, as image classification SOTA was never our goal. We just needed a good enough complex-valued backbone/encoder. \\n\\nInstead of allocating our limited computational resources toward achieving SOTA results on image classification\\u2014a task beyond our focus\\u2014we have dedicated our efforts toward building the **binary segmentation networks in the complex domain** that this work aims to deliver.\", \"references\": \"[1] Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., ... & Guo, B. (2022). Swin transformer v2: Scaling up capacity and resolution. 
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition.\\n\\n[2] Saurabh Yadav; Koteswar Rao Jerripothula, FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function. ICCV 2023.\"}", "{\"metareview\": \"The paper considers binary segmentation with a complex-valued neural network. It introduces several novel components, including (a) a real-to-complex transformation of the RGB values, (b) a Fourier module, and (c) a segmentation loss in the complex domain. Of the 3 reviews with confidence > 2, the paper had a negative review round with all 3 reviewers recommending rejection. After the rebuttal phase, 1 reviewer is still highly critical of the work, while the other 2 have retained a marginal accept score.\\n\\nIn general, there are several issues that were not fully clarified in the rebuttal phase. In particular, (a) the novelty of the method is not clear, especially since the authors have decided to focus on a narrow computer vision use case; (b) the results when comparing to a real-valued scenario are not convincing; (c) many experiments were not included in the main paper, and (d) some reviewers claim that a more complete mathematical analysis of the conversion is missing. \\n\\nOn my side, I believe the paper is interesting for the community working on complex-valued models, but the narrow scope and unconvincing rebuttal makes me lean towards a rejection. I believe the paper can be significantly strengthened by the inclusion of the comments that were discussed in the rebuttal phase, but this would require a significant amount of work that could also change the structure or claims of the paper.\", \"additional_comments_on_reviewer_discussion\": [\"**Reviewer B2bV**: they claimed to have limited experience in the field, and they haven't interacted during the rebuttal phase. Their score significantly differed from the other reviewers. 
*I ignored most of the review for the final decision.*\", \"**Reviewer QqAh**: they were concerned mostly about the limited novelty of the approach, and about a lack of evaluation of competing conversion approaches (such as complex wavelets). During the rebuttal they improved the score from a 5 to a 6, but were still leaning towards rejection due to the limited novelty.\", \"**Reviewer F89z**: they highlighted multiple concerns, including the lack of novelty, the limited mathematical analysis of the solution, and the unconvincing results. The authors provided a lengthy rebuttal, but the reviewer remained unconvinced on most points and keeps recommending rejection. *This review and the reviewer's comments have heavily influenced my final decision*, and other reviewers agreed with F89z in the final discussion.\", \"**Reviewer BUvX** had similar concerns with respect to F89z (unconvincing results, limited novelty, limited analysis of the methods). While the rebuttal was more convincing (and they increased the score from 5 to 6), they remain concerned that most of the analysis shown in the rebuttal does not appear in the main paper. *I agree with this evaluation*, as reflected in the metareview.\"]}", "{\"title\": \"Author response to reviewer QqAh (part 2)\", \"comment\": \"## Weakness 4 & 5: Resolution tokens & Table 7 clarification\\n\\n\\n**Resolution tokens**\\n\\nUsing exactly four resolutions was an architectural design choice inspired by prior works [1], [2], and [3], which effectively leverage four distinct resolutions for multi-scale feature representation.\\n\\nTo address your concern, we conducted additional experiments using three and five resolutions, and we present the results in the tables below. The findings show that using five resolutions improves some metrics (e.g., maxF and MAE), while slightly lowering others (e.g., $S_m$ and $E_{\\\\xi}^{max}$). 
Conversely, using three resolutions resulted in a slight performance drop across all datasets and metrics compared to the four-resolution setting. This suggests that four resolutions provide a balanced trade-off across all evaluation criteria.\\nExperiment with 3,4 and 5 resolutions:\\n\\n| Dataset | | DUTS | | | | ECSSD | | | | HKU-IS | | |\\n|---------------|---------|--------|-----------|---------|---------|--------|-----------|---------|---------|--------|-----------|---------|\\n| | $S_m \\\\uparrow$ | maxF$\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | MAE$\\\\downarrow$ | $S_m \\\\uparrow$ | maxF$\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | MAE$\\\\downarrow$ | $S_m \\\\uparrow$ | maxF$\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | MAE$\\\\downarrow$ |\\n| 3 resolutions | 0.881 | 0.857 | 0.929 | 0.041 | 0.919 | 0.920 | 0.951 | 0.034 | 0.903 | 0.908 | 0.955 | 0.031 |\\n| 4 resolutions | **0.894** | 0.874 | **0.941** | **0.039** | **0.927** | 0.945 | **0.960** | **0.034** | **0.917** | **0.924** | **0.967** | **0.029** |\\n| 5 resolutions | 0.893 | **0.876** | 0.939 | **0.039** | **0.927** | **0.946** | 0.957 | 0.035 | 0.913 | **0.924** | 0.966 | **0.029** |\\n\\n\\n\\n| Dataset | | PASCAL-S | | | | DUT-O | | |\\n|---------|---------|--------|-----------|---------|---------|--------|-----------|---------|\\n| | $S_m \\\\uparrow$ | maxF$\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | MAE$\\\\downarrow$ | $S_m \\\\uparrow$ | maxF$\\\\uparrow$ | $E_{\\\\xi}^{max}\\\\uparrow$ | MAE$\\\\downarrow$ |\\n| 3 resolutions | 0.851 | 0.833 | 0.897 | 0.064 | 0.819 | 0.756 | 0.848 | 0.058 |\\n| 4 resolutions | **0.866** | 0.850 | **0.903** | **0.062** | **0.839** | 0.776 | **0.880** | **0.056** |\\n| 5 resolutions | 0.862 | **0.852** | **0.903** | 0.064 | 0.835 | **0.779** | 0.879 | **0.056** |\\n\\n\\nWhile using five resolutions can provide slight improvements in specific metrics, the four-resolution setup balances performance across datasets and metrics and aligns with 
prior research. We appreciate your suggestion, as it allowed us to validate and demonstrate the robustness of our approach under different architectural configurations.\\n\\n[1] Huang, Z., Dai, H., Xiang, T. Z., Wang, S., Chen, H. X., Qin, J., & Xiong, H. (2023). Feature shrinkage pyramid for camouflaged object detection with transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5557-5566).\\n\\n[2] Liu, N., Zhang, N., Wan, K., Shao, L., & Han, J. (2021). Visual saliency transformer. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4722-4732).\\n\\n[3] Wang, W., Xie, E., Li, X., Fan, D. P., Song, K., Liang, D., ... & Shao, L. (2021). Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 568-578).\\n\\n&nbsp;\\n&nbsp;\\n\\n\\n**Table 7 clarification**\\n\\nThank you for your careful observation. The term was indeed a typo. We have rectified this to $\\\\mathcal{L}_{idense}$ in the revised manuscript.\"}", "{\"comment\": \"Thank you for explaining the differences between DCSNet and FCCN. However, I still do not fully understand the advantages of the proposed Complex-Valued Image Generation (R2C) method compared to other established alternatives, such as quaternion representation, complex logarithmic transformation, and the Hilbert transform. Perhaps I am missing a critical point. Could you clarify what makes the R2C method novel or unique?\\n\\nAdditionally, regarding Question 3 on the DCSNet architecture, I did not find the response sufficiently clear or detailed to address my concerns. This may stem from my limited expertise in this area. As such, I have decided to retain my original score, albeit with lower confidence.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"## (Part 1)\\nThanks for the rebuttal. \\n\\n1. 
\\\"However, we note that DCN essentially operates in the real domain for both input and output.\\\" ---- In Section 4.2 of [1] they perform music spectrum prediction. I believe that is complex to complex mapping. Correct me if I am wrong.\\n\\n2. \\\" Additionally, the complex-valued nature of DCN is restricted to the convolutional base.\\\" --- Could you please clarify?\\n\\n3. \\\"The real and imaginary components are concatenated and passed to real-valued fully connected layers.\\\" --- There is no difference in representing a complex number as $a+ib$ or $[a,b]$ as long as you model the operations of a complex number of, for example, see Eq 2 and Eq 6 of [1]. These are different from just concatenating the real and imaginary parts and passing them as real-valued input to the model.\\n\\n\\n\\n\\n\\n[1] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks,\\\" ICLR 2018.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Author response to reviewer F89z (part 4)\", \"comment\": \"## Question 4 & Question 1: Capturing local features & Citation link issue\\n\\nFourier filters are inherently global, whether they are low-pass, high-pass, Butterworth, or notch filters, as their operation spans all frequencies simultaneously through multiplication in the Fourier domain. Even a small filter of size 3\\u00d73 in the spatial domain, capable of capturing local features, must first be zero-padded to match the image dimensions before being transformed into the Fourier domain to produce a filter of the same size for multiplication.\\n\\nOur hypothesis is that the network can learn to construct such Fourier filters that, when interpreted in the spatial domain, effectively have a smaller receptive field, enabling the extraction of local features when needed. 
Furthermore, our network employs multi-scale skip connections and multi-scale feature learning, facilitating feature learning at both global and local levels, as demonstrated in [1], [2], [3], and [4].\\n\\n\\nThank you for pointing out the issue with the citation link; we have corrected it in the revised manuscript.\", \"references\": \"[1] Nian Liu, Ni Zhang, Kaiyuan Wan, Ling Shao, and Junwei Han. Visual saliency transformer. IEEE/CVF International Conference on Computer Vision, ICCV 2021\\n\\n\\n[2] Wang, W., Xie, E., Li, X., Fan, D. P., Song, K., Liang, D., ... & Shao, L. (2021). Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF international conference on computer vision.\\n\\n\\n[3] Zhang, W., Huang, Z., Luo, G., Chen, T., Wang, X., Liu, W., ... & Shen, C. (2022). Topformer: Token pyramid transformer for mobile semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.\\n\\n\\n[4] Yan, X., Tang, H., Sun, S., Ma, H., Kong, D., & Xie, X. (2022). After-unet: Axial fusion transformer unet for medical image segmentation. In Proceedings of the IEEE/CVF winter conference on applications of computer vision.\"}", "{\"title\": \"Author Response to additional comments of Reviewer F89z (part 3)\", \"comment\": \"## Part 3\\n\\nBy \\\"best binary segmentation results in the complex domain,\\\" we meant the best binary segmentation results achieved using complex-valued networks.\\n\\nThere are two reasons why we chose the background to be represented by '$i$' instead of '$0$', while keeping the foreground as '$1$':\\n\\n*Complex-Valued Ground Truth Encoding*: To compare our complex output against the ground truth, a complex-valued encoding of the ground truth was required. 
We encoded the binary segmentation map $y$ as $y + i(1-y)$, which results in background pixels being represented by '$i$'.\\n\\n*Foreground and Background Independence*: Similar to the real and imaginary parts of a complex number, in the physical world, the foreground and background are conceptually independent of each other. This analogy inspired us to represent the foreground as the real part and the background as the imaginary part of the image. This convention appeared well-suited for addressing the binary image segmentation problem in the complex domain. \\n\\n&nbsp;\\n\\n**Local Feature Learning:**\\n\\nThank you for pointing out reference [1], which proposed a localized Fourier layer for efficiently learning local features. The approach constrains the degrees of freedom of a Fourier filter to ensure that the corresponding spatial kernel is spatially localized. This is indeed an interesting idea, and we would be glad to explore it in our future work.\\n\\n[1] Rahman, M. A., & Yeh, R. A. (2023). Truly scale-equivariant deep nets with Fourier layers. Advances in Neural Information Processing Systems, 36, 6092-6104.\"}", "{\"summary\": \"This paper introduces the Deep Complex Spatio-Spectral Network (DCSNet), a fully complex-valued, token-based neural network developed for end-to-end foreground extraction and adaptable for image classification. Extensive experiments show that DCSNet surpasses current complex-valued approaches across various tasks with real and complex-valued data, achieving results on par with leading real-valued models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents a novel complex-valued neural network, the Deep Complex Spatio-Spectral Network (DCSNet), a fully complex-valued, token-based, end-to-end architecture designed for foreground extraction and adaptable to image classification tasks. 
Extensive experiments demonstrate that DCSNet outperforms existing complex-valued methods across diverse tasks involving both real and complex-valued data, achieving competitive results relative to state-of-the-art real-valued models. The paper is well-written, concise, and easy to follow.\", \"weaknesses\": \"Novelty: The novelty of this paper appears questionable. The authors claim that they propose the first token-based complex-valued network that maintains complex-valued information throughout. However, the fully Complex-valued Convolutional Network (FCCN) [1] also processes complex-valued data through the entire model. Could the authors clarify any differences between these two models? What advantages does DCSNet offer over FCCN?\\n\\n[1] Saurabh Yadav; Koteswar Rao Jerripothula, FCCNs: Fully Complex-valued Convolutional Networks using Complex-valued Color Model and Loss Function. ICCV 2023.\\n\\nComplex-valued Image Generation (R2C Method): The authors propose an R2C method for generating complex-valued images from real-valued images, presenting it as a novel complex-valued color transformation. However, methods such as quaternion representation, complex logarithmic transformation, and the Hilbert transform are well-established for generating complex-valued images. Could the authors specify the advantages of the R2C method over these alternatives?\", \"dcsnet_architecture\": \"1. Fourier filters replace self-attention in DCSNet to retain information within the complex domain while preserving global context. How do Fourier filters achieve global information retention in this context, and why were they chosen?\\n2. The paper briefly mentions dense tokens for image embedding but doesn\\u2019t fully explain their purpose. Are these tokens meant to capture pixel-level details (dense information) of the image? If so, why not use high-resolution Fourier filters as localized filters to capture this information directly?\\n3. 
If large Fourier filters serve as global filters in the frequency domain while dense tokens capture image details, could a bank of wavelet filters offer a more effective solution? Wavelet filters with multiple resolutions could extract both global (large scale) and local (small scale) image features.\", \"resolution_tokens\": \"The paper mentions multiple resolution tokens \\\\T_{i} for i \\\\in {0,1,2,3}. What was the reasoning for using exactly four resolutions? Would using more or fewer resolutions impact the results?\", \"table_7_clarification\": \"In Table 7, there is a term \\\\calL_{isal}. Is this a typo? Please clarify its meaning if not.\", \"questions\": \"Please see the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion Follow-up (for Reviewer QqAh)\", \"comment\": \"Dear Reviewer QqAh,\\n\\nThank you very much for taking the time to carefully reconsider our paper and for your updated evaluation. We truly appreciate your thoughtful feedback and the engaging discussion we had.\\n\\nYour insights and comments played a crucial role in helping us refine and present our work more effectively. We are particularly grateful for your suggestion to explore wavelet filters; it is an excellent idea that we will certainly pursue in our future work.\\n\\nIt was a pleasure to have you as a reviewer, and we deeply value your contributions to the review process. 
Thank you once again for your time and support.\\n\\n&nbsp;\\n\\nWarm regards,\\n\\nAuthors of #1276\"}", "{\"title\": \"Thanks for the reply\", \"comment\": \"I would like to thank the authors for their effort and the quality of the response.\\nI find the results on invertibility, the additional comparisons, and the computational load metrics very interesting.\\nHowever, I could not find any of these interesting discussions or results in the main paper and this is a little bit disappointing as these results could have really strengthened authors claims and results.\\nI would raise my score by one point since the results are pretty interesting, but not more than this since the paper lacks those discussions and results.\"}", "{\"title\": \"Author response to reviewer QqAh (part 5)\", \"comment\": \"## Weakness 3 [part 2 of 2]: DCSNet Architecture\\n\\n3. We thank the reviewer for suggesting the potential use of wavelet filters for multi-resolution analysis.\\nWhile wavelet filters offer multi-resolution analysis by localizing both spatial and frequency components, incorporating a bank of wavelet filters in the current framework would come with its own set of challenges. Wavelet filters require multiple scales and orientations, which can significantly increase the computational and memory overhead, especially when working with high-dimensional image data. While Wavelet filters inherently capture local features, they need to be predefined and fixed. We would need to define learnable wavelet filters in complex-valued neural networks, which is definitely interesting and can be explored. However, as we have already clarified above, our Fourier filters are taking care of both global and local feature extraction; we need not make the suggested replacement. Moreover, we believe the added advantage offered by the wavelet transform is already being offered through our multi-scale architecture. 
\\n\\nWe appreciate the reviewer for suggesting some great ideas worth exploring in the context of complex-valued networks.\\n \\n&nbsp;\\n&nbsp;\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"## Part 2-4\\n\\n1. Thanks for the ablation study for different real-to-complex conversions. When I asked for motivation, I sought a theoretical or intuitive explanation of \\\"why the iRGB is better than others, such as $R+iG$ & $G +iB$\\\".\\n\\n2. \\\"best binary segmentation results in the complex domain\\\" -- I believe that the authors meant the output segmentation mask is complex. I get that if the original input is complex-valued, it is better to use complex-valued models. However, why do we need the output mask to be $1$ and $i$ instead of $1$ and $0$?\\n\\n3. I understand the authors' explanation that the filters can be learned to be local. However, we need to use very high-frequency components in constructing the spectral filter, which might be overfitting. Local constructions of special filters are discussed in [2]. (This point is just for discussion and improving the model. It is not a weakness that the authors need to respond to.)\\n\\n[2] *Truly Scale-Equivariant Deep Nets with Fourier Layers*\"}", "{\"summary\": \"This study investigates a novel complex-valued deep CNN designed for foreground extraction. It proposes a new method for encoding RGB images as complex values, an end-to-end token-based architecture that maintains the complex representation throughout, and an improved training pipeline. The authors demonstrate superior performance on a variety of complex-valued image benchmarks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"I am unfamiliar with the literature on complex-valued neural networks, and defer to the opinion of more reviewers. As a neophyte to this field, I found the manuscript overall to be very readable, well-written, interesting, and convincing. 
The novel encoding, architecture, and training pipeline seem to work well, and produce a very capable model for handling complex-valued inputs.\", \"weaknesses\": \"This may be my ascribed to my naivety for the field, but I'm unsure how to interpret the benchmark results. While DCSNet certainly seems to outperform other complex-valued neural networks, the margin of victory is often in the range of 1-5 percent. It is difficult to tell whether this represents a fundamental advance, or a marginal improvement. Further, Table 2 seems to indicate that complex-valued neural networks in general frequently fail to outperform their real-valued counterparts. How should these results be interpreted with respect to the broader viability of complex-valued neural networks?\", \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response to additional comments of Reviewer F89z (part 1)\", \"comment\": \"## Part 1\\n\\nThanks for the additional comments. Here is our response. \\n \\n**1)** We would like to clarify that our statement was made specifically in the context of the computer vision task addressed by DCN [1], namely image recognition, where both inputs and outputs were indeed real-valued. Furthermore, as our paper focuses on *\\\"DCSNets with complex visual inputs\\\"* (as stated in the title), we have deliberately restricted all discussions, experiments, and rebuttals to the computer vision domain. The DCN paper itself acknowledged achieving only comparable results to real-valued counterparts in the computer vision task it undertook. 
To address this specific limitation, we developed the iRGB color model and a tailored loss function to effectively handle complex-valued outputs for the binary image segmentation task.\\n\\nRegarding the additional acoustic-related tasks tackled by DCN\\u2014namely automatic music transcription and speech spectrum prediction\\u2014we believe these are outside the scope of our paper, which is explicitly centered on computer vision. However, since you have now pointed out these audio-related tasks, we would like to provide the following clarification:\\n\\nIn audio-related tasks, complex-valued data is naturally available. For the first task (Section 4.2), the outputs were real-valued. For the second task (Section 4.3), you are correct that it involves a complex-to-complex mapping, achieved in DCN by employing an entirely convolutional network. Since audio tasks are beyond our focus, we can only state that such mappings could also be tried using a complex-valued token-based architecture. While such exploration would indeed be interesting, it would distract from the primary focus of this paper. Thank you for pointing this out; we will certainly consider pursuing this idea in our future work. \\n&nbsp;\\n\\n**2)** As mentioned above, since DCN needed real-valued outputs for the image recognition task, the authors concatenated the real and imaginary outputs of the convolutional base to feed the subsequent real-valued fully connected layers. For reference, please see the official implementation of [1] on [github](https://github.com/ChihebTrabelsi/deep_complex_networks/blob/master/scripts/training.py) at line 250. It is clear from the code that real-valued fully connected layers are used. 
Therefore, the complex-valued nature of the network is limited to the convolutional base in the computer vision task undertaken by DCN.\\n\\n&nbsp;\\n\\n**3)** We have already clarified above that complex operations no longer occur in the fully connected layers of DCN (in image recognition task). Therefore, representing a complex number as $a+ib$ or $[a,b]$ in these fully connected layers does make a difference. You yourself mentioned, representing a complex number as $a+ib$ or $[a,b]$ are identical as long as complex operations are maintained. However, DCN performs complex-valued operations only in its convolutional base. In its fully connected layers, it performs real-valued operations on the real-valued vector $[a,b]$. On the other hand, we perform complex operations on $a+ib$ throughout our network. \\n\\nRegarding Eqns 2 & 6 in the DCN paper, these pertain to complex convolution and complex batch normalization, respectively, both of which are employed in the convolutional base, not in the fully connected layers.\\n\\n[1] C. Trabelsi, O. Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, J. F. Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, C. Pal, \\\"Deep Complex Networks,\\\" ICLR 2018.\"}" ] }
9h5paerJxC
Cluster-Segregate-Perturb (CSP): A Model-agnostic Explainability Pipeline for Spatiotemporal Land Surface Forecasting Models
[ "Tushar Verma", "Sudipan Saha" ]
Satellite images are increasingly valuable for modeling regional climate change. Earth surface forecasting is one task that combines satellite imagery and meteorological data to understand how climate evolves over time. However, understanding the complex relationship between meteorological variables and land surface changes remains a challenge. Our paper introduces a pipeline that integrates principles from perturbation-based techniques like LIME and global explainability methods like PDP, addressing the limitations of these techniques in high-dimensional spatiotemporal models. This pipeline facilitates analyses such as marginal sensitivity, correlation, and lag analysis for complex land surface forecasting models. Using ConvLSTM for surface forecasting, we analyzed the influence of variables like temperature, pressure, and precipitation on the NDVI of the surface predictions. Our study on the EarthNet2021 dataset (which primarily consists of samples from the European Alps region, collected during the spring-to-fall seasons) revealed that precipitation had the greatest impact, followed by temperature, while pressure had little to no direct effect on NDVI. Additionally, interesting nonlinear correlations between meteorological variables and NDVI were uncovered.
[ "Climate AI", "Explainability", "ConvLSTM", "spatiotemporal analysis" ]
https://openreview.net/pdf?id=9h5paerJxC
https://openreview.net/forum?id=9h5paerJxC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ycLmWZWB45", "b77gnyiWbs", "HtbSXXeb42", "ELc5FFhGh7", "9yP2730WVv" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731499542771, 1730703642682, 1730653357022, 1729995265932, 1730125955455 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9682/Authors" ], [ "ICLR.cc/2025/Conference/Submission9682/Reviewer_8GJb" ], [ "ICLR.cc/2025/Conference/Submission9682/Reviewer_hCsD" ], [ "ICLR.cc/2025/Conference/Submission9682/Reviewer_tHE2" ], [ "ICLR.cc/2025/Conference/Submission9682/Reviewer_8x9V" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces the Cluster-Segregate-Perturb (CSP) pipeline as an approach for achieving explainability in land surface forecasting. The CSP pipeline includes clustering, segregation, and perturbation. This pipeline facilitates analyses such as marginal sensitivity, correlation, and lag analysis for complex land forecasting models. CSP has been tested on EarthNet2021.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The idea of designing a pipeline for the complex spatiotemporal data and high-dimensional feature spaces is intriguing.\\n2. The description of the challenges in explainability of spatiotemporal land surface forecasting is technically sound.\", \"weaknesses\": \"1. The paper lacks novelty in its method design. The author may overstate the ability of CSP; for example, downsampling the images by averaging down to a single value is rather rough. While it greatly reduces the complexity, it also loses the spatial information.\\n2. The paper\\u2019s presentation needs to be improved; the structure lacks coherence and is unclear.\\n3. EARTHNET2021 is the only dataset being used. 
It is necessary to evaluate the method across various datasets in land surface forecasting fields. Also, consider using synthetic data to prove the functionality of CSP.\\n4. The experiments section mainly focuses on sensitivity and correlation analysis, lacking deeper and more detailed analysis. Additionally, there are no baselines for comparison.\\n5. The assumption of this method is not universally valid. Ignoring the relationships between input features may oversimplify the analysis since weather features are often correlated.\", \"questions\": \"1.\\tCould you elaborate on your method\\u2019s contribution and explain how it outperforms existing methods? Have you compared CSP to other approaches?\\n2.\\tThough you claim CSP can be integrated with any other model architecture in the paper, have you tried any other models except for ConvLSTM? If so, do you have the corresponding results? \\n3.\\tHave you evaluated CSP in scenarios where input features are correlated and do not satisfy the assumption?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work tries to uncover the relationship of meteorological drivers to satellite-derived vegetation greenness through studying the partial sensitivities of a deep neural network to input perturbations. More specifically, they work on the EarthNet2021 dataset, which contains samples (\\\"minicubes\\\") of high resolution NDVI at many places in Europe co-located with meteorological variables. First, they cluster each individual meteorological variable. Then they identify similar minicubes, by grouping according to the cluster ids of five individual meteorological variables, resulting in ~800 meteorologically similar minicubes. 
For each group, they compute partial sensitivities of the ConvLSTM model to additive perturbations of the input meteorology, obtaining marginal response curves of NDVI to weather.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1) The work is original in the sense that it tried working around the challenges the particular dataset (being high-dimensional and not global, and samples auto-correlated) brings, leading to a highly tailored analysis for the EarthNet2021 dataset.\\n2) Reproducibility is given, as the work is based on open data and open source code of a model.\\n3) The paper highlights the challenging nature of the relationship between NDVI and weather, often being non-linear, and highly variable in space.\", \"weaknesses\": \"1) My major concern is that the presented analysis does not achieve its goal, that is, explaining the complex relationship between NDVI and weather at fine resolution. Here, I think a more careful presentation and interpretation of the results is needed: after reading this paper, the reader should have an increased understanding of the drivers of NDVI dynamics.\\n2) The used dataset EarthNet2021 and the Diaconu ConvLSTM models both suffer from a few drawbacks. It would have been much better to use the more recent GreenEarthNet dataset, which is an improved version of EarthNet2021 and comes with many improved baseline models. https://openaccess.thecvf.com/content/CVPR2024/html/Benson_Multi-modal_Learning_for_Geospatial_Vegetation_Forecasting_CVPR_2024_paper.html \\n3) The writing lacks clarity; I had to read the paper multiple times to understand the approach. Perhaps a figure that visually explains the approach could help here, but more importantly, many parts of the paper need to be rewritten to follow a clearer structure.\\n4) I am unsure if the clustering approach is meaningful for Precipitation data, which is generally exponentially distributed and has a few spikes and is otherwise zero. 
\\n5) Introducing around ~800 groups of minicubes with similar weather does not necessarily increase interpretability, as one would still need to look at 800 different plots with 5 panels each (Fig. 2...) to fully understand the relationship between NDVI and weather.\\n6) The perturbations are done univariate, however the considered weather variables are not independent of each other.\\n7) The perturbations for precipitation seem physically not meaningful.\\n8) The related works section is poorly written. It should reflect what has been done and how that related to the presented study. Also, it lacks many related works. You could start by checking the references of the EarthNet2021 paper, which already contains many more especially regarding NDVI.\\n9) In Fig. 1, I am not sure the absolute x-coordinate makes so much sense. The EarthNet2021 minicubes start at random start dates, i.e. they are during different seasons, so patterns in meteorology should rather be translation-equivariant, i.e. considering relative coordinates, I am thinking in the direction of wavelets and other filters here.\", \"questions\": \"1) I wonder if you could use climate analogues instead of the Perturbations? --> What if you switch the weather of one group of minicubes with that of another to compute the sensitivities? In that way you'd be more sure it is actually \\\"physical\\\" climate you are considering.\\n2) How do you match the resolution of the meteorological variables to Sentinel 2? Do you take a central cut-out that actually reflects the Sentinel 2 scale? Or do you interpolate, as in the spatial references do not match anymore?\\n3) Related to this, for the spatial averaging to do clustering of the meterological variables, are those time series representing 2.5km^2 or 100km^2?\\n4) Could you actually analyse if the relationships are different for different land cover types? E.g. 
Grasses and Trees should have very different relationships to weather, for instance under drought grasses can be brown within a few days, but evergreen trees take months before you visibly see changes in the NDVI.\\n5) The ConvLSTM uses memory, which may reflect some of the relationship between temperature and NDVI on a monthly time scale, in other words, the model might incorporate a trend for its meteorological variables or the response thereof in its memory - could that not potentially distort your analysis?\\n6) How do you account for lagged effects? The land surface can be much slower than the atmosphere, responding on very different time scales (slowly changing...)\\n7) Have you seen this recent pre-print? https://arxiv.org/abs/2410.01770 \\n8) More of a suggestion, I personally feel like the contribution in this paper may be much more on the \\\"results\\\" side of things, and less methodological. Hence, it might be much more suitable for a journal and not for ICLR.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a pipeline to quantify the importance of different input variables for a Convolutional LSTM. The authors construct a model that aims at predicting the NDVI in a chip, based on different meteorological input variables. Once this model is trained, the authors propose a pipeline called Clustering, Segregation, Perturbation (CSP), that aims at identifying the importance of the different meteorological variables used in the ConvLSTM.\", \"the_pipeline_starts_by_clustering_in_time\": \"using k-means, the method identifies clusters of observations that can be grouped together over time. The second step is to segregate the clusters by k-means. 
The last step is to perturb the inputs, and observe the impact of the perturbation on the model output.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper addresses a timely and important subject: can we understand how a certain variable affects what is observed in deep learning models, without explicitly modeling the physical phenomena? This topic is of utmost importance for deep learning, as such models are often critiqued as black boxes. This paper thus proposes a method to disentangle this black box, and shed light on the importance of each variable on the observed process.\", \"weaknesses\": \"It is unclear to me what the clustering process adds here. The proposed perturbation resembles a simple sensitivity analysis. In table 1, the results are presented as averages over the clusters, and in figure 3 the different clusters are shown individually, but it is hard to understand whether these results would be different if the perturbation had been performed without the clustering.\\n\\nTowards the end, the paper aims at demonstrating the impact of the different variables, and states an approximately 12 times greater importance of precipitation compared to pressure, and 5 times greater than temperature. This seems significant, but hard to judge given that we are not presented with a baseline. Comparing this method to other well-known methods of sensitivity analysis, like simple perturbation or noise injection, could inform about the significance of the result.\\n\\nOverall, the paper was hard to read. The paper is not very well structured, and the methodology is hard to follow. Multiple specific mistakes (cf. specific comments below) made it hard to make sense of the methodology.\\n\\n# Specific comments\", \"line_28\": \"what interesting nonlinear correlations\", \"line_74\": \"what is the curse of dimensionality?\", \"line_151\": \"assumption: is this reasonable? 
Environmental variables are known to be not completely independent of each other\", \"line_177\": \"so a total of 30 * 32000 images?\", \"line_179\": \"The spatial resolution is 20 meters, the dimension of the image is 128x128 pixels. I'm guessing the value 2.56 is the side of the image, so km, not an actual surface (20 m * 128 px / 1000 m/km = 2.56 km, the surface would be 6.5536 km2).\", \"line_181\": \"unclear what the spatial resolution is. Assuming 102.4 is the surface (unit is km2, but this might be the same error as above), then the spatial resolution is 126.5 meters. But if it's the side, as above, then the spatial resolution is 1.28 km, which is probably more reasonable for a weather variable. Although E-OBS has a 10km resolution, so that would still be different.\", \"line_183\": \"What is the resampling procedure? Probably nearest neighbor for spatial, but what about temporal?\\nHow many pixels cover the same area as the Sentinel-2 chip? Assuming a 126.5 meters resolution, 20 pixels cover the chip, but if we take the value 1.28 km resolution, then 2 pixels per side (4 total) cover the chip, which means all the pixels in the resampled product basically have the same value. Even less if we consider the E-OBS resolution of 10km.\", \"line_185_196\": \"not clear\", \"line_206\": \"I'm not familiar with the EarthNetScore, but if it's scaled between 0 and 1, a value of 0.3257 seems very low. Wouldn't that very low modeling quality influence the interpretation of the results?\", \"line_216\": \"Soft-DTW: is this a pre-processing step of the data, or is this already one of the clustering steps of the CSP? Unclear\", \"line_243_245\": \"as we established above, 1 to 4 pixels basically cover the entirety of the Setinel-2 chip, so the lack of variability stems from the resampling of the dataset, not from the variables themselves.\", \"line_251\": \"given the spatial resolution of the meteorological dataset, what is the goal of first upscaling, then downscaling? 
Is the downscaled data used somewhere else?\\nOtherwise this step seems overkill to me, just take the average over the sentinel-2 extent.\", \"line_256_307\": \"rephrase this section, the definitions are all over the place.\", \"figure_1\": \"For each line, use same limits for y\", \"line_326_334\": \"introduce Si after equation 9, this is confusing.\", \"line_412\": \"\\\"irrespective of the weather segment the sensitivity of the variables remained almost the same\\\". Does that mean that the clustering wasn't needed?\", \"line_414\": \"\\\"weighted by the cardinality of the sample sets\\\". Not sure what that means.\\n\\n# Minor comments\", \"figure_2\": \"use same limits for NDVI\\n\\n# Grammar comments\", \"citations_miss_match_between_automatic_citations_and_manual\": \"117\\n131/132\\n145\\n... and many more\", \"author_name_twice\": \"107/108\\n113/114\\n119\\n128\\n134\\n... and many more\", \"line_24\": \"makes it sound like EarthNet2021 is your previous study\", \"line_100\": \"missing point between citation and reference\", \"line_101\": \"\\\"insights. In...\\\" new sentence here\", \"line_102\": \"remove authors, \\\"propose\\\"\", \"line_128\": \"replace \\\"concerning\\\" with \\\"on\\\"\", \"line_143\": \"\\\"during dry years, like 1989, following another dry year.\\\" unclear what that means\", \"questions\": \"c.f. weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses how to develop an interpretability pipeline for spatiotemporal prediction problems in earth science. The authors point out that existing interpretability methods, such as PDP or LIME, may have two limitations when dealing with spatiotemporal prediction problems:\\n\\n1. 
These methods lack consideration of spatiotemporal variations in independent variables, and their randomly generated samples for explanatory analysis may lack temporal continuity.\\n\\n2. These methods ignore that the relationships between independent and dependent variables may vary with seasons, lacking the ability to explain temporal dynamic mappings.\\n\\nTo address these issues, the authors propose an interpretability pipeline called CLUSTER-SEGREGATE-PERTURB (CSP). This method first uses Soft DTW + k-means to cluster the temporal variation curves of meteorological data into several patterns. Based on the patterns of meteorological attributes obtained from clustering, the authors divide the dataset into several subsets according to meteorological pattern similarity. Finally, the authors create perturbation sets for various variables based on climate patterns and data averages for subsequent interpretability analysis.\\n\\nIn the analysis phase, the authors selected three influencing variables: temperature (min, max, avg), pressure, and precipitation. They trained a ConvLSTM model on the EarthNet2021 dataset and used marginal sensitivity and marginal correlation analysis to interpret ConvLSTM on the segregation-perturbation sets generated by CSP. The authors analyzed the results of marginal sensitivity and marginal correlation. The results show that NDVI is most sensitive to precipitation, followed by temperature and pressure.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper demonstrates good originality. The authors' analysis of existing PDP and LIME methods is reasonable, and they accurately identify the scientific challenges in these two approaches. 
To address these issues, the authors propose a sound scientific hypothesis: \\\"explaining grouped spatiotemporally homogeneous data.\\\" Based on this hypothesis, they construct a three-step pipeline using existing methods to aggregate spatiotemporally homogeneous data and add perturbations that preserve temporal continuity. Using this approach, the authors discuss the influence of three main meteorological factors on NDVI and draw some interesting conclusions.\", \"weaknesses\": \"However, the paper has significant issues in its research foundation and experimental validation. Additionally, there are shortcomings in the literature review and paper organization.\", \"research_foundation\": \"The paper primarily builds upon improving PDP and LIME methods. While these methods indeed show limitations in explaining time series predictions, recent studies have already extended these interpretability methods to time series prediction:\\n\\n[1] Shi, H., Yang, N., Yang, X., & Tang, H. (2023). Clarifying relationship between pm2. 5 concentrations and spatiotemporal predictors using multi-way partial dependence plots. Remote Sensing, 15(2), 358.\\n\\n[2] Liu, J., & Zhang, X. (2022). ReX: A Framework for Incorporating Temporal Information in Model-Agnostic Local Explanation Techniques. arXiv preprint arXiv:2209.03798.\\n\\nThe authors' failure to consider and analyze these methods undermines the reliability of their research foundation.\", \"method_validation\": \"Given that this is not the only post-hoc interpretation method for time series data, it is essential to validate whether this method is more suitable for surface process spatiotemporal prediction than existing methods. Specifically, the paper lacks discussion of:\\n\\n1. Quantitative evaluation of interpretability method accuracy (typically achieved using surface process simulation synthetic datasets)\\n2. Comparative analysis with other time series interpretation methods\\n3. 
Response to research hypotheses regarding existing methods' reliability with temporal and high-dimensional data\", \"additional_issues\": \"1. The related work section merely lists existing methods without analyzing their limitations or providing technical motivation\\n\\n2. The paper's structure is overly segmented, mixing original methodology (5.2), previous research methods (5.1), and experimental results (5.2)\\n\\n3. While claiming existing methods struggle with high-dimensional spatiotemporal data (lines 21 and 52), the experiments only use data with 3 attributes and 5 dimensions\\n\\n4. There's an inconsistency between the criticism of PDP's inability to isolate variable effects (line 53) and the paper's assumption of variable independence in Section 4\", \"questions\": \"1. What is the current state of spatiotemporal interpretability methods, and what challenges do they face?\\n\\n2. What is the motivation behind the proposed CSP technique, and what implications does it have for future research?\\n\\n3. What are the technical advantages of the proposed method compared to existing spatiotemporal interpretability methods, and is it more suitable for surface process spatiotemporal prediction problems?\\n\\n4. How reasonable is the assumption of independent distribution among meteorological variables? In reality, meteorological variables typically exhibit significant correlations.\\n\\n5. Can the authors provide validation results on synthetic datasets and quantitatively compare the accuracy of their method with other time series interpretability methods?\\n\\n6. NDVI may experience saturation effects under strong solar radiation. Does the interpretability model account for the instability in independent-dependent variable relationships caused by this saturation phenomenon?\\n\\n7. Can the authors demonstrate their method's performance on truly high-dimensional data to validate its advantages over existing methods in handling high-dimensional data?\\n\\n8. 
Is there an issue with the y-axis labeling in Figure 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9h45qxXEx0
Debiasing Federated Learning with Correlated Client Participation
[ "Zhenyu Sun", "ziyang zhang", "Zheng Xu", "Gauri Joshi", "Pranay Sharma", "Ermin Wei" ]
In cross-device federated learning (FL) with millions of mobile clients, only a small subset of clients participate in training in every communication round, and Federated Averaging (FedAvg) is the most popular algorithm in practice. Existing analyses of FedAvg usually assume the participating clients are independently sampled in each round from a uniform distribution, which does not reflect real-world scenarios. This paper introduces a theoretical framework that models client participation in FL as a Markov chain to study optimization convergence when clients have non-uniform and correlated participation across rounds. We apply this framework to analyze a more practical pattern: every client must wait a minimum number of $R$ rounds (minimum separation) before re-participating. We theoretically prove and empirically observe that increasing minimum separation reduces the bias induced by intrinsic non-uniformity of client availability in cross-device FL systems. Furthermore, we develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution under arbitrary minimum separation and unknown client availability distribution.
[ "federated learning", "Markov chain", "time-correlated participation" ]
Accept (Poster)
https://openreview.net/pdf?id=9h45qxXEx0
https://openreview.net/forum?id=9h45qxXEx0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xyhyUQMapX", "usIMhdr2wZ", "rvMNCl5sjN", "rb4WNdTvaP", "mHPPA0rrIl", "jRtjrp19gM", "ZyqPNNwBuQ", "W7XUAq2mx4", "TRlzgw7mHp", "Pa0EW8RRfP", "FGCkYAipvT", "ClEm4Kh0cl", "ALHd2sxJW7", "7cr7UphadO", "4HQXrgqZNo", "1b9uhb5aoc" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1729955894777, 1732590028296, 1731983210641, 1731353191870, 1732569485470, 1731982326354, 1731982644972, 1730110687693, 1737523834218, 1732760656467, 1730699796548, 1731983534591, 1733183105367, 1732761004127, 1734295325115, 1731982964680 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_ehNB" ], [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_ehNB" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ], [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_GYuM" ], [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_Cr1N" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ], [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_Cr1N" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ], [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_Yi23" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ], [ "ICLR.cc/2025/Conference/Submission7365/Reviewer_Yi23" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ], [ "ICLR.cc/2025/Conference/Submission7365/Area_Chair_Aovg" ], [ "ICLR.cc/2025/Conference/Submission7365/Authors" ] ], "structured_content_str": [ "{\"summary\": \"Existing federated learning algorithms assume clients are sampled uniformly at random at each iteration which does not reflect the real scenario. 
In this paper, the authors assume that each client requires a minimum separation of R rounds between sampling. Then they model client selection as a Markov chain to theoretically analyze the setting and propose a debiasing algorithm with provable guarantees.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Motivation: the minimum separation in federated learning is reasonable to analyze.\", \"Literature review is thorough.\"], \"weaknesses\": [\"The FL setting is not rigorous. In line 121, the authors use $p_i$ to capture the willingness to be sampled at each iteration, which means a client may not join the training for an arbitrarily long number of iterations (with small probability). However, in line 128~129, it is claimed that \\\"the cyclic participation corresponds to the case, R = N / B \\u2212 1\\\", which means in the last round of the cycle, all of the remaining B clients will definitely be sampled. So the setting is not consistent. Besides, forcing clients to join cross-device federated learning is not practical.\", \"As has been mentioned in the \\\"Limitations\\\" section, the theoretical results do not enjoy linear scalability.\", \"I am not very convinced that a Markov-chain model is necessary to analyze the problem.
Algorithm 1 essentially only tries to estimate p_i and then scales gradients by 1/p_i in order to give uniform weight to all clients.\", \"The presentation of the paper can be improved.\"], \"questions\": \"Is a Markov-chain model really necessary to analyze this problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response\", \"comment\": \"I acknowledge that I have read the author response and decided to give higher scores.\"}", "{\"title\": \"Response to Reviewer Cr1N\", \"comment\": \"**Response to Weakness 1:**\\nWe have added the results of test accuracies among FedAvg, Debiasing FedAvg (ours) and FedVARP under CIFAR-10 in Appendix H in the updated manuscript. The experimental results show that increasing $R$ reduces bias and increases test accuracy for both FedAvg and FedVARP; and our Debiasing FedAvg outperforms FedAvg and FedVARP in both training and test settings (see Table 1 in Appendix H). We also highlight that the main problem studied in this paper is the bias effect induced by correlated client participation (justified by Theorems 1 and 2) and how to reduce it (justified by Theorem 3). Thus, we mainly focus on the training stage, i.e., solving problem (1) in the federated setting. As shown by our theories and experiments, the training error of the server\\u2019s model output by conventional FL algorithms does suffer from unavoidable bias. We can effectively reduce the bias, i.e., getting a smaller training error, by increasing $R$, and in particular there is no bias when $R=M-1$. \\n\\n**Response to Weakness 2:**\\nWe argue that a uniform $R$ is actually considered in the real-world setting. In particular, as discussed in (Xu et al., 2023) in References, a uniform minimum separation $R$ is adopted in Google Gboard to achieve privacy guarantees, with larger $R$ leading to stronger privacy (Kairouz et al., 2021).
We still note that compared to literature where clients are sampled independently or from a cyclic pattern, our problem is much more general and captures them as special cases. Moreover, in the last paragraph of Section 5 (see Line 435), we discussed the possibility of extending the uniform $R$ to client-specific $R_i$. In fact, for client-specific $R_i$, our theories (Theorems 1 and 3) still hold.\\n\\n**Response to Question 1:**\\nWe omitted the dependence on heterogeneity in the main text. Actually, our bounds are proportional to the heterogeneity level $G^2$ (defined in Assumption 1). In Appendices G and F, one can see explicitly the effect of heterogeneity on the convergence (see Lines 1384 and 1547 for details, respectively). In particular, as heterogeneity grows, the gap between what the vanilla FedAvg converges to and the original optimal solution becomes larger. With larger minimum separation, this gap shrinks. Under our debiasing algorithm, however, even with large heterogeneity, we can recover the original optimal solution.\"}", "{\"summary\": \"This paper studied the federated learning problem with partial client participation. In particular, it focused on the case where there is a minimum separation between clients in terms of rounds. The authors formulated the client participation process as an R-th order Markov chain and characterized the marginal stationary distribution of clients to be sampled. The authors proposed a debiasing FedAvg based on the estimation of this marginal stationary distribution and provided the convergence analysis of the proposed algorithm as well as the original FedAvg algorithm. There are several very interesting observations in this paper. The performance of the proposed algorithm is also verified using simulations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The authors formulate the client participation process as an R-th order Markov chain.\\n\\n2. 
The authors proposed the debiasing FedAvg algorithm based on the estimation of the marginal stationary distribution of clients to be sampled. \\n\\n3. The authors provided the convergence analysis of both FedAvg (to indicate the problem) and the proposed algorithm which can converge.\", \"weaknesses\": \"1. The paper is not well written and there are some notations not explained, e.g., $\\\\tau_{mix}$ (is it the mixing time?) and $p_e$, although the paper presented quite a few interesting ideas.\\n\\n2. The authors discussed quite a few limitations of the proposed approach and its proofs. These seem to be the weaknesses of the paper.\", \"questions\": \"1. The algorithm is based on GD instead of SGD. If SGD were used, would there be any challenges?\\n\\n2. I suggest the authors put the proof of Proposition 1 in the appendix.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A.\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
This paper serves as a first step towards analysis of federated learning under correlated client participation. We would like to look for more advanced mathematical tools to solve them in the future.\\n\\n**Response to Question 1:**\\nIn our analysis, we just applied local gradient descent in the local updates for simplicity. And our analysis can be easily extended to local SGD. In such case, we can similarly bound $\\\\Vert x_{t+1} - x_t \\\\Vert^2$ as in Lemma 7 by introducing bounded variance of stochastic gradient for each client (which is a standard assumption in stochastic optimization analysis). Then, all the following analysis goes through as well, while the final convergence bounds additionally depend on the variance of the stochastic gradient. Moreover, we highlight that in the experiment, we did use local SGD during local updates, and the results verify our theoretical demonstration.\\n\\n**Response to Question 2:**\\nWe thank the reviewer for the suggestion. We have added the proof of Proposition 1 in our updated Appendix C for clarity.\"}", "{\"summary\": \"This paper finds that traditional FL algorithms like FedAvg assume clients participate independently and uniformly, which is unrealistic in practical applications. It addresses the bias in FL due to non-uniform and time-correlated client participation. The authors introduce a Markov chain model to simulate the sequential and dependent nature of client participation, where each client waits a minimum number of rounds before participating again. A debiasing algorithm for FedAvg is proposed to improve convergence and ensure unbiased model updates. Empirical results also demonstrate that Debiasing FedAvg effectively reduces bias during training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The authors find common FL assumption that clients participate independently and uniformly is unrealistic.\\n2. 
The paper frames client participation as a Markov process, capturing real-world constraints and interdependencies among clients.\\n3. The paper proposes Debiasing FedAvg converging to an unbiased solution with theoretical analysis.\\n4. Experiments on both synthetic and real datasets validate the algorithm\\u2019s effectiveness.\", \"weaknesses\": \"1. The paper claims that a larger minimum separation $R$ reduces bias. However, it lacks a discussion of how $R$ affects the server's model performance on the test set empirically and how to choose the best $R$.\\n2. The paper assumes a uniform minimum separation for all clients, which may not reflect real-world situations.\", \"questions\": \"1. If there is extreme heterogeneity among clients, how might a larger minimum separation $R$ impact the model's performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Further request on the discussion\", \"comment\": \"Dear Reviewer ehNB,\\n\\nWe appreciate your response and your time reading our rebuttal. We also noticed that you updated your score from 3 to 5 after our rebuttal, which we are pretty grateful for. However, we were wondering if there are any specific remaining concerns or areas for improvement that prevented you from increasing your score further (to a positive one). \\n\\nIf you could share any additional thoughts on how we might address your further concerns and make the paper stronger, it would be greatly appreciated. We look forward to hearing your thoughts.\\n\\nBest regards,\\n\\nAuthors of paper 7365\"}", "{\"summary\": \"This paper proposes a theoretical framework that models client participation in FL as a Markov chain, enabling the study of optimization convergence when clients exhibit non-uniform and correlated participation across rounds. 
The authors find that FL algorithms converge with asymptotic bias, which can be mitigated by increasing the minimum separation $R$. Additionally, they propose a debiasing algorithm for FedAvg, providing both theoretical and empirical performance guarantees for this approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The authors introduce a theoretical framework that models client participation in FL as a Markov chain, allowing the study of optimization convergence when each client must wait at least $R$ rounds before participating again and has its own availability probability.\\n2. Through both theoretical and empirical results, the authors find that due to non-uniformity and time correlation effects, FL algorithms converge with asymptotic bias, which can be reduced by increasing the minimum separation $R$. \\n3. To achieve unbiased solutions, the authors propose a debiasing algorithm for FedAvg, with performance guarantees provided through both theoretical analysis and empirical evaluation.\", \"weaknesses\": \"1. The authors restrict the choices of $R$ to range from $0$ to $M-1$. However, the theoretical analysis only considers cases where $R$ ranges from $0$ to $M-2$. It would be beneficial to include the results for $R=M-1$.\\n2. In the experiments, the authors simplify the algorithm by partitioning the $N$ clients into $M$ groups, with exactly one group selected in each round. This setup does not align with the more complex proposed algorithm and is insufficient for a comprehensive evaluation of its performance.\\n3. The experiments are only conducted on a synthetic dataset and the MNIST dataset, which are relatively simple. More complex datasets (e.g., CIFAR-100, Shakespeare) and tasks (e.g., NLP) are recommended for a more comprehensive evaluation of the proposed algorithm's performance.\", \"minor\": \"In line 109, delete \\u201csome\\u201d.\", \"questions\": \"1. 
Theorem 2 holds only under specific requirements. What about more general settings that relax these requirements?\\n2. The authors claim that each client can maintain its own specific $R_i$. In this more general setting, Theorems 1 and 3 hold without modification, while Theorem 2 becomes more challenging. What modifications would be needed to obtain Theorem 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ehNB\", \"comment\": \"**Response to Weakness 1:**\\nThere seems to be a misunderstanding of our problem setting and therefore its applicability and associated contribution. We clarify our setup here. Our participation pattern is as follows: \\n1. Each client $i$ is sampled with probability proportional to $p_i$, **only after** it has waited for at least $R$ rounds;\\n\\n2. Otherwise, client $i$ is not sampled.\\n\\n3. At every round, $B$ clients are sampled to participate in the system.\\n\\nThus, as with cyclic participation in the literature (see Cho et al., 2023 in References), sampling $B$ clients every round forces them to participate exactly once within every $N/B$ rounds. Moreover, we note that our setting is much more general compared to the literature, although there is still a gap to the real-world scenario. In particular, setting $R=0$, we reduce to uniform and independent sampling of clients; setting $R=1$, we reduce to the conventional first-order Markov chain case; and setting $R=M-1$, we recover the cyclic participation case. More importantly, our introduced Markov-chain framework provides a systematic way to quantify the bias under non-uniform and time-dependent participation, and then allows us to propose solutions to resolve the bias. 
This is the main contribution and novelty of this paper, which is not captured by the literature at all.\\n\\n**Response to Weakness 2:**\\nWe acknowledge that our convergence bounds do not enjoy speedup as in the literature. This is actually due to the technical difficulty in dealing with time-correlated client sampling. We note that all literature enjoying speedup in convergence relies on the assumption of independent client sampling, which makes the analysis much more tractable compared to ours. The only relevant setting that theoretically studies time-correlated participation is (Cho et al., 2023), where it forces clients to participate exactly once within every $M=N/B$ rounds (i.e., in a cyclic way) which is quite restrictive compared to ours. Even in the cyclic participation case, the convergence bounds shown in (Cho et al., 2023) grow with respect to $M$, while ours get rid of the dependence on $M$. How to get a speedup in the convergence analysis is interesting and important, and we hope to address it in the future.\\n\\n**Response to Weakness 3:**\\nWe believe that the Markov-chain modeling is an effective way to solve this problem. Actually, we have to point out that Algorithm 1 does not try to estimate $p_i$, but rather it tries to estimate $\\\\pi_i$, which is the stationary distribution induced by the Markov chain. The values of $p_i$ define the transition probability matrix (see eqs. (3)-(5) for details) of the Markov chain, which has a stationary distribution $\\\\pi$, i.e., its left eigenvector corresponding to eigenvalue 1. But the values of $p_i$ do not directly translate to values of $\\\\pi_i$. There is a fundamental difference between $p_i$ and $\\\\pi_i$. Due to time correlation, the \\u201ceffective\\u201d probability of client $i$ to be sampled at round $t$ is no longer $p_i$. Instead, the \\u201ceffective\\u201d sampling distribution is now a time-varying distribution defined in eq. (5). 
If $R=0$, then it reduces to $p_i$ because now clients are sampled independently. But this is generally not true when $R > 0$! We also recommend the reviewer to refer to Lines 1340-1343 in the proof of Proposition 3 in Appendix F, where $p_1 = p_2 = 0.25, p_3=0.5$ but $\\\\pi_1=\\\\pi_2=0.3, \\\\pi_3=0.4$, indicating that $\\\\pi_i$ and $p_i$ are not the same. \\n\\nTherefore, although Algorithm 1 counts the participation times of clients to estimate $\\\\pi$, which seems simple, this estimator is not independent but time-correlated. We recommend the reviewer to refer to Theorem 5 and Corollary 2 in Appendix G to see that the convergence of the estimator involves much more careful analysis and technical difficulties.\\n\\n**Response to Weakness 4:**\\nWe kindly request the reviewer to elaborate their feedback on the presentation, and would be happy to discuss and adopt concrete suggestions. Meanwhile, we updated the manuscript to include more explicit definitions of some of the notations which are marked in red.\\n\\n**Response to Question:**\\nThere might be some other techniques to analyze this problem, but the Markov-chain framework offers an effective way to systematically solve it. Leveraging such a Markov-chain framework, we are able to characterize the bias suffered by existing FL algorithms and then propose methods to mitigate the bias. Moreover, note that there is no existing analysis for such a practical problem in the literature, and we are the first to provide theoretical characterization.
If there are any remaining issues or aspects of the work that you believe need further clarification or improvement, we would be happy to engage in a discussion and provide additional details.\\n\\nWe are more than happy to incorporate any additional feedback to enhance the quality of the paper. Please let us know if there is anything further we can address or clarify to strengthen your confidence in our work. We look forward to your response.\\n\\nBest regards,\\n\\nAuthors of paper 7365\"}", "{\"metareview\": \"This paper introduces a theoretical framework modeling client participation in federated learning (FL) as a Markov chain, addressing non-uniform and correlated participation across rounds. The authors analyze scenarios where clients must wait a minimum number of rounds before re-participating, demonstrating that increasing this minimum separation reduces bias caused by non-uniform client availability. They also propose a debiasing algorithm for Federated Averaging (FedAvg) that converges to the unbiased optimal solution under arbitrary minimum separation and unknown client availability distributions.\\n\\nReviewers acknowledged the novelty of modeling client participation as a Markov chain and appreciated the practical relevance of addressing correlated client participation in FL systems. The proposed debiasing algorithm was noted as a significant contribution to improving convergence in realistic FL scenarios. Reviewers found the mathematical rigor appropriate and the assumptions reasonable for the FL context. The empirical results supporting the theoretical claims were well-received. However, some reviewers suggested that additional experiments on diverse datasets and with varying system parameters could strengthen the validation of the proposed methods.\\n\\nIn response to the feedback, the authors provided clarifications on the Markov chain modeling approach and supplied additional experimental results addressing concerns about empirical validation. 
They also revised the manuscript to improve clarity, especially in sections detailing the theoretical framework and algorithmic procedures.\\n\\n---\\nConsidering the reviewers' assessments and the authors' responses, the decision is to accept the paper. The work presents a novel and practically significant approach to addressing correlated client participation in federated learning through Markov chain modeling and a debiasing algorithm. The theoretical contributions are well-substantiated, and the empirical validations, though with room for further expansion, support the claims made. The authors' revisions have adequately addressed the primary concerns regarding clarity and empirical evidence.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers recommended acceptance or increased their scores during the rebuttal period.\"}", "{\"title\": \"Response to Reviewer Yi23\", \"comment\": \"**Response to Weakness 1:**\\nWe acknowledge that Theorems 1 and 3 hold for $R \\\\le M-2$. The reason is that when $R \\\\le M-2$ the underlying Markov chain (5) is aperiodic as shown in Lemma 1, thus guaranteeing that the unique stationary distribution $\\\\pi_R$ exists, based on which our analysis can go through. For $R=M-1$, since the Markov chain is periodic, its stationary distribution is not well-defined (nor is the mixing time $\\\\tau_{mix}$). Rather, we define $\\\\pi_{M-1}$ to be the induced Perron vector (see Line 244). Actually, the analysis of $R=M-1$ is much easier compared to $R \\\\le M-2$, since now the clients participate cyclically and hence it is equivalent to clients being sampled uniformly. In particular, the term $e_2$ in the proof of Lemma 10 is zero for any $t$ and $\\\\tau$. Therefore, as we stated in Lines 423-427, much better convergence bounds can be obtained due to such a nice cyclic pattern (see Cho et al., 2023 for more details in References in the paper). 
The reason we did not include $R=M-1$ in our Theorems 1 and 3 is that the mixing time $\\\\tau_{mix}$ is not well-defined in this case, since the Markov chain is periodic. \\n\\n**Response to Weakness 2:**\\nIn Algorithm 1 and all our analysis, there is no restriction to partition clients into separate \\u201cgroups\\u201d. We note that our analysis holds true even for $B=1$, which would effectively remove the partition. Initially, any subset of clients with size $B$ can be sampled. Then due to the requirement of minimum separation, the group of clients becomes available at the same time. We therefore group them in experiments only for implementation simplicity. Our algorithm can still effectively reduce bias without partitioning.\\n\\n**Response to Weakness 3:**\\nWe thank the reviewer for the suggestion. We have added the experiment for CIFAR-10 in the appendix. Moreover, we note that the main contribution of the paper is to theoretically analyze FL under correlated client participation. Thus we acknowledge that the experimental part of the paper might not be comprehensive enough, though we note that our experimental results effectively verify our theoretical claims.\\n\\n**Response to Question 1:**\\nIn Theorem 2, the only assumption we made is that the availability probabilities $p_i$\\u2019s are not \\u201ctoo far away\\u201d (characterized by $\\\\delta$) from the uniform distribution. The reason for this assumption is due to the perturbation technique we used in the proof, which only holds for a small neighborhood around the uniform distribution. We believe that some new proof technique is needed in order to remove the assumption, and we hope to address it in the future.\\n\\n**Response to Question 2:**\\nThe nice thing about a uniform $R$ across clients is that the indices of clients within every $R+1$ rounds are non-repeated, which enables us to analytically get the expression of the column sum $b_R$ of $P_R$ in Appendix D.1. 
Then, our proof idea relies on studying the monotonicity of $b_R$. However, if each client maintains its own $R_i$, taking $R = \\\\max_i R_i$ no longer guarantees non-repeated indices within $R+1$ rounds, which introduces considerable technical difficulty in analyzing the monotonicity of $b_R$, in the sense that the analytical expression of $b_R$ is stochastic and unknown. Therefore, our proof fails in this case. We hope to solve this problem in our future work.\"}" ] }
9fvnZRCGra
Beyond Isolated Words: Diffusion Brush for Handwritten Text-Line Generation
[ "Gang Dai", "Yifan Zhang", "Yutao Qin", "Qiangya Guo", "Shuangping Huang", "Shuicheng YAN" ]
Existing handwritten text generation methods typically focus on isolated words. However, realistic handwritten texts require attention not only to individual words but also to the relationships between them, such as vertical alignment and horizontal spacing. Therefore, generating entire text lines is a more promising task. However, this task poses significant challenges, such as accurately capturing complex style patterns including both intra-word and inter-word patterns, and maintaining content structure across numerous characters. To address these challenges, inspired by human writing priors, we focus on both the vertical style (\emph{e.g.}, word alignment) and horizontal style (\emph{e.g.}, word spacing and letter connections) of individual writing samples. Additionally, we decompose text-line content preservation across numerous characters into global context supervision between characters and local supervision of individual character structures. In light of this, we propose DiffBrush, a new diffusion model for text-line generation. DiffBrush employs two complementary proxy objectives to handle vertical and horizontal writing styles, and introduces two-level discriminators to provide content supervision at both the text-line and word levels. Extensive experiments show that DiffBrush excels in generating high-quality text-lines, particularly in style reproduction and content preservation. Our source code will be made publicly available.
[ "Handwritten Text-line Generation;Image Generation;Diffusion Model" ]
https://openreview.net/pdf?id=9fvnZRCGra
https://openreview.net/forum?id=9fvnZRCGra
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVmw8Hlb4e", "ueZpJJMInX", "fOeOJiCGqI", "YNri8l8R3L", "IYQpzidt5F", "Ei84U3tiij", "BWrnXzGa8l" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730088331314, 1731474993423, 1730670196456, 1730548530505, 1730598959391, 1730571254565, 1730465657729 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2361/Reviewer_cLAK" ], [ "ICLR.cc/2025/Conference/Submission2361/Authors" ], [ "ICLR.cc/2025/Conference/Submission2361/Reviewer_DkjX" ], [ "ICLR.cc/2025/Conference/Submission2361/Reviewer_LQQF" ], [ "ICLR.cc/2025/Conference/Submission2361/Reviewer_Rzj9" ], [ "ICLR.cc/2025/Conference/Submission2361/Reviewer_Fpfu" ], [ "ICLR.cc/2025/Conference/Submission2361/Reviewer_TQB3" ] ], "structured_content_str": [ "{\"summary\": \"This work proposes a method to generate handwritten text lines conditioned on given writing style and textual content. The key challenge of this task is to capture both the intra-word and inter-word style of the style sample, while maintaining the content correctness. According to the authors, most previous works focus on generating isolated words and therefore overlook the style among different words.\\n\\nTo capture the inter-word style, this work proposes a CNN-Transformer style encoder with two heads: a vertical head and a horizontal head. The purpose of vertical alignment is to place words in a text line on the same horizontal line, while horizontal alignment is to place words with proper spacing. To maintain content correctness, two-level discriminators are proposed to ensure the character order of the text line and word-level structure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The method is well-motivated and clearly introduced. As far as I can see, the proposed style learning technique is interesting and different from previous methods. 
From experimental results, the proposed method dramatically outperforms previous methods under different evaluation metrics on English and German text-line datasets.\", \"weaknesses\": \"1. The reported results in Table 1 are inconsistent with visual intuition. From the visualizations in Fig. 5, 12, 13, 14, text line images generated by TS-GAN are obviously better than CSA-GAN and One-DM, but TS-GAN obtains the worst quantitative results.\\n2. Since Unicode can encode all characters, and this paper also claims to propose a general text line generation method, it would be more convincing to conduct experiments using Chinese or Japanese. English and German only contain fewer than 100 character categories, and their structures are relatively simple; whereas Chinese and Japanese consist of thousands of characters, and their structures are complex.\\n3. It is better to give an experimental comparison between a CTC-based content discriminator and the proposed discriminator to support the argument in the Introduction and Section 3.3.\", \"questions\": \"1. Why is the background color inconsistent in the style reference image and the generated images?\\n2. About the line-level content discriminator, how to make sure that I_line and x_real are aligned? How to choose n? Why does Equ.(5) help to maintain the correct character order in the generated text line? It is better to give more detailed explanations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents DiffBrush, a novel diffusion-based approach to handwriting line generation that incorporates a dual-head style module and two-stage content discriminators to address the challenges of realistic handwriting synthesis. 
This model captures vertical and horizontal style features using a proxy loss method that encourages the style encoder to learn different writing patterns, ensuring accurate vertical alignment and horizontal spacing. The use of two-level content discriminators - operating at both the line and word level - enhances content monitoring by verifying global contextual coherence and local content authenticity. Extensive experimentation on 2 popular handwriting datasets using 7 different metrics demonstrates that DiffBrush outperforms existing state-of-the-art methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The model extends beyond the generation of isolated words to the generation of full text lines, which is crucial for real-world applications such as synthetic data generation.\", \"Extensive testing is performed on two different datasets in English and German.\", \"The evaluation is robust, using three different sets of metrics that assess feature-based, OCR and visual quality aspects.\", \"Competing models are retrained to ensure a fair and direct comparison.\", \"The proposed method demonstrates significant performance improvements over existing alternatives.\", \"The study includes an ablation analysis, with both illustrative examples and quantitative assessments.\"], \"weaknesses\": [\"Little information is provided about the OCR system used, although this is a key evaluation metric. This raises the question of whether the OCR could be specifically designed to favour the proposed generation method.\", \"The data sets used for the experiments are relatively simple and somewhat artificial, consisting of non-spontaneous writing with isolated words on a white background. 
A demonstration of the model's generalisability to more realistic, complex use cases would have strengthened the evaluation.\"], \"questions\": [\"L315: \\\"The proposed two-level discriminators consist of a text-level discriminator and a word-level discriminator\\\" \\u2013 Should \\\"text-level\\\" be corrected to \\\"line-level\\\"?\", \"L321: Can you clarify what is meant by \\\"3D\\\" in \\\"3D-CNN\\\"?\", \"L305: \\\"The advantage of our discriminators is that they improve content accuracy without disrupting style learning, while CTC-based methods tend to hinder it\\\" \\u2013 Could you explain why your use of CTC in the Word-level Content Discriminator does not have this drawback?\", \"L334: Why is the Word-level Content Discriminator necessary?\", \"L372: Could you provide more details about the OCR system used?\", \"L377: \\\"Resent18\\\" \\u2013 should this be \\\"ResNet18\\\"?\", \"L382: \\\"The model is trained for 800 epochs on eight RTX 4090 GPUs using the AdamW optimizer with a learning rate of 10\\u22124\\\" \\u2013 Can you provide an estimate of the training time?\", \"L384: \\\"For the sampling ratio \\u03c1, we perform a grid search over {0.25, 0.5, 0.75, 1.00} and ultimately set \\u03c1 to 0.25\\\" \\u2013 Since 0.25 is the lowest value tested in the grid search, how sensitive were the results to this parameter? Should you have considered values lower than 0.25?\", \"Figure 5: The word \\\"destination\\\" in your generation appears to be missing a letter and is not highlighted with a red circle. Could you comment on this?\", \"Notably, in Figure 5, your system seems to replicate an artifact seen in the IAM database generation, where isolated words are pasted on a white background. The background within the words is not fully white, whereas the outside background is white. 
Could you elaborate on why this occurs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents \\u201cDiffBrush\\u201d, a diffusion model devised for generating realistic handwritten text lines; it claims to address the limitations of traditional methods that focus primarily on isolated words. The authors proposed a dual-head style module that captures both vertical and horizontal style elements and a two-level content discriminator framework to ensure both style fidelity and content readability. The paper introduces a unique \\u201cdual-head style module\\u201d for capturing vertical and horizontal writing styles independently. This module addresses alignment and spacing, crucial for generating realistic text lines that mimic human writing patterns, which are often ignored by other models focused on isolated words. Experiments were conducted on two publicly available datasets (IAM and CVL).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors evaluate DiffBrush using multiple quantitative metrics, such as \\\"Handwriting Distance (HWD)\\\" for style fidelity, \\\"Character Error Rate (CER)\\\" and \\\"Word Error Rate (WER)\\\" for content accuracy, and image quality metrics (FID, IS).\", \"weaknesses\": \"1. Not much discussion is available on the interpretability of the learned style space. It is not clear how distinct the learned vertical and horizontal style representations are, or how they vary across writers. Visualizations of the learned style features could enhance understanding and trust in the model\\u2019s style-capturing ability.\\n\\n2. A more detailed explanation of how the two style representations are obtained, clarifying how they differ from the method proposed in ONE-DM, is needed. 
\\n\\n3. DiffBrush conducted experiments on English datasets (IAM and CVL); its performance on other languages or scripts, such as Arabic, Chinese, or Cyrillic, remains unexplored.\", \"questions\": \"1. It is not clear from the text how the proposed method is different from the One-DM method published in ECCV 2024 in the context of the blender module (style content fusion module of One-DM).\\n\\n2. Can the model generate handwritten styles that were not present in the training dataset, given only a few sample images of a writer's handwriting? Clarification on this would help in understanding the model's flexibility in handling new, unseen handwriting styles.\\n\\n3. To understand the model's generalization capability, it is desirable to present results on out-of-vocabulary (OOV) text. OOV text refers to text lines that were not part of the training dataset. It is not clear if the model can successfully generate such text, and if the model can do so, then where in the paper are the results for out-of-vocabulary text provided?\\n\\n4. Figure 6 presents a table that highlights OCR results on the generated data, showing significant improvements in the last two lines. These improvements are attributed to the line-level and word-level losses. The architectural components responsible for calculating these losses are the Word-Level Discriminator and Line-Level Discriminator shown in Figure 4. However, there is not enough detailed explanation provided about these two components. The authors should offer a comprehensive description of these components, including their input, output, and intermediate tensor dimensions, for better understanding.\\n\\n5. What inputs are used for the line-level and word-level content discriminators, and what do they contain during loss calculation? 
if it is the final clean image, does generating it involve the full inference process with hundreds of steps, potentially adding significant overhead and increasing training time by several folds?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a method tailored for handwritten text-line generation. The proposed method contains a dual-head style module that\\ncaptures both vertical and horizontal writing styles. To make it work better, two-level content discriminators are introduced, aiming to supervise textual content at both the line and word levels while preserving style imitation performance. The authors conduct extensive experiments on two widely-used handwritten datasets to verify the effectiveness.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The authors focus on handwritten text generation in the wild. The work decomposes text-line content preservation across numerous characters into global context supervision between characters and local supervision of individual character structures.\\n\\n2. A lot of experiments are conducted to support the proposed method, which includes two widely-used handwritten datasets. \\n\\n3. The authors consider more baselines, which is effective and reasonable.\\n\\n4. The paper exhibits some good figures, which are clear.\", \"weaknesses\": \"1. I think the presentation in this paper is not good. For example, 'It is non-trivial to accurately capture writing styles from text-lines with multiple words, as it involves not only intra-word style patterns like letter connections and slant but also inter-word spacing and vertical alignment'; there are more sentences in this paper that are not readable.\\n\\n2. It is hard to follow the story. The main idea is not easy to grasp when I try to read both the introduction and the method sections.\\n\\n3. 
The authors use many long sentences, which are prone to errors. I would recommend that the authors use shorter sentences.\", \"questions\": \"The presentation is a big question.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces DiffBrush, a diffusion model for generating realistic handwritten text lines. The paper claims that DiffBrush improves on previous methods by capturing both vertical and horizontal writing styles and ensuring content accuracy through dual-level discriminators. Experiments show that DiffBrush generates style-consistent handwritten text lines, outperforming existing models in both visual quality and content readability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(1) This paper introduces a diffusion model for generating handwritten text lines that adeptly captures both vertical and horizontal writing styles, using dual-level discriminators to ensure content accuracy. However, the approach encodes handwriting style in terms of intra- and inter-word spacing, which is somewhat unusual. Additionally, the encoding of styles through space with column sampling and the design of the proxy anchor loss are unclear, as noted in the weaknesses section.\", \"weaknesses\": \"(1) The formulation of \\\\( L_{ver} \\\\) and \\\\( L_{hor} \\\\) as proxy anchor losses is somewhat unclear. These losses appear to assume uniform spacing between words; however, this spacing is often inconsistent for a given writer. For instance, in Figure 2(a) (top row), the space between the first two words differs from that between the last two words. Furthermore, if this focuses solely on word-to-word spacing, how is character-to-character spacing addressed?\\n\\n(2) The proposed method explicitly focuses on vertical and horizontal spacing to model handwriting style. 
However, handwriting style encompasses more than just spacing, including factors like writing speed and pressure, which are not considered in this paper. Additionally, handwriting generation could be approached as a sequence of strokes (online) rather than as static images (offline), an aspect that the paper does not address. A relevant paper can be found here: https://openreview.net/pdf?id=1ROAstc9jv.\\n\\n(3) The backgrounds of the generated handwriting samples are inconsistent (see Figs. 5-8 when zoomed in), with areas behind the characters appearing slightly darker than the spaces between words. This discrepancy is not clearly explained in the text, but it may be due to the stylization process capturing background elements from the style exemplars. For realistic synthesis, however, the background should be uniform. For example, if historical handwriting samples from datasets like BH2M (mentioned below) were used as references, the generated handwriting backgrounds would appear unnaturally varied, which would be unacceptable.\\n\\n(4) The qualitative results include only two styles, which are somewhat similar to each other. It would be interesting to consider handwritten lines from the BH2M dataset (http://dag.cvc.uab.es/the-historical-marriages-database/) as styling examples for greater diversity.\", \"questions\": \"(1) How do the authors envision extending the model to generate online handwriting?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a model for the generation of images of handwritten lines, conditioned on the style sample and the desired textual content.\\n\\nThe model is a diffusion-based GAN, with two discriminators that analyze the word and line levels, acting to improve the content and the style correctness. 
\\n\\nThe style extraction model specifically encodes the information about vertical and horizontal offsets between the words by learning the information extracted from either horizontal or vertical random patches from the style source image (thus guaranteeing by construction that the information that is learned would relate to horizontal components of the style, regardless of their vertical positioning, and vice versa).\\n\\nThe evaluation is performed on the CVL and IAM datasets, showing state-of-the-art performance in terms of CER/WER when recognizing the generated images, and the HWR metric (which is somewhat similar to FID)\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: While there have previously been works on the generation of images of handwritten lines, using GANs and diffusion models for generation of images of handwritten words, this is (one of the) first to combine all 3.\\nClarity & quality: The writing is clear, and there are some ablation studies to highlight the importance of the proposed components (namely the specific approach to the style extraction module and the need for two different discriminators). A human preference study is also performed, showcasing that this approach is preferred 150% compared to the next one.\", \"weaknesses\": [\"Measuring the effect of the proposed ideas. The paper proposes the style extraction model with a very specific set of biases, but the effect of it compared to a much simpler model that simply passes information from the style source image is not measured. Furthermore, the effect of the style model on the recognizability of the generated image seems very small as per Figure 6. Having more such ablations would strengthen the paper.\", \"Generalization of the model to other data. 
The IAM and CVL datasets both feature a fairly clear background and thin writing on top of it, making it hard to judge whether the model would generalize well for more difficult writing or backgrounds.\", \"Ease of reproduction. The model is fairly complex, with a custom style extractor, and reproducing the results might not be trivial. It would strengthen the paper to release the training code or the model.\"], \"questions\": \"Results in Table 1 suggest CER of 40+% for TS-GAN and CSA-GAN, suggesting that almost half of the characters should be unrecognizable in the samples generated for these models. However, when looking at all of the results presented in Figure 5, 12, 13, and 14, there don't seem to be any errors in the results generated by these methods. Can you explain why these CER numbers are so high? This also doesn't seem to match CER numbers reported in CSA-GAN (the OCR system from which is used by the authors for the reference), which are closer to 10% (https://arxiv.org/pdf/2204.05539).\\n\\nThe question above is my main concern about the experimental results; happy to upgrade the rating in case of a clear answer there, and to further increase it based on the suggestions in the \\\"weaknesses\\\" section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9fMNxWDZsP
Explainable Concept Generation through Vision-Language Preference Learning
[ "Aditya Taparia", "Som Sagar", "Ransalu Senanayake" ]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc because, unlike most other explainable AI techniques, they can be used to test high-level visual "concepts" that are not directly related to feature attributes. For instance, the concept of "stripes" is important to classify an image as a zebra. Concept-based explanation methods, however, require practitioners to guess and collect multiple candidate concept image sets, which can often be imprecise and labor-intensive. Addressing this limitation, in this paper, we frame concept image set creation as an image generation problem. However, since naively using a generative model does not result in meaningful concepts, we devise a reinforcement learning-based preference optimization (RLPO) algorithm that fine-tunes the vision-language generative model from approximate textual descriptions of concepts. Through a series of experiments, we demonstrate the capability of our method to articulate complex and abstract concepts that align with the test class and are otherwise challenging to craft manually. In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
[ "Concept based Explainable AI", "Vision-Language Models", "Reinforcement Learning" ]
Reject
https://openreview.net/pdf?id=9fMNxWDZsP
https://openreview.net/forum?id=9fMNxWDZsP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xrXNpG2ZM1", "txzmFdUmFO", "qO42t7q2CP", "qGmzyKWv5P", "nP9qvn3VF6", "m77LRQzmIg", "knmAQjyTzv", "hrrvg4xN8o", "hoTLLtWKG8", "g4nbRHKLgT", "eFuzZbSzNA", "eEol5R4wyQ", "a00aR1CxXn", "VUgSHMdGb5", "UZpr7wtIrq", "SOYOn9PLVr", "SD7Dc7CpB3", "RndWDxlBrh", "QrGUIHX5uh", "Q0ivaWWUZp", "MCLiDmgAVr", "JQki9ZCW2o", "ItPU42ZLav", "IHDD2OxsvP", "HgqNGjxlML", "G2bUvnw3c3", "EeyJhbps3b", "EDnkRc68zf", "DHZGg0hFNn", "CIX67XN76U", "AvnIlKMqiu", "91hQqNvqC9", "7HrocA0RVH", "6r1g1atjrU", "6o3zm49K8Q", "6cztfZXtzZ", "65c9CKn29T", "5D68mpEWwv", "2Ua3J7qtQP", "1qHG62xQ23", "0pW8S12lFm", "04iSWbAzyv" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732266965327, 1732922440664, 1732266655294, 1732274314768, 1737523711399, 1732402282738, 1733304282476, 1732647592585, 1730385755799, 1732569684261, 1732402853171, 1733304589286, 1730617465993, 1730693771881, 1732650176525, 1733944249306, 1732476723753, 1732391603076, 1732435514879, 1732477090814, 1732687279034, 1732266812051, 1732267695584, 1732269228223, 1730569065556, 1732928260706, 1732267432315, 1733304992439, 1732285994420, 1729604837113, 1732476978756, 1730703911778, 1732274357588, 1732267401263, 1733305408894, 
1732643683611, 1732476818178, 1732389539039, 1733304293838, 1732315601662, 1732391489926, 1732569604948 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_KnkW" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_D5KJ" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_5Zn5" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_5Zn5" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_D5KJ" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_tYuD" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_AFj3" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Area_Chair_V6Gt" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_TZBx" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_TZBx" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_AFj3" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_5Zn5" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_D5KJ" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_KnkW" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_tYuD" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Authors" ], [ "ICLR.cc/2025/Conference/Submission5516/Reviewer_D5KJ" ] ], "structured_content_str": [ "{\"comment\": \"## W2.\\nWe understand that the term \\\"novelty\\\" could be interpreted in multiple ways, and we appreciate the opportunity to clarify its intended meaning and significance in our work.\\n\\n**Novelty as in different concepts.** This is not what we originally meant but because of RLPO\\u2019s exploration strategy we indeed get different concepts (see Figure 6). Thank you for helping us identify this added advantage. \\nIf we read lines 285-288, \\u201cDifferent actions may result in different explainable states, reflecting various high-level concepts inherent to f(\\u00b7)....Also, it is possible for different actions to lead to the same explainable state.\\u201d - The first sentence tells about the novelty as in different concepts. The second sentence tells us if we have two semantically similar seeds (but the RL algorithm \\u201cinitially\\u201d does not know if they are similar), the RL algorithm will force SD to learn similar concepts. Therefore, we do not have to worry much about having semantically repeated seeds. \\n\\nPlease see the experiment below to validate this. \\nWe now ran the RLPO algorithm 3 times (i.e. 3 trials) for the same seed prompt set. During inference, we calculated the CLIP embedding similarity among the top 3 concepts (stripes, running, and mud for the zebra class - see Figure 6). The high Wasserstein and low cosine similarities indicate that the generations are not similar. 
A very high Hotelling's T-squared score (a generalization of the t-test to multiple dimensions) also indicates that the generated images are not from the same distribution (for reference, this score is 2.4 for stripe-stripe). \\n\\nInter-concept Comparisons for the \\u201czebra\\u201d class across three trials:\\n| Metrics | Stripes-Running Concept | Running-Mud Concept | Mud-Stripes Concept |\\n|-------------------------------|-----------------------------|-----------------------------|-----------------|\\n| Average Cosine similarity | 0.677 \\u00b1 0.010 | 0.699 \\u00b1 0.0004 | 0.734 \\u00b1 0.0004 |\\n| Average Wasserstein distance | 8.1533 \\u00b1 0.057 | 7.850 \\u00b1 0.022 | 7.480 \\u00b1 0.033 |\\n| Average Hotelling's T-squared score | 7598.507 \\u00b1 84.5 | 13069.681 \\u00b1 2147.81 | 7615.731 \\u00b1 538.06 |\\n| Are they from the same distribution? | No | No | No |\\n\\n**Novelty as in deviation from the test set.** We believe there is a misunderstanding here. We are not just generating concept images that are far away from test images. We are generating images that are far away from test images but \\u201cstill provide a high TCAV score.\\u201d This is a challenging constrained optimization problem that we address using deep RL and diffusion. As the other reviewers also correctly identify, we do not think developing an algorithm to do this is trivial. Please see the graph here in [Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/D5KJ/README.md) (and Table 4) for comparison. The comparison to prior work is not trivial. Retrieval-based methods directly rely on the dataset, in its simplest form \"cropping\" parts of existing images to produce explanations. While such an approach can highlight important features in input images that help non-expert users understand the network\\u2019s decisions, it is inherently limited to patterns present in the dataset. 
RLPO, on the other hand, explores beyond the dataset, generating concepts that trigger the network\\u2013unveiling vulnerabilities that help engineers fix issues in the neural network. \\n\\n## W3. \\nFrom an XAI perspective, the comparison might look unnecessary. However, since the major contribution of our work is developing an algorithm, given the popularity of LLM feedback and human feedback these days, it raises the question why we use XAI feedback. We wanted to highlight the infeasibility of using LLM and human feedback mechanisms in our framework. If the reviewer thinks this comparison is confusing to have in the main paper, we can move it to the supplementary material. Let us know.\\n\\n## W4.\\nThe key assumption in our work is that stable diffusion can generate realistic images given a prompt (and generative models will continue to grow in the coming years). If SD generates images that do not help us explain the output, RL will not optimize it further. Therefore, we do not see how this is a problem. Referring to Figure 5, the explanation of the tiger class progresses through levels of abstraction from high to low: the importance of zoos, followed by animals in zoos, then striped animals in zoos, and finally orange-and-black striped animals, and so on. At the highest level of abstraction, what this means is, images of zoos trigger for tigers whereas random abstract concepts such as, say, beach, do not trigger for tigers. This indeed helps engineers verify that the neural network, in this example, has learned the correct representation.\"}", "{\"title\": \"Official Comment by Reviewer KnkW\", \"comment\": \"Thank you for your detailed responses, which have partially addressed my concerns.\\n\\nI now understand better why RL is utilized as you model the trajectory as the sequence of prompt seeds entered into the model. 
But I'm still confused as you claim each action is a combination of prompt seeds (*the action space is not 20 seed prompts but the combination of the 20 seed prompts*). At least from your results, I don't think I see any experiments using multiple seeds simultaneously as one action in a step.\\n\\nBy showing the \\\"without RL\\\" results, do you mean the model is trained by iteratively entering prompt seeds or by using the selected prompt alone? My question on the RL design is consistent with my question on the design of \\\"sequential decision-making\\\", as it is straightforward if researchers just tune one SD model for each prompt seed. Though you claim that it takes \\\"2^20 * 30 = 182 years\\\" to tune the model without RL, what I have in mind is to tune one model for each seed independently (20 * 30min = 10h), which is close to the speed of RLPO and does not require complex decision making. Your response to my Q2 shows how RLPO makes decisions in a sequential way, but it does not really demonstrate the benefits of doing it. I think this is also a shared concern that has been raised by Reviewers AFj3 and D5KJ.\\n\\nMy last remaining concern is about the usefulness mentioned by Reviewers 5Zn5 and D5KJ, which is consistent with my motivation for raising W1 and Q1. As you mentioned, the primary objective of this research is to improve the user\\u2019s understanding of the model. Therefore, the interpretability of the generated images themselves should be very important. For example, are users really able to understand what concepts are represented by the generated images, such as those shown in Figures 13-17?\\n\\nDue to the issues mentioned above, I would keep my original score.\"}
While we appreciate the reviewer's recognition of the interesting questions posed and the novel directions explored in our work, we believe there are some key misunderstandings regarding the goals and contributions of the proposed method.\\n\\nSince there are various types of contributions such as frameworks, datasets, algorithms, insights, applications, etc. at ICLR, we would like to clarify that our paper is an algorithmic contribution paper. Please see our detailed answers below. \\n\\nPlease refer to this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/D5KJ/README.md)** where we have compiled detailed explanations and images/plots for better understanding.\\n\\n## W1. \\n**Are our goals misaligned with XAI?**\\nWe would like to clarify that the stated goals of our work are not misaligned with the broader goals of explainable AI (XAI). Our primary objective, consistent with XAI and what the reviewer states, is to improve the user\\u2019s understanding of the model. However, there is more nuance to this broad goal. The focus on \\\"novelty,\\\" which most existing methods cannot provide, is not a deviation from this goal but rather an added dimension to further this goal. While we appreciate the reviewer's perspective, we probably do not want to fix the goals of XAI as it\\u2019s an evolving field.\\n\\nUnlike in the most classical setting of XAI, where we want to come up with explanations of a particular decision that help a non-ML-expert user (e.g., say, a medical doctor), our goal is to provide insights to expert machine learning engineers or data scientists to identify what the neural network has learned (so that if there are any vulnerabilities they can fix them before deployment). As highlighted in recent discussions [1] and our paper (Figure 2), humans cannot understand everything a neural network has learned because they do not learn using the same concepts that we learn\\u2013their learning manifold is different to ours. 
If a human were to manually come up with a concept set, they are going to miss important concepts because they do not know about the neural network\\u2019s learning manifold, and therefore engineers cannot fix the vulnerabilities of the network. That is why we need a method to automatically probe millions of concept configurations and see what really matters. Since trying millions of configurations is not computationally feasible, we formulate a deep RL solution that generates concepts using diffusion. By doing so, we do not deviate from the original goal. \\n\\nWe believe the confusion happened because \\u201chuman\\u201d was used as an overloaded term, resulting in a perspective clash. In the classical TCAV setting, two groups of humans are involved: those who create concept sets offline (\\u201ccreator humans\\u201d) and those who utilize these concepts online (\\u201cuser humans\\u201d). When we state \\u201chumans\\u201d we are specifically referring to the creator humans, as our contribution is developing an algorithm for concept set creation/generation rather than the downstream application of an existing method. We argue that the concepts that the model indeed uses can be divided into two groups: concepts that creator humans can think of/retrieval methods can create and those they cannot (i.e., novel concepts). Our method can generate concepts from both groups. The latter group is more useful to debug models as shown in our concluding experiment (Figure 9).\\n\\nWe also believe that \\\"abstractness\\\" is an interesting byproduct of RLPO, as correctly identified by R1, R3, and R4. Abstraction levels provide a layered understanding of the model\\u2019s reasoning, revealing both high-level and low-level concepts that contribute to its decisions\\u2014something that, to the best of our knowledge, the XAI community has never seen before. \\n\\nWhile we like the utility metric introduced in [2], it does not help measure novelty. 
If we crop part of an image as a concept, the utility score will be high, though this concept is not novel at all. Our argument is that there are other patterns/concepts that trigger the neural network (the goal of this work) and identifying them is important to fix the issues. Please see Table 2 where we explain this gap between \\u201cwhat the human thinks\\u201d vs. \\u201cwhat the neural network has actually learned.\\u201d One motivation for closing this gap stems from our attempt to understand why neural networks perform badly for certain complex decision-making tasks.\\n1. Schut, Lisa, et al. \\\"Bridging the human-ai knowledge gap: Concept discovery and transfer in alphazero.\\\" arXiv preprint arXiv:2310.16410 (2023).\\n2. Colin, Julien, et al. \\\"What i cannot predict, i do not understand: A human-centered evaluation framework for explainability methods.\\\" Advances in neural information processing systems 35 (2022): 2832-2845.\"}", "{\"comment\": \"Thank you for identifying the novelty and appreciating the experiments. We hope the following explanations will clarify the queries for the reviewer.\\n\\nPlease refer to this [Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/AFj3/README.md) where we have compiled detailed explanations and images/plots for better understanding.\\n\\n## Wa. \\nTo compute TCAV scores, we need two concept image groups. The reason for random grouping is that, initially, we do not have TCAV scores for individual concept images. Random grouping is not problematic\\u2013it is an unbiased sampling method. This works because the images in the two groups are different, so the TCAV score of one group is slightly higher. Then we fine-tune stable diffusion to generate images similar to the group with the higher TCAV score. We regenerate two groups of images from the fine-tuned model and iterate this process. 
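As a purely illustrative sketch of this iterative two-group loop (not the actual RLPO implementation: stable diffusion is replaced by a 1-D Gaussian "generator" and TCAV by a toy closeness score; all names and constants below are our own placeholders):

```python
import random

random.seed(0)

def toy_tcav(group, target=1.0):
    # Stand-in for a real TCAV score: higher when the group's samples lie
    # closer to the pattern the classifier is sensitive to (scalar target).
    mean = sum(group) / len(group)
    return 1.0 / (1.0 + abs(mean - target))

def preference_step(gen_mean, n_images=16):
    # Generate a batch from the current "generator" (a Gaussian here),
    # split it randomly into two unbiased groups, score both, and shift
    # the generator toward the higher-scoring group (the "fine-tune" step).
    images = [random.gauss(gen_mean, 0.5) for _ in range(n_images)]
    random.shuffle(images)  # unbiased random grouping
    a, b = images[: n_images // 2], images[n_images // 2:]
    preferred = a if toy_tcav(a) >= toy_tcav(b) else b
    return 0.8 * gen_mean + 0.2 * (sum(preferred) / len(preferred))

mean = 0.0  # generator starts far from the target concept at 1.0
for _ in range(200):
    mean = preference_step(mean)
print(f"final generator mean: {mean:.2f}")  # drifts toward the target at 1.0
```

Even though each random split is unbiased, the slightly higher-scoring group is systematically closer to the target, so iterating the preference update pulls the generator toward the concept.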
Although the images in the initial groups are highly variable, over time the model learns how to generate images of a particular type/concept. \\n\\nWe can also think of a different analogy. This sampling step is somewhat analogous to rejection sampling and Metropolis-Hastings (M-H). In rejection sampling, we pick a sample from a proposal distribution, which is typically a uniform distribution, and reject it based on some criteria. Similarly, we generate two sets of images randomly and evaluate which one to reject. In M-H, we also compare two points (though they are an old and a new point). Through such iterative rejections and model updates, we can converge to the target distribution. \\n\\n## Wb.\\nWe would like to clarify that the action space is not just the 20 seed prompts but the combinations of the 20 seed prompts. Assuming 30 mins per run, it will take $2^{20} * 30 = 182$ years to do this if we brute-force. Since RL intelligently and dynamically picks which prompt combinations to use (and not use), RLPO takes only ~8 hours. Therefore, unlike a static ranking approach, our RL-based framework is much more pragmatic for handling unbounded generative models. The high epsilon case in Table 1 is somewhat similar (yet better) to brute-forcing through the seed prompts. \\n\\nTo see the quality of generated images with and without RL (seed prompt \\u201cstripes\\u201d for the zebra class after 300 steps), please see the images in this **[Anonymous Github](https://anonymous.4open.science/r/RLPO-E5C6/AFj3/README.md)**. \\n\\n## Wc.\\nWe agree that we should have clarified this point in the paper. We can still obtain explanations in real-time because TCAV can run in real-time. However, we agree that RLPO cannot create concept sets in real-time, mainly because of the diffusion fine-tuning step (RL is very fast). That said, we do not think there is a need to create concept sets in real time. 
For instance, if we apply TCAV for identifying a disease from an X-ray, we can create the concept set using a test set ahead of time before deployment, which will take a few hours, and then run TCAV in real-time. Hence, concept set creation is a one-time investment. In case of a long-term distribution shift in a particular application, we can keep adding concepts to the dictionary, if RLPO discovers anything new. Please also note that the traditional method of manually creating a concept set is not only slow and labor-intensive but can also miss important concepts.\\n\\n## Wd.\\nLet us explain with an analogy. Why does it snow on Mount Denali in Alaska? It could be due to its high elevation, its location in the Arctic, or the orographic effect\\u2014all valid explanations. Similarly, if an autonomous vehicle hits a pedestrian, why did it happen? Perhaps the pedestrian was occluded, the AV struggled to identify pedestrians wearing pants, or a reflection might have confused its sensors. \\n\\nIf engineers can obtain the range of reasons why a neural network triggers for a particular output, they can assess the vulnerabilities of the neural network and fix them. Humans cannot think of all these reasons because they do not understand the neural network\\u2019s learning process. That\\u2019s where our method shines. \\n\\nWhile generating multiple concepts is not mandatory, it provides an added advantage. In critical applications, relying on a single explanation can be risky, especially if it fails to capture the full scope of the model's behavior. A diverse set of concepts ensures that users and domain experts can explore multiple dimensions of the model's reasoning. These explanations could also potentially allow experts to learn from the model itself [1].\\n1. Schut, Lisa, et al. 
\\\"Bridging the human-ai knowledge gap: Concept discovery and transfer in alphazero.\\\" arXiv preprint arXiv:2310.16410 (2023).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for your response.\\n> Our primary objective, consistent with XAI and what the reviewer states, is to improve the user\\u2019s understanding of the model. However, there is more nuance to this broad goal. The focus on \\\"novelty,\\\" that most existing methods cannot provide, is not a deviation from this goal but rather an added dimension to further this goal. While we appreciate the reviewer's perspective, we probably do not want to fix the goals of XAI as it\\u2019s an evolving field.\\n\\nIf this is the primary objective, I still believe that the link between novelty, abstractness, etc... and improving the user's understanding is missing. I am not trying to \\\"fix the goals of XAI\\\", but in order to claim that these secondary properties are valuable to the primary objective, a clear effect, showing that the method improves human understanding, should be demonstrated. \\n\\n> As highlighted in recent discussions [1] and our paper (Figure 2), humans cannot understand everything a neural network has learned because they do not learn using the same concepts that we learn\\u2013their learning manifold is different to ours. If a human were to manually come up with a concept set, they are going to miss important concepts because they do not know about the neural network\\u2019s learning manifold and therefore the engineers cannot fix the vulnerabilities of the network. \\n\\nI have no problem with the goal of generating concepts that may be different than what humans come up with, this is one of the strengths of the work as I indicated. My concern is that the results in Figure 4 do not immediately strike me as useful for human understanding. 
If it were clear that they were useful, experiments showing that the explanations help human understanding would not have been as necessary. However, the results are, as you indicate, abstract. Thus, you are making a strong claim that this abstractness is a positive and not a negative for human understanding of the model. Unfortunately, there are no experiments that back this claim up. I still believe this work needs a clear demonstration of usefulness for explainability. \\nAlso, it's a reach to claim that this method surfaces vulnerabilities in networks without providing experiments that either exploit or defend against these vulnerabilities. \\n\\n> While we like the utility metric introduced in [2], it does not help measure novelty. \\n\\nI suggested this work because it directly measures usefulness, not novelty. The authors should convince the readers that novelty is worthwhile for explainability through experiments. This experiment may not be the best fit; for example, your work may be best suited for helping users understand failure modes on OOD images.\\n\\n> Our argument is that there are other patterns/concepts that trigger the neural network (the goal of this work) and identifying them is important to fix the issues. \\n\\nIt's known that \\\"strange\\\" patterns and concepts can trigger neurons; this fact is exploited in adversarial attack research. This work does not show adversarial attacks using the method, nor does it fix those attacks. This would also be an interesting contribution.\\n\\n-----\\n\\nIn summary, my primary concern remains unaddressed. I am not convinced that the applications you write about are possible with the method provided. As you show clearly in your work, your results are more novel and abstract than prior work. While I have no issues with measuring novelty and abstractness, I don't believe these traits can be used as a proxy metric for usefulness as a tool for explainability.\"}", "{\"comment\": \"Thank you for your response! 
We have clarified the reviewer\\u2019s questions on the RL optimization routine and the usefulness concerns raised by another reviewer below. We have also added new experiments to further bolster the latter. Since we have clarified all previous and new concerns, we sincerely hope the reviewer can reconsider the score.\\n\\n## Clarification on the seed prompts\\nApologies for the confusion. RL chooses one action at a time, but multiple actions will be used as the prompt for stable diffusion. For instance, at t=1, action={cat}, prompt={cat} and at t=2, action={water}, prompt={cat, water}, and so on. If the combination is not useful (based on TCAV), then the RL agent will decide not to proceed along this trajectory. Please see Appendix D.3.2 for a plot that shows the number of action combinations used over time. As a side note, please also note that stable diffusion generates a very different set of images for prompt={cat} at t=1 vs. at t=2, because at each timestep, stable diffusion is fine-tuned. Please also see **[Anonymous Github](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/Response%201/README.md)** to verify that updating on one seed prompt at a time without RL does not lead to meaningful explanations. \\n\\n## Clarification on the without-RL experiment and results:\\nWe demonstrate the benefits of the sequential approach both conceptually and experimentally. In our benchmark experiment on training the diffusion model without an RL agent, we fine-tuned the same diffusion model for each prompt by going through the seed list in a for loop. If we only had a few seed prompts, we could have trained separate diffusion models. But when we consider the multiple combinations of seeds, it is not feasible to have many diffusion models (if so, for N seeds, we need 2^N diffusion models). Therefore, we have only one diffusion model, and RL decides which optimization trajectories are not useful. 
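A minimal, purely illustrative sketch of this sequential mechanism (the seed list and the toy value function are our own placeholders, not from the paper; in RLPO the value signal comes from TCAV on images generated by the continually fine-tuned diffusion model):

```python
seeds = ["cat", "water", "stripes", "grass"]  # toy seed-prompt list

def toy_value(prompt):
    # Stand-in for the learned Q-function / TCAV reward; here it simply
    # favors any prompt combination containing "stripes" (illustrative only).
    return 1.0 if "stripes" in prompt else 0.1

prompt = []  # the prompt accumulates one chosen seed per RL step
for t in range(3):
    candidates = [s for s in seeds if s not in prompt]
    # Greedy selection; a DQN would learn these values from TCAV feedback.
    best = max(candidates, key=lambda s: toy_value(prompt + [s]))
    prompt.append(best)
    # In RLPO, stable diffusion is fine-tuned on the current prompt at every
    # step, so the same seed yields different images at different timesteps,
    # and low-value trajectories are simply never extended.

# With 20 seeds there are 2**20 possible subsets, which is why exhaustively
# fine-tuning one model per combination is infeasible.
print(prompt)  # "stripes" is chosen first because it has the highest value
```

The point of the sketch is that the agent searches the combinatorial subset space one seed at a time rather than enumerating all subsets.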
The GIF in [anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/Response%201/README.md) provides a visualization of this sequential process. As illustrated in the results ([anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/Response%201/README.md)), if we don\\u2019t use RL to optimize, we see that the stripes seed does not converge to a good-quality concept compared to the one obtained using RL for the same time budget. The main reason is that the RL agent learns over time which trajectories are worth optimizing and drops the less explainable trajectories. \\n\\n## Clarification on usefulness:\\nBelow we clarify the reviewer's misunderstanding of our primary objective and then present new experimental results. The primary objective of this research is to come up with novel concepts that trigger the neural network (i.e., understanding the neural network). \\n\\n**Our claim:** When considering automated concept set creation techniques (that are retrieval based), the proposed generative method can come up with novel concepts that trigger the neural network (unveiling such new patterns can help engineers fix models).\", \"experimental_evidence_to_support_the_claim\": [\"Quantitative: Table 4 shows that our method can generate new concepts.\", \"Qualitative: Figure 4 illustrates some results of Table 4.\", \"Human evaluation: Most humans cannot imagine that certain patterns would even trigger the neural network.\", \"We have recently updated the manuscript to clearly specify the contribution.\", \"Having said that, we have now conducted an additional human survey to measure the usefulness of the provided explanation. The experiment is twofold:\", \"Step 1: We first asked 19 ML engineers to choose relevant generated concepts for a Googlenet classifier to classify the zebra class, without telling them that all shown images are actual concepts. 
All the engineers selected the \\u2018stripes\\u2019 concept as the most important, while some also selected the \\u2018mud\\u2019 concept. But most missed the \\u2018running\\u2019 concept. This indicates that engineers cannot think of all the important concepts that activate the neural network.\", \"Step 2: Then we showed the engineers the concept-explanation mapping on a random input image (Figure 6 in the paper) and asked them if the provided explanation helped them understand the model better and if it provided new insights. More than 90% of the engineers agreed that the explanation helped in better understanding the neural network and around 84% agreed that it provided new insights about the neural network that they didn\\u2019t have previously. This result clearly suggests that the new concepts discovered by the proposed method help engineers discover new patterns that they did not imagine before. ([anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/Response%201/README.md))\"]}", "{\"comment\": \"Once again, thank you for taking the time to provide us with your constructive feedback on our submission. We understand that the reviewer may not have had the time to thoroughly go through the rebuttal yet. If there are any further clarifications or additional details that we can provide to address your queries, please let us know. We are more than happy to provide any additional information or explanations to ensure our approach is clearly conveyed.\"}", "{\"summary\": \"This paper introduces an algorithm for generating concept images to post-hoc explain a model. 
It hereby focuses on the use of the TCAV score, an RL setting, and stable diffusion.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper tackles an important topic, and I find the combination of previous approaches that the authors propose with their work intuitive and interesting.\", \"weaknesses\": \"Although I find the algorithmic approach interesting enough, I am unsure what the actual underlying goal is of the approach and how this is evaluated/evidenced in the experimental section. I.e., in the intro the authors mention \\\"Therefore, it is important to automatically find human-centric concepts that matter to the DNN\\u2019s decision-making process.\\\" So my understanding is that the ultimate goal is to provide concepts that are helpful for humans to understand the decision-making processes, but without the need for extensive concept data precollection. But I am missing a clear experiment to evaluate the usefulness of the generated concept images. Table 2 seems to be hinting at something along this line, but the details are not clear to me from the text. In any case, this should be one of the key evaluations to perform. Especially as the exemplary concept images look far too abstract for me to be able to understand what they should represent. Please point me towards the relevant sections in case this is missing.\\n\\nOverall, I am leaning towards accept, but would like the following points clarified first:\\n- What is the exact goal of the method?\\n- How do the experimental evaluations provide evidence for reaching this goal?\", \"minor\": \"Overall, the paper could benefit from a grammar check, e.g., \\\"also explains what type of features is the model focuses on.\\\" in line 426\", \"questions\": \"Maybe I missed it, but is there any information on the training time of the proposed algorithm? 
If not, I think this could be valuable information to provide at least in the appendix, and mentioned in the main text. The authors mentioned limitations, but it would be good to have real numbers.\\n\\nWhat's the difference between the concept images in Figures 6 and 5? In Figure 5 they look very abstract, whereas in Figure 6 they are high-quality images.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Ok, thank you, yes, this helps to clarify the difference. I suggest the authors incorporate such a clarification in the paper. Maybe adding a sentence both in the introduction and methods section would suffice, though some details on this in the preliminary section could be good too. I have raised my score with the trust that the authors do so.\"}", "{\"comment\": \"I'm not as familiar with RL and I find your responses adequate for Q1, Q2.\\n\\n\\n> Training the RLPO framework takes approximately 6\\u20138 hours for a particular class...\\n\\nThis is a lot of time spent to analyze a single class of a model. If you were to analyze all of ImageNet, this would result in 6000 hours (250 days). Once again, this strikes me as not particularly useful for most users unless the method can provide extremely valuable insights.\"}", "{\"comment\": \"Thank you for your response! Since we have clarified all previous concerns, and below we clarify the new concerns experimentally, we sincerely hope the reviewer can reconsider the score.\\n\\nThe proposed method indeed generates a diverse set of task-specific explanations. This is what a practical pipeline would look like. Consider a case where an autonomous vehicle hits a pedestrian. If we apply the proposed method, it will provide a diverse set of explanations (say, 5 explanations - typical XAI methods would provide only one). Then the analyzing human looks at each explanation to identify the real cause. 
If we did not have a diverse set of explanations, there is a good chance we would have missed the real cause. \\n\\nBelow we provide a concrete experiment that shows how diverse explanations help resolve issues in neural networks. \\nIn this experiment, we choose a pre-trained Googlenet classifier for the Tiger class whose important seed prompts were \\u2018orange black and white\\u2019, \\u2018orange and black\\u2019, and \\u2018blurry image\\u2019 with TCAV scores of 0.66, 0.66, and 0.62, respectively. Out of these seed prompts, \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 highlight the tiger pixels while the \\u2018blurry image\\u2019 seed prompt highlights the background pixels (see sample explanations in [anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/AFj3/Response%201/README.md)). What that means is, in order to classify a tiger, Googlenet looks at both the foreground and background. Now the engineers want the classifier to classify the tiger based on tiger pixels, not its background (note: from the classical Wolf-Husky example in LIME [1], we know the spurious correlation of background). \\n\\nTo this end, we generated 100 concept images based on concepts related to \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 using a separate generative model and fine-tuned our Googlenet model. Running RLPO on this fine-tuned model revealed that the model learned some new concepts such as \\u2018whiskers\\u2019 and also revealed that previous concepts such as \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 are now more important, with TCAV scores of 1.0 and 1.0, respectively. This means that the classifier is now only looking at tiger pixels, not the background. 
(see dataset samples and shift plot in [anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/AFj3/Response%201/README.md)).\\nThis experiment clearly shows how the proposed method can be used to correct a neural network\\u2019s undesirable behavior.\\n\\n1. Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \\\"Why should I trust you?: Explaining the predictions of any classifier.\\\" Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.\"}", "{\"summary\": \"The authors propose a new algorithm, Reinforcement Learning-based Preference Optimization (RLPO), designed to generate high-level visual concepts that explain the decisions of DNNs. Unlike traditional concept-based explanation methods (such as TCAV), RLPO creates sets of concept images, eliminating the need for manual concept image collection, thus making the process of explaining DNN decisions more efficient and generalizable. RLPO fine-tunes a stable diffusion model with preference optimization to generate images that effectively explain neural network decisions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea is very innovative, and the writing is clear. By using a generative model instead of traditional manual concept collection, it reduces human intervention and improves efficiency. RLPO can generate concepts at different levels of abstraction, offering a more detailed explanation of the DNN\\u2019s internal decision-making process.\", \"weaknesses\": \"In some sections (such as the algorithm explanation and mathematical proofs), the descriptions appear overly lengthy, which might hinder readers' understanding. The description of how RLPO is applied in sentiment analysis tasks is somewhat vague, and further detailing the specific steps of this experiment could be helpful. 
Certain terms (such as \\u201cconcept generation\\u201d and \\u201cconcept extraction\\u201d) are defined and used inconsistently throughout the paper, which could lead to confusion.\", \"questions\": \"Same as weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Concept-based explanation methods typically require practitioners to guess and gather multiple candidate concept image sets, a process that can be imprecise and labor-intensive. To address this challenge, this paper redefines the creation of concept image sets as an image generation problem. To this end, in this work, the authors introduce an RL-based preference optimization algorithm that fine-tunes the vision-language generative model using approximate textual descriptions of concepts. They also conduct extensive sets of experiments to demonstrate that the proposed method effectively articulates complex and abstract concepts that align with the target class, which are often difficult to create manually.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow. The problem definition is novel and interesting. The paper effectively reframes the generation of concept image sets as an image generation problem, providing a novel perspective that addresses the limitations of traditional concept-based explanation methods. This shift enhances the efficiency and effectiveness of generating meaningful concepts.\\nThrough a series of well-designed experiments, the paper demonstrates the capability of the proposed method to generate complex and abstract concepts that align with the target class. 
This empirical evidence strengthens the paper's contributions.\", \"weaknesses\": \"While the authors demonstrate the effectiveness of their proposed method through both qualitative and quantitative analyses, several aspects of the framework are not thoroughly justified. For instance:\\n\\n(a) The concept sets generated using SD+LoRA are randomly divided into two groups. Why is \\\"random\\\" grouping considered optimal? Could this be problematic?\\n\\n(b) The use of reinforcement learning (RL) is also not justified. Is it really necessary to implement an RL policy for the purpose mentioned in the paper? Why not simply calculate the TCAV score for each possible seed prompt k_t, and then fine-tune the SD+LoRA weights based on the seed prompt that yields the highest TCAV value?\\n\\n(c) Incorporating each component of the proposed framework increases computational demands and time constraints. Consequently, such an analysis may not be feasible in real-time, as generating images with Stable Diffusion, training the DQN-based RL policy, and fine-tuning the Stable Diffusion model with preferences all require significant processing time. Is this complex pipeline truly necessary?\\n\\n(d) I also struggle to see a practical significance for the proposed problem. Why is it important to generate a diverse set of concept images at all? It seems more crucial to generate concepts that directly explain the task at hand.\\n\\n(e) Furthermore, why choose DQN? As we continuously update the LoRA weights based on the TCAV preference score, the underlying RL environment becomes non-stationary. This means that the same action taken by an RL policy could lead to different reward values at different times. 
How can this issue be addressed?\", \"questions\": \"Please see my comments on the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Once again, thank you for taking the time to provide us with your feedback on our submission. We understand that the reviewer may not have had the time to thoroughly go through the rebuttal yet. If there are any further clarifications or additional details that we can provide to address your queries, please let us know. We are more than happy to provide any additional information or explanations to ensure our approach is clearly conveyed.\"}", "{\"metareview\": \"Scientific Claims and Findings:\\n\\nThis paper introduces a novel algorithm called Reinforcement Learning-based Preference Optimization (RLPO) to automatically generate visual concepts that explain the decisions of deep neural networks (DNNs). The algorithm addresses the limitation of traditional concept-based explanation methods (like TCAV) which require manual collection of concept images. The authors claim that RLPO generates more novel and abstract concepts compared to traditional methods, offering a detailed understanding of the DNN's decision-making process. They also suggest that RLPO is generalizable to non-vision domains, specifically NLP tasks.\", \"strengths\": \"The paper is well-written and easy to follow. \\u00a0 \\n\\nThe proposed method is novel and addresses a significant limitation in existing XAI methods. \\u00a0 \\n\\nThe use of a generative model to create concept sets is innovative and reduces human intervention. \\u00a0 \\n\\nThe method can generate complex and abstract concepts, offering a detailed explanation of the DNN's decision-making.\", \"weaknesses\": \"Some reviewers found the explanations of the algorithm and mathematical proofs to be overly lengthy. 
\\u00a0 \\n\\nThe practical significance of generating a diverse set of concept images wasn't immediately clear to all reviewers. \\u00a0 \\n\\nThe necessity of using RL and the complexity of the pipeline were questioned. \\u00a0 \\n\\nThe paper could benefit from a grammar check and consistent use of terms.\", \"missing_elements\": \"Clear and concrete examples of how the generated concepts can be used to debug or improve DNNs.\\n\\nA more thorough justification for design choices, such as the use of RL and DQN.\\n\\nA clearer explanation of how the method is applied in sentiment analysis tasks.\", \"reasons_for_rejection\": \"The paper does not adequately demonstrate the practical usefulness of the proposed method for explaining network behavior. \\u00a0 \\n\\nThe experiments focus heavily on novelty and abstractness, but their connection to improving user understanding is not well established. \\u00a0 \\n\\nThe paper lacks a key experiment directly measuring the usefulness of the generated concepts for explaining model decisions.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal period involved extensive discussion on several key points:\", \"practical_usefulness_and_novelty\": \"Reviewers questioned the practical significance of the method and the meaning of novelty in the context of XAI. The authors clarified the notion of novelty as the ability to generate concepts that are beyond what humans can typically conceive, emphasizing its importance in identifying model vulnerabilities. They also provided additional experiments to demonstrate the usefulness of the method in debugging and improving DNNs.\", \"necessity_of_rl\": \"The necessity of using reinforcement learning (RL) in the framework was questioned, with suggestions to use simpler methods like brute-forcing seed prompts. 
The authors defended the use of RL by highlighting its efficiency in exploring the vast space of prompt combinations, arguing that brute-forcing would be computationally infeasible. They also presented experimental results comparing the performance of RLPO with and without RL, demonstrating the benefits of their approach.\", \"abstraction_levels\": \"The concept of abstraction levels generated by the method was discussed, with reviewers seeking clarification on its meaning and relevance to XAI. The authors explained that abstraction levels provide a layered understanding of the model's reasoning, revealing both high-level and low-level concepts that contribute to its decisions. They acknowledged that while not mandatory for every use case, abstraction levels offer an additional layer of insight into the model's behavior.\", \"human_understanding\": \"The ability of humans to understand the generated concepts and their role in improving user understanding was a key point of discussion. The authors clarified the distinction between two types of humans involved in XAI: those who create concept sets and those who use them. They emphasized that their method targets the former, automating the process of concept creation, and that the generated concepts are indeed understandable to the latter. 
They also presented results from a human survey to support their claim that the generated concepts improve user understanding.\", \"weighing_in_on_the_rebuttal\": \"The authors' clarifications on novelty and the additional experiments provided valuable insights, but they did not fully alleviate concerns about the practical usefulness of the method.\\n\\nThe defense of RL usage was convincing, especially with the experimental comparison.\\n\\nThe explanation of abstraction levels was helpful, but their practical relevance to XAI remains somewhat unclear.\\n\\nThe distinction between different types of humans in XAI was insightful, but the human survey results were not entirely convincing in demonstrating a significant improvement in user understanding.\\n\\nOverall, the rebuttal addressed some of the initial concerns but did not fully resolve the primary issue of demonstrating the practical usefulness of the method for XAI.\"}", "{\"comment\": \"We thank the reviewer for going through our response. To summarize our response, we address several misunderstandings: we clarify that the human survey effectively measures usefulness, that run-time is not a limiting factor, and we resolve the reviewer\\u2019s confusion regarding the meaning and application of abstractions.\\n\\nWe would also highly appreciate it if the reviewer could suggest concrete and correct experiments that are feasible and fair to fit in a standard ML methods paper. \\n\\nWe by no means argue that retrieval methods such as CRAFT are bad in all aspects\\u2013they have their own merits, such as being simple and fast. In fact, we thought of using retrieval methods as potential priors for the generative model. As clearly shown in Figure 2, this paper aims to expand the set (from orange to blue). In order to expand (in our case, generate novel concepts), we have to sacrifice something (in our case, runtime is higher than for retrieval methods, as highlighted under limitations). 
This expansion is what the reviewer identifies as \\u201cyour work may be best suited for helping users understand failure modes on OOD images.\\u201d\\n\\n> >Our primary objective, consistent with XAI and what the reviewer states, is to improve the user\\u2019s understanding of the model. However, there is more nuance to this broad goal. The focus on \\\"novelty,\\\" which most existing methods cannot provide, is not a deviation from this goal but rather an added dimension that furthers it. While we appreciate the reviewer's perspective, we probably do not want to fix the goals of XAI as it\\u2019s an evolving field.\\n\\n> If this is the primary objective, I still believe that the link between novelty, abstractness, etc... and improving the user's understanding is missing. I am not trying to \\\"fix the goals of XAI\\\", but in order to claim that these secondary properties are valuable to the primary objective, a clear effect, showing that the method improves human understanding, should be demonstrated.\\n\\nIn machine learning conferences, different papers have diverse contributions such as methods, frameworks, datasets, applications, theory, etc. As we highlighted, this is a methods paper. Therefore, we focused on the aspects of the algorithm (e.g., why RL is a better choice) and conducted extensive experiments to validate it. We hope the reviewer agrees that it is unfair to ask us to deploy a model in a real-world application to assess the downstream usefulness (unfortunately, this is the only proper way to assess real usefulness), as it requires significantly more effort and is typically beyond the contributions of a methods paper (99% of ML papers do not have such deployment usefulness assessments), though it might be fair to ask in a journal paper. Below we discuss why our human survey indeed measures usefulness. If there is a concrete, yet feasible, experiment that the reviewers can suggest, we would be happy to conduct it. 
Unfortunately, the authors find the reviewer\\u2019s request rather vague.\"}", "{\"comment\": \"## Do concepts explain the model?\\nWe believe the philosophical question of whether the concepts really explain the model is valid for all concept-based techniques. If a human collects a concept set, is it guaranteed that another human will perceive the same pattern? Not necessarily, as cognitive biases can influence interpretation. If a retrieval method crops and collects some parts of images, are they always understandable to humans? We believe the same argument holds for generated concepts as well. \\nThe concepts our method generates are explainable by design, since RLPO iteratively fine-tunes the diffusion model to generate images with high TCAV scores. Additionally, to further verify whether these generated concepts are indeed explaining the neural network, we performed the classical c-deletion experiment. As shown in Fig. 8, we see that gradually removing these concepts from the input leads to a drop in average class accuracy. If these were irrelevant concepts, we would not have seen such a drop.\\n\\nThe user human would see something similar to Figure 6 in the manuscript (see [Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/5Zn5/Responce%201/README.md)). For a zebra test point, the user will see three sets of concept images shown on the right, with the most important at the top. The red set (highest TCAV) highlights stripes, and on the left, it shows which parts of the test image these concepts are most relevant to. The other two sets provide supplementary explanations. The green set illustrates a green wooded background (note that Stable Diffusion\\u2019s seed prompt for this was \\u201crunning\\u201d, but RLPO fine-tuned it to generate such backgrounds\\u2014seed prompts are used only for our analysis and are not important for the user human). The blue concept set depicts a brown background with some green on the horizon. 
These supplementary explanations primarily describe the background. Consolidating everything: while the most important concept for the network to classify a zebra is black-and-white stripes, its habitat, including a brown background and green trees, also contributes to the classification.\\n\\n\\u201cTo this end, we leverage the state-of-the-art text-to-image generative models to generate high quality explainable concepts.\\u201d What we meant by high quality is the quality of the images generated by the model. We can rephrase and tone it down. We acknowledge that our perspective was rooted in the creator human\\u2019s mindset, as we were on a quest to automate the concept set creation process. As highlighted in the concluding experiments of Section 4.5, we truly hope our work will 1) help engineers debug issues in neural networks (Figure 9) and 2) make the use of TCAV easier in downstream applications (Figure 10).\"}", "{\"comment\": \"Many thanks for thoroughly answering my comments. It answers my questions (and especially the last experiment is much appreciated). I am reading the responses by the other reviewers.\"}", "{\"comment\": \"> > Training the RLPO framework takes approximately 6\\u20138 hours for a particular class...\\n\\n> This is a lot of time spent to analyze a single class of a model. If you were to analyze all of ImageNet this would result in 6000 hours (250 days). Once again, this strikes me as not particularly useful for most users unless the method can provide extremely valuable insights.\\n\\nThank you for pointing this out. We, however, believe that there is a misunderstanding here. The delay is due to the current speed of generative models, which are expected to be significantly faster in the future [1]. Additionally, the reported time is for the most standard fine-tuning process (LoRA) on a small-scale workstation GPU. If faster versions of LoRA are used on GCP/AWS, run-time is not a significant bottleneck. 
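As a back-of-envelope illustration of this one-time-cost argument (a sketch, not from the paper; the per-class hours come from the discussion above, while the class counts below are our own illustrative assumptions):

```python
# Back-of-envelope: one-time concept-set creation cost per application.
# The ~6-8 h/class figure is from the discussion above; class counts
# are hypothetical examples, not numbers from the paper.
HOURS_PER_CLASS = 8.0  # upper bound of the reported per-class time

def total_hours(num_classes: int) -> float:
    """One-time wall-clock cost to build concept sets for an application."""
    return num_classes * HOURS_PER_CLASS

print(total_hours(5))     # a small deployed classifier head: 40.0 hours
print(total_hours(1000))  # all of ImageNet: 8000.0 hours, hence impractical
```

The point is simply that the cost scales linearly with the number of classes, which is small in most deployed applications.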
\\n\\nOn a different note, ImageNet is a standard dataset, not an application. When neural networks are adapted to real-world applications, though the network is pre-trained on ImageNet, the last few layers of the network are removed and fine-tuned for the actual number of classes. In most applications (e.g., medical image analysis, autonomous driving, etc.), the number of classes that the decision-making module has to deal with is relatively low. \\n\\n1. Chadebec, Clement, et al. \\\"Flash diffusion: Accelerating any conditional diffusion model for few steps image generation.\\\" arXiv preprint arXiv:2406.02347 (2024).\"}", "{\"comment\": \"We thank the reviewer for participating in the discussion. We have updated the paper and further clarified the contribution in the revised version.\"}", "{\"comment\": \"We appreciate your thoughtful review, which acknowledges the originality, experiments, and clarity of our approach. Please see our response below. Please refer to this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/README.md)**, where we have compiled detailed explanations and images/plots for better understanding.\\n\\n## W1. \\n**Are generated images not representative of the seed prompt?**\\nWe would like to clarify a potential misunderstanding that the generated images should look like the seed prompt. With the example described in the paper (Figure 5), we wanted to show that there are millions of very diverse images that can be generated from the seed prompt (in this case, \\u201czoo\\u201d). However, because of RLPO, the diffusion model learns to generate only images that explain the network\\u2019s decisions. Figure 5 demonstrates this process. Please observe that the seed prompt \\u201czoo\\u201d becomes more animal-like at t=10, then the animals get more stripes at t=20, then colors appear at t=30, and so on. 
*We have now added this description in the paper.*\\n\\n**Are images unrealistic?**\\nNeural network classifiers can produce the same output for different input patterns/concepts. Some of those concepts might be expected, whereas others are unexpected or unrealistic. As we argue with reference to Figure 2, we want RLPO to find concepts that the human might not think of. Therefore, the ability to reveal unrealistic concepts that give a high TCAV score is exactly what we wanted (note that the image quality does not degrade in this case, as the TCAV score is higher and the Aesthetics score > 3). Having said that, for the specific case in Figure 5, please note that t=30 is not the final image\\u2014we just showed timesteps until the class label flips to tiger.\", \"aesthetics_scorer\": \"https://github.com/discus0434/aesthetic-predictor-v2-5?tab=readme-ov-file\\n\\n## W2.\\n**What\\u2019s the necessity of RL?**\\nUsing only GPT was indeed our first attempt, which proved to be highly inefficient. \\nNote that the action space is not the 20 seed prompts but the combinations of the 20 seed prompts. Assuming 30 minutes per run, brute-forcing all 2^20 combinations would take 2^20 * 30 minutes, roughly 60 years. Since RL intelligently and dynamically picks which prompt combinations to use (and not use), RLPO takes only ~8 hours. Therefore, unlike a static ranking approach, our RL-based framework is much more pragmatic for handling unbounded generative models. The high-epsilon case in Table 1 is somewhat similar to (yet better than) brute-forcing through the seed prompts. \\nTo see the quality of generated images with and without RL (seed prompt \\u201cstripes\\u201d for the zebra class after 300 steps), please see the images in this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/README.md)**. \\n\\n## W3.\\n**Is computational complexity an issue?**\\nWe agree that we should have clarified this point in the paper. 
We can still obtain explanations in real time because TCAV can run in real time. However, we agree that RLPO cannot create concept sets in real time, mainly because of the diffusion fine-tuning step (the RL itself is very fast). That said, we do not think there is a need to create concept sets in real time. For instance, if we apply TCAV to identifying a disease from an X-ray, we can create the concept set using a test set ahead of time, before deployment, which will take a few hours, and then run TCAV in real time. Hence, concept set creation is a one-time investment. In case of a long-term distribution shift in a particular application, we can keep adding concepts to the dictionary if RLPO discovers anything new. Please also note that the traditional method of manually creating a concept set is not only slow and labor-intensive but can also miss important concepts.\\n___\"}", "{\"comment\": \"We thank the reviewer for the constructive comments and for finding the method interesting. We have addressed the concerns in the weaknesses and questions as follows.\\n\\nPlease refer to this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/5Zn5/README.md)** where we have compiled detailed explanations and images/plots for better\\u00a0understanding.\\n\\n## W.\", \"to_summarize\": \"**Our claim:** When compared with automated concept set creation techniques (which are retrieval based), the proposed generative method can come up with novel concepts that trigger the neural network (unveiling such new patterns can help engineers fix models).\\n\\n**Experimental evidence to support the claim:**\\n1) Quantitative: Table 4 shows that our method can generate new concepts. \\n2) Qualitative: Figure 4 illustrates some results of Table 4. \\n3) Human evaluation: Most humans cannot imagine that certain patterns will even trigger the neural network. \\n\\nLet us elaborate on this. 
\\n\\nTo assess the concepts' relevance to the model, we evaluate the method\\u2019s success through multiple metrics, as reflected in the experiments section. Table 4 provides quantitative evidence for the relevance of the generated concepts by measuring TCAV scores. Higher TCAV scores indicate that the discovered concepts are aligned with the model\\u2019s internal representations, confirming their significance for decision-making. This shows that the generated concepts \\\"matter\\\" to the model. We use metrics such as cosine similarity and Euclidean distance to show that the generated concepts are novel\\u2014generated concepts are farther away from the class images, indicating that concepts generated by our method are not a subset of the class data, as they are in retrieval methods. \\n\\nWe also conducted a human experiment (results shown in Table 2), where, when asked to identify relevant concepts among generated concepts, retrieved concepts, or both, most volunteers, laymen and experts alike, mostly picked retrieved concepts, even though both were equally important to the network. This experiment indicates that it is not easy for humans to imagine concepts by themselves, as the neural network\\u2019s learning process is different from ours.\\n\\nFurthermore, to help humans understand what each relevant concept represents, we made use of ClipSeg to identify the intersection between generated concept images and test images. Figure 6 highlights the regions each relevant concept represents in the test image.\\n\\n---\\n\\n## Q1.\\nFor the experiments presented in the paper, the RLPO framework with DQN+Diffusion typically requires approximately 8 hours per class to train on a machine equipped with an NVIDIA RTX 4090 GPU, with the most computationally intensive step being the iterative fine-tuning of the generative model. This concept set creation is a one-time investment for an application of interest. Evaluating TCAV is essentially real-time once we have the concept set. 
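The distance-based novelty check described above (comparing generated concepts with class images in an embedding space) can be sketched as follows. This is an illustrative numpy re-implementation, not the paper's code; the embeddings below are random placeholders standing in for, e.g., CLIP features.

```python
import numpy as np

def avg_cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Mean pairwise cosine similarity between two sets of embeddings
    (one embedding vector per row)."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a_n @ b_n.T).mean())

def avg_euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean pairwise Euclidean distance between two sets of embeddings."""
    diffs = a[:, None, :] - b[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).mean())

rng = np.random.default_rng(0)
class_embeds = rng.normal(size=(50, 512))    # placeholder "class image" features
concept_embeds = rng.normal(size=(20, 512))  # placeholder "generated concept" features

# Lower similarity / larger distance to the class set suggests the
# generated concepts are not a subset of the class data.
print(avg_cosine_similarity(class_embeds, concept_embeds))
print(avg_euclidean_distance(class_embeds, concept_embeds))
```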
We will include this discussion in the revised version of the paper.\\n\\n## Q2. \\nThe images in Figure 6 are the final outputs of the RLPO framework\\u2013this is what most people want as explanations. They are the lowest level of abstraction in explanations. However, if someone wants higher levels of abstraction in explanations, as shown in Figure 5, they can also obtain them. In Figure 5, the explanation of the tiger class progresses through levels of abstraction from high to low: the importance of zoos, followed by animals in zoos, then striped animals in zoos, then orange-and-black striped animals, and so on.\"}", "{\"summary\": \"The paper proposes to treat concept set creation as an optimisation problem. By means of deep reinforcement learning, it iteratively refines the concepts obtained using the Stable Diffusion generative model, so that one can generate concepts instead of retrieving them from existing data. They also introduce concept generation with respect to an abstraction level. 
The authors also explore analysing the concepts during fine-tuning and applying the method to NLP problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Novelty: the work builds upon existing methods in interpretability, such as concept based explanation, but contains a set of novel ideas, including (1) the problem statement and the method for optimisation of the concept set through a reinforcement learning technique and LoRA, and (2) the idea of using such concept set generation to produce explanations for different abstraction layers\", \"clarity\": \"the paper is clearly written and well presented.\", \"motivation\": \"The authors give a strong motivation for the proposed method: the retrieved data might not fully describe the concepts embedded into the model, and the optimisation of such concepts using generative models and RL can be a promising alternative\", \"significance\": \"I think this work is significantly improving the available options for explaining visual (and potentially language) models and therefore I believe it is significant enough.\", \"correctness\": \"I checked the paper and I believe the claims and the maths are correct\", \"reproducibility\": \"as far as I can see, taking into account both the paper and the code, the experiments look reproducible to me.\", \"weaknesses\": \"Clarity: there are a few questions regarding the limitations of this work (see below in the questions section)\", \"questions\": \"Questions:\\n1. \\u201cAs a specific application, we see what concepts are removed and added, as well as how the concept importance changes when we fine-tune ResNet50 model on ImageNet to improve accuracy\\u201d If we want to track the evolution of concepts during the training procedure, do we need to retrain the concepts every time we update the model, or is there any workaround which would help track the concepts during the fine-tuning process? 
Either way is fine; however, it might be good to clarify this in the paper or in the appendix. \\n2. Figure 9 should have the x axis annotated (I understand it represents the concept identifier)?\\n3. One of the concerns might be the lack of a systematic quantitative comparison with retrieval-based explanations. Is there any possibility to compare the explanations numerically with state-of-the-art prototypical retrieval-based explanation methods, such as CRAFT (Fel et al., 2023)?\\n4. I think there is one important limitation of the proposed method which I would like the authors to cover in the paper: the explanations also, in a way, depend upon the inner workings of the generative model. Imagine, for example, that the generative model has mode collapse and does not represent the whole set of patterns available to the model we explain. In this case, it would lead us to obtaining the best possible explanation amongst the suboptimal ones, which may lead to the explanations not representing some aspects of the model\\u2019s inner workings. In a safety-critical application, that mode collapse might correspond to some anomalous event, which would then not be covered by the set of explanations achievable by the model and therefore would not allow us to understand the reasons behind the model's behaviour. A variation of this problem is the problem of explanation when the generator only offers poor-quality image output (for whatever reason, e.g. it does not cover some particular concept). To make it clear, I understand that there is also a counterpart to this limitation in the case of standard, retrieval-based, prototypical explanation: there might not be such a piece of data which would closely match the phenomenon. It is therefore, in one way or the other, a limitation of many post hoc explanation models. I wonder if the authors agree with such a limitation, and in any case I would ask them to include the discussion. \\n5. 
In relation to that, perhaps not mandatory, but another idea for an experiment: how would the numerical performance of the algorithm, e.g. Figure 8, compare for different image generators? Does a GAN model, which is prone to mode collapse, result in worse C-insertion metrics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the clarifications. However, I am still not convinced about its practical significance. Each example (such as Why does it snow on Mount Denali in Alaska? or Why did an autonomous vehicle hit a pedestrian?) demands a task-specific explanation that is both tailored to the particular requirements of the task and diverse in its approach. It is not enough to provide a generic variety of explanations; the diversity must directly align with and enhance the task-specific goals. This distinction ensures that explanations are not only varied but also relevant and meaningful in the context of the task.\\n\\nHow does the current approach succeed in generating this level of diversity within task-specific explanations?\"}", "{\"comment\": \"## Q1.\\n**Are explanations faithful?**\\nGood point. Compared to a random search algorithm, RL dynamically rewards updates that give a high TCAV score. Therefore, conceptually, we do not expect RLPO to generate totally different concept sets every run. We have now run the RLPO algorithm 3 times (i.e. 3 trials) for the same seed prompt set. During inference, we calculated the CLIP embedding similarity between trials for the top three seed prompts (stripes, running, and mud for the zebra class - see Figure 6). The low Wasserstein distances and high cosine similarities indicate that the generations are similar. 
A low Hotelling's T-squared score (a generalization of the t-test to multiple dimensions) also indicates that the generated images are from the same distribution (for reference, this score is 7488.86 for stripes-running).\\n\\nInter-trial Concept Comparisons for \\u201czebra\\u201d class across three trials:\\n| Metrics | Stripes-Stripes concept | Running-Running concept | Mud-Mud concept |\\n|-----------------------------|-------------------------|--------------------------|-----------------|\\n| Avg. Cosine similarity | 0.996 \\u00b1 0.0008 | 0.997 \\u00b1 0.0004 | 0.997 \\u00b1 0.0004 |\\n| Avg. Wasserstein distance | 0.955 \\u00b1 0.074 | 0.828 \\u00b1 0.074 | 0.823 \\u00b1 0.068 |\\n| Avg. Hotelling's T-squared score | 2.462 \\u00b1 0.091 | 2.365 \\u00b1 0.081 | 2.663 \\u00b1 0.229 |\\n| Are they from the same distribution? | Yes | Yes | Yes |\\n\\n## Q2. \\nIn our approach, SD is fine-tuned iteratively: generating a set of images, assessing their relevance, and then refining the model to improve alignment with explainability objectives. This sequential framework allows the model to adaptively optimize towards explanations by building upon the outcomes of previous steps.\\nEach step in the sequence informs the next, ensuring that the fine-tuning process converges towards increasingly meaningful and interpretable concept representations. This dynamic adjustment is essential for steering SD toward generating high-quality explanations that progressively align with the target class, rather than relying on static, one-shot methods that lack adaptability. Please see the animation we posted in the **[Anonymized GitHub](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/README.md)**.\\n\\n## Q3. \\nYes, fine-tuning one RLPO framework with DQN+Diffusion per class is necessary. This is because each class may have unique features and abstractions that require tailored exploration to generate meaningful and interpretable concept representations. 
The RLPO framework ensures that the generated concepts align closely with the specific nuances of the class as learned by the model, which a generic approach cannot achieve.\\n\\n\\u201cAn untuned diffusion model and prompt LLMs\\u201d is exactly what is behind most VLMs such as GPT-4 (it calls DALL-E 2, which is based on a diffusion model). In other words, the question is more akin to asking why we don\\u2019t use GPT-4 to generate images directly. Finding a single long, descriptive prompt that effectively encapsulates the target class's abstractions is highly challenging, if not impossible, except for something obvious such as \\u201cstripes.\\u201d Therefore, even in that case we would have to develop an automated prompt engineer using RL. From our experience, it is much more efficient and controllable to fine-tune Stable Diffusion than to develop an automated prompt engineer.\\n\\n## Q4. \\nThank you for catching this oversight in the appendix. We have rectified this in the revised paper. $t_{\\\\eta}$ refers to the time at which the system reaches an explainable state. After this point, though we can still fine-tune SD, the TCAV score does not change significantly.\"}", "{\"comment\": \"Thank you for the response! The reviewer\\u2019s new understanding of the claims is correct. Thank you for acknowledging that the paper has successfully demonstrated its claims.\\n\\n**Since we have clarified all of the previous concerns and the new concerns below through new experiments, we sincerely hope the reviewer can reconsider the score.** As the reviewer can also see, we honestly put a lot of extra effort into rebutting 6 reviewers, mostly because this is not a classical XAI-style paper. We are sure that the reviewer also thinks that the contribution of the paper, the amount of experiments we have run, and the rebuttal are worth updating the score. Please see our answers below. \\n\\nThe results presented in Figure 9 of the paper highlight a potential use case of the proposed method. 
It shows how concepts shift with fine-tuning and the proposed method\\u2019s ability to detect these new concepts. \\n\\nTo further showcase the usefulness, we conducted an additional experiment. In this experiment, we chose a pre-trained GoogLeNet classifier for the Tiger class, whose important seed prompts were \\u2018orange black and white\\u2019, \\u2018orange and black\\u2019, and \\u2018blurry image\\u2019 with TCAV scores of 0.66, 0.66, and 0.62, respectively. Of these seed prompts, \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 highlight the tiger pixels, while the \\u2018blurry image\\u2019 seed prompt highlights the background pixels (see sample explanations in **[anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/D5KJ/Response%201/README.md)**). This means that, in order to classify a tiger, GoogLeNet looks at both the foreground and the background. Now suppose the engineers want the classifier to classify the tiger based on tiger pixels, not its background (note: from the classical wolf-husky example in LIME [1], we know about the spurious correlation with the background). \\n\\nTo this end, we generated 100 tiger images based on concepts related to \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 using a separate generative model and fine-tuned our GoogLeNet model. Running RLPO on this fine-tuned model revealed that the model learned some new concepts such as \\u2018whiskers\\u2019, and also that previous concepts such as \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 are now more important, with TCAV scores of 1.0 and 1.0, respectively. This means that the classifier is now only looking at tiger pixels, not the background. 
(see dataset samples and the shift plot in **[anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/D5KJ/Response%201/README.md)**).\\nThis experiment clearly shows how the proposed method can be used to correct a neural network\\u2019s undesirable behavior.\\n\\n1. Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \\\"Why should I trust you? Explaining the predictions of any classifier.\\\" Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.\"}", "{\"comment\": \"Thank you for your time; I now understand the claims better. However, I don't quite yet understand the interpretation of the Table 2 results. What do these results tell us about how the concepts can be utilized by humans? It seems the concepts are not likely to be understood by humans. This seems undesirable, unless the point is about generating concepts that the model indeed uses. Just the interpretation of this evaluation with respect to the global goal is a little unclear.\", \"so_the_gist_of_my_questions\": \"what is the benefit of having \\\"explanations\\\" if they are not understandable by humans? I understand that the combination with ClipSeg provides aid here. It would just be good if the authors could elaborate more on this explainable aspect. In other words, the Evaluation section clarifies whether the identified concepts are novel and reliable, but not whether they \\\"explain\\\" the model. I ask these questions because in the methods section the authors state \\\"To this end, we leverage the state-of-the-art text-to-image generative models to generate high quality explainable concepts.\\\" But for me, clear evaluations on the explainable aspect are missing. I hope this clarifies the issue.\"}", "{\"summary\": \"Some concept-based XAI methods, such as [1,2], require the creation of a concept-specific set of images to pass through the network. 
To eliminate the human-in-the-loop, the authors explore using a combination of reinforcement learning and generative models to create concept-specific image sets. The authors' stated goals include:\\n (1) producing concept sets that are beyond what human practitioners may be able to discover on their own. \\n (2) producing concepts at a variety of abstraction levels that they are able to control with a parameter. \\n (3) producing concept sets that demonstrate an increase in novelty, abstractness, diversity, generalizability (to non-vision domains) and actionability (utility).\\n\\nThe core of the method is to optimize a LoRA-adapted Stable Diffusion model to produce concept sets that trigger the target model (the model to be explained). The RL agent is used to search over a set of text prompts for the diffusion model. \\n\\n[1] Kim, Been, et al. \\\"Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV).\\\" International conference on machine learning. PMLR, 2018.\\n[2] Schut, Lisa, et al. \\\"Bridging the human-AI knowledge gap: Concept discovery and transfer in AlphaZero.\\\" arXiv preprint arXiv:2310.16410 (2023).\\n[3] Hu, Edward J., et al. \\\"LoRA: Low-rank adaptation of large language models.\\\" arXiv preprint arXiv:2106.09685 (2021).\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"(S1) The authors identify an interesting question: how to choose a probe set for extracting concepts from models?\\nThe proposed method considers an interesting direction of using generative methods to create images that may not be present in the dataset. \\n(S2) The formulation of considering TCAV scores as preference scores is interesting. \\n(S3) The authors make an effort to quantitatively assess the concept images generated by their method.\", \"weaknesses\": \"(W1) The authors' stated goals for this work are misaligned with the goals of explainable AI in general. 
The primary goal of XAI is to improve a user's understanding of a model. However, the authors focus on \"abstractness\" and \"novelty\" as the goals of their method. While the authors propose several experiments to measure abstractness and novelty, they fail to provide any convincing experiments on the utility of their method for explaining network behavior. For example, [1] provides a clear, well-motivated experiment, i.e. users are asked to predict model behavior when given an explanation.\n\n(W2) Additionally, I find the term novelty to be misleading. The term novelty can be interpreted in at least two ways, for example, it could be interpreted as how distinct different discovered concepts are from each other. When measuring novelty in this manner, it seems that the author's method may not exhibit novelty, since in Line 288 they state, \"it is possible for different actions to lead to the same explainable state.\" Instead, the authors measure novelty as the distance between concept images and images in the test set. They use this metric to compare their work to prior work, resulting in the fairly trivial result that generated images are farther from the test set than methods that use the test set. \n\n(W3) The ablation study for choosing TCAV is unnecessary. The authors choose to use either a human, LLM, or TCAV scores from the model to make preference judgements. However, since the goal is to explain the model, the first two options are completely unnecessary. \n\n(W4) The authors claim to be able to generate concept sets at different levels of \"abstraction\". However, whether this \"abstraction\" is due to SD or due to the target model is unclear. Since the goal is to understand the target model, once again, I find this experiment to be insufficient to say anything interesting about the target model.
\n\n(W5) Finally, the computational cost of this method is quite high and it is unclear if the results are worth the cost. \n\nIn summary, the authors lack a key experiment measuring the utility of their method. Additionally, I find some of the experiments that the authors conduct to have trivial results. \n\nThere are several places with typos. \nL91 mode -> model\nL534 the term NUT is introduced without being defined?\n\n[1] Colin, Julien, et al. \"What i cannot predict, i do not understand: A human-centered evaluation framework for explainability methods.\" Advances in neural information processing systems 35 (2022): 2832-2845.\", \"questions\": \"The most important question is: can the authors provide any experimental result that concretely shows that their method is able to explain model decisions?\", \"some_secondary_questions_are\": \"1) Why RL at all? Why not just optimize every \"seed prompt\"?\n2) How often do seed prompts converge to the same outputs? \n3) This work brings up the question, under what domain should we care about explaining model behavior? The generated images are OOD for the model. When do we want to explain OOD behavior for the model? Is this useful?\n4) How does your method relate to adversarial attack methods?\n5) How long does it take for your method to run?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> > While we like the utility metric introduced in [2], it does not help measure novelty.\n\n> I suggested this work because it directly measures usefulness, not novelty. The authors should convince the readers that novelty is worthwhile for explainability through experiments. 
This experiment may not be the best fit, for example, your work may be best suited for helping users understand failure modes on OOD images.\\n\\nBefore showing an explanation is useful for an application, firstly we should come up with correct explanations. Although the contribution of the paper is about coming up with explanations, we went a step beyond to show that the explanations are useful for the user to explore and understand a model\\u2019s inner workings. The human survey (Table 2) in fact measures the usefulness. All the concept images we showed the human are concepts, either from retrieval methods or our generative method. The participants tell us how much information they gained about the model after seeing the explanation. This new information gain is the usefulness/utility of the explanation.\\n\\nHere is a simple method to come up with an incorrect explanation that gives a high utility score using [2]: given an image that contains a zebra to test, run an object detector to crop the image and pick the bounding box that has the highest similarity to the test image (i.e., bounding box around the zebra). If we use the metric in [2], the usefulness will be 100% and will beat any other XAI method to-date because the user associates the zebra test image with the zebra concept image. But is the explanation correct and trustworthy? No (unless lucky), because it never looked inside the neural network.\\n\\nUtility should always be measured with respect to the end-goal, which, in our case, is finding novel concepts. Simply adapting [2], which measures the association between the test input and the concept, is not correct in our case because what we want to measure is what the user did not expect (information gain as in Information Theory). This setup is different to retrieval methods, where they show compared to a random crop, cropping from the zebra itself is better. 
To reiterate, our comparison was not to show that retrieval methods cannot explain but to show that we can expand the concept set beyond retrieval methods. \n\n> > Our argument is that there are other patterns/concepts that trigger the neural network (the goal of this work) and identifying them is important to fix the issues.\n\n> It's known that \"strange\" patterns and concepts can trigger neurons, this fact is exploited in adversarial attack research. This work does not show adversarial attacks using the method nor fixes those attacks. This would also be an interesting contribution.\n\nIt is a long-term goal. We believe it is not fair to ask us to run adversarial examples and show how to fix them, as that application is worthy of a completely different contribution/paper. \n\n> In summary, my primary concern remains unaddressed. I am not convinced that the applications you write about are possible with the method provided. As you show clearly in your work, your results are more novel and abstract than prior work. While I have no issues with measuring novelty and abstractness, I don't believe these traits can be used as a proxy metric for usefulness as a tool for explainability.\n\nThe usefulness of our tool is the ability to find new concepts. The human survey exactly measures that and it is not a proxy. We believe the reviewer\u2019s confusion stems from trying to compare with some other literature whose objective might be different. As this confusion can happen to the broad readership of XAI, we will clarify this in the paper.\"}", "{\"summary\": \"This paper focuses on the generation of concept images to explain black-box image classification models. It proposes a reinforcement learning-based preference optimization (RLPO) algorithm to fine-tune a diffusion model for generating images that can maximize the TCAV scores. It also proposes to use DQN to search appropriate actions from the seed prompts. 
Experiments show that the proposed approach can generate complex and abstract concepts aligning with the test class.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Optimizing diffusion models to maximize TCAV scores sounds interesting and original to me.\n2. The authors performed extensive experiments to demonstrate the effectiveness of RLPO.\n3. Most parts of the paper are clearly written and easy to follow.\n4. The visual results presented are helpful in illustrating the advantages introduced by RLPO.\", \"weaknesses\": \"1. While diffusion models make the whole system more flexible than \"retrieval methods\" by generating new images that may not be present in current datasets, the quality of such images is questionable. For example, in Figure 5, the generated image at timestep 30 looks unrealistic and is not closely aligned with the seed prompt (worse than the initial image).\n2. Though Table 1 shows RL helps the model to select appropriate seed prompts from the given set, I'm still concerned about the necessity of using DQN in the framework. VLM/LLM can be used to generate a small high-quality seed prompt set, and even rank them based on their potential importance (e.g., the experiment in this paper has a set with only 20 prompts). One can use the whole set and focus on the fine-tuning of the diffusion model. \n3. The efficiency seems to be a problem.\", \"questions\": \"1. While Table 4 shows the concept images generated by RLPO are more diverse, how do we know if they are faithfully reflecting the same concept instead of overfitting to the TCAV metric?\n2. Why should the problem be designed as a sequential decision-making problem? \n3. Do people need to finetune one RLPO framework with DQN+Diffusion for each class? 
Would it be more efficient and equally effective if they just used an untuned Diffusion model and prompted LLMs/VLMs to generate more detailed text descriptions for the target class given the seed prompt?\n4. What is $t_\\eta$ in Property 2?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## We.\nCompared to, say, PPO, DQN is stable for discrete action spaces and more sample efficient. However, DQN can be replaced with any deep RL algorithm that supports discrete action spaces. While the LoRA weights are updated iteratively, the TCAV scores (or rewards) generated for a given set of concepts remain consistent for the same configuration of weights (we do not change the neural network under test).\"}", "{\"comment\": \"## W5.\nWe can still obtain explanations in real-time because TCAV can run in real-time. However, we agree that RLPO cannot create concept sets in real-time, mainly because of the diffusion fine-tuning step (unfortunately, good concepts, whether designed by a human or by diffusion, come at a cost). However, we do not think there is a need to create concept sets in real time. For instance, if we apply TCAV for identifying a disease from an X-ray, we can create the concept set using a test set before deployment, which will take a few hours, and then run TCAV in real-time. Hence, concept set creation is a one-time investment. In case of a long-term distribution shift in an application, we can keep adding concepts to the dictionary, if RLPO discovers anything new. On a different note, please also note that the traditional method of manually creating a concept set can not only be slow and labor intensive but also can miss the important concepts. In a real-world setting, retrieval methods or human designed concepts can be used as a starting set of concepts and expanded using RLPO to generate what retrieval or humans could not see/think of. 
\n\nThank you for pointing out the typos. We have fixed them.\n\n---\n## Q1.\n**Why RL**\nThe RL action space is a combination of seed prompts. Assuming 30 mins per run, it will take 2^20 * 30 = 182 years (assuming maximum combination to be 20) to do this if we brute-force. Since RL intelligently and dynamically picks which prompt combinations to use (and not use), RLPO takes only ~8 hours. Therefore, unlike a static ranking approach, our RL-based framework is much more pragmatic for handling unbounded generative models. The high epsilon case in Table 1 is somewhat similar (yet better) to brute forcing through the seed prompts. \nTo see the quality of generated images with and without RL (seed prompt \u201cstripes\u201d for the zebra class after 300 steps), please see the images in this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/D5KJ/README.md).** \n\n## Q2.\nThough theoretically plausible (e.g., when the seeds are \u201cstripe\u201d and \u201cstripes\u201d), we\u2019d say the probability is negligible because, 1) as explained in Appendix C2, we remove repeated seeds in the first place and 2) RL will stop taking the same action if it\u2019s not getting rewards. In our experiments, that never happened.\n\n## Q3.\nIt will generate both in-distribution and out-of-distribution concepts. Even with retrieval methods or human-created datasets, we will have both, and we do not see why it is a problem as we are generating concept images, not test images. 
If the question is where would testing OOD behavior is useful:\\n- Safety-Critical Domains: In applications like healthcare or autonomous driving, understanding how the model generalizes to rare or unexpected scenarios (potentially OOD-like) is crucial for trust and reliability.\\n- Bias Detection and Debugging: Generated concepts can reveal biases or spurious correlations learned by the model, allowing for more targeted mitigation strategies.\\n- Generalization and Robustness Analysis: Abstract concepts help analyze how the model reasons under different conditions, providing insights into its robustness.\\n\\n\\n## Q4.\\nOur method seeks to generate explainable concept images that reflect the model\\u2019s internal representations. As an application, if an adversarial method (e.g., perturbing the inputs) can find adversarial test points, we can explain it using TCAV. Our method will help to get better concepts for TCAV that humans might have not thought of but useful to explain those test points. \\n\\n## Q5.\\nTraining the RLPO framework takes approximately 6\\u20138 hours for a particular class, depending on a workstation with a gaming GPU\\u2013most of the time is for SD fine-tuning. As explained under W5 above, please note that we have to run this only once (to pre-compute the concepts), not for every single test image. At test time, the run-time is the same as the TCAV\\u2013a few milliseconds to seconds.\"}", "{\"title\": \"Summary of the rebuttal process to all reviewers and AC\", \"comment\": \"We sincerely thank all the reviewers for the discussion and appreciate AC\\u2019s efforts in overseeing the review process. We appreciate all the detailed and constructive feedback. It has helped us refine and clarify our work. 
We have addressed all the concerns and clarified our contributions as well as potential misunderstandings.\n\nWe explained the distinction between the roles of \"creator humans\" (who design concept sets) and \"user humans\" (who interpret the results). Our method primarily targets creator humans by automating concept creation, a tedious and perhaps even impossible task for humans. We also elaborated on the role of abstraction in our method, emphasizing that abstractions, such as moving from \"zoo\" to \"stripes,\" are a natural byproduct of the approach, offering optional and insightful layers of understanding without being a primary claim. Furthermore, we validated the novelty of the generated concepts through quantitative metrics and qualitative assessments. We also showed the importance of RL in our framework as requested by reviewers AFj3, KnkW, and D5KJ. Reviewers KnkW and TZBx already agree that RL plays an important role in learning over time\u2014what trajectories to optimize and what to drop so that we find explanations in the fastest way.\n\nThe additional experiments conducted during the rebuttal process further solidify our contributions. As requested by reviewers KnkW and 5Zn5, we conducted additional experiments to validate the faithfulness of generated concepts. These experiments showed that the generated concepts are unique, consistent across runs, and represent distinct distributions for different seed prompts. \n\nMost importantly, we show that our method can provide **actionable/useful insights** through our additional experiments where concept shifts were effectively directed through fine-tuning (addressing concerns raised by reviewers KnkW and D5KJ). Finally, a human survey validated that the generated concepts improve the understanding of model behavior, showing the practical utility for debugging and analysis. \n\nWe are grateful for the reviewers' input, which has greatly strengthened our submission. 
With these we believe we have addressed all the concerns raised by the reviewers and we are hopeful that reviewers D5KJ, AFj3, KnkW will reconsider the original review.\"}", "{\"title\": \"Thanks for your feedback.\", \"comment\": \"Thanks for your feedback. I will keep the positive score.\"}", "{\"comment\": \"> > As highlighted in recent discussions [1] and our paper (Figure 2), humans cannot understand everything a neural network has learned because they do not learn using the same concepts that we learn\\u2013their learning manifold is different to ours. If a human were to manually come up with a concept set, they are going to miss important concepts because they do not know about the neural network\\u2019s learning manifold and therefore the engineers cannot fix the vulnerabilities of the network.\\n\\n> I have no problem with the goal of generating concepts that may be different than what humans come up with, this is one of the strengths of the work as I indicated. My concern is that the results in Figure 4 do not immediately strike me as useful for human understanding. If it was clear that they were useful, experiments showing that the explanations help human understanding would not have been as necessary. However, the results are, as you indicate, abstract. Thus, you are making a strong claim that this abstractness is a positive and not a negative for human understanding of the model. Unfortunately, there are no experiments that back this claim up. I still believe this work needs a clear demonstration of usefulness for explainability. Also, it's a reach to claim that this method surfaces vulnerabilities in networks without providing experiments that either exploit or defend against these vulnerabilities.\\n\\nWe believe there is a misunderstanding about terminology. 
We mean abstractions not as in \\u201cabstract arts\\u201d but as abstraction levels as in Computer Science\\u2014a representation of an idea or concept, often removing specific instances or details to focus on the broader principle. In our case, we go from zoomed-out to zoomed-in the neural network\\u2019s activation space (to clarify, not physically zooming): zoo->animals->animals with stripes->stripes, etc. \\n\\nWe do not think we make a strong claim about abstractions as what we state is \\u201cwe observe the progression of output concepts generated by the SD\\u201d and, we visualize it to help understand the progression theoretically (Figure 1 and Appendix B). Also, please note that abstraction levels are a byproduct\\u2014the main contribution is generating novel concepts, for which we have provided plenty of experiments (the main results figure is Figure 6, not Figure 4.). Since it is not a must for the user to use all levels of abstractions (just using the final abstraction level is totally fine for some applications as in most XAI methods), we do not see how it is a negative aspect. For applications such as long-horizon planning, abstractions are important, though not mandatory for every use case. Instead, they serve as an additional layer of insight.\\n\\nWe do not claim \\u201cmethod surfaces vulnerabilities,\\u201d what we claim is that \\\"if there are any vulnerabilities, they can be fixed before deployment.\\u201d The first sentence is a definite claim and the second one is an ability/opportunity. If it was misunderstood, we can further tone it down. The typical workflow of a methods paper with 9 pages is providing the method and experimentally validating that the method is correct ideally with some theoretical insights. We went a step further by illuminating the future potential for model debugging as depicted in Figure 9. 
We believe it is not fair to ask us to run adversarial examples and show how to fix them, as that application is worthy of a completely different contribution/paper.\"}", "{\"comment\": \"Thank you for your detailed and insightful review! We appreciate the reviewer\u2019s recognition of the novelty of our ideas. Please see our answers below.\n\n## W1.\nTo track the evolution of concepts over iterations, it is necessary to retrain the RL algorithm iteratively. This ensures that we can account for changes in the network as it learns new features and potentially forgets old ones during fine-tuning. However, we can maintain a concept set and keep adding newly discovered concepts to the set. Conditioning on a prior set, how to efficiently find a novel set would be a very new and interesting research direction for generative models in general.\n\n## W2.\nThanks for pointing this out! The x-axis represents the concept identifiers, and we will include appropriate annotations in the revised manuscript to make this clear.\n\n## W3.\nIn terms of comparisons, we extensively compared the novelty of concepts generated by our method and retrieval methods using a variety of metrics (Table 4) as well as through human participant evaluation (Table 2). The challenge with retrieval methods is that they leak information about the test set into the explanation, as in its simplest form, they are cropped parts of class images. Human created or generated methods do not have this limitation. However, we find it difficult to design a fair and solid experiment to show this aspect numerically. Not just in our method, even if we compare human created concepts and retrieval methods, they are hard to compare as they offer different perspectives. \n\n## W4.\nWe acknowledge that the quality and diversity of generated outputs depend on the generative model's capabilities. Issues such as mode collapse or insufficient representation of certain patterns could limit the range of explanations. 
However, such limitations are not unique to generative approaches\\u2014they are also inherent to retrieval-based methods, which are similarly constrained by available data and even human collected concepts, which are influenced by cognitive bias. Though mode collapse was an issue in GANs, we find it hard to find solid literature on mode collapse in SOTA diffusion models. However, we agree that this is a good point and we will include a discussion of this limitation in the revised manuscript. Specifically, we will highlight the dependency of the explanations on the generative model's capability to produce high-quality and diverse outputs. Thank you for pointing it out.\\n\\n## W5.\\nAlthough the experiment the reviewer suggested is really interesting, it would be difficult to replicate RLPO with GANs, since we can not generate arbitrary images based on any random text prompt with GANs. As explored by past research [1], it is possible to develop a GAN to generate random images based on textual description, but they have to be trained specifically for a particular dataset. That is in a way one of the major limitations of using GANs, it is hard to use them beyond the task they are trained to do. Instead, to test the reviewer\\u2019s hypothesis, we tested the RLPO experiment with an older version of Stable Diffusion (SD v1.1), which is known to produce biased (arguably, it is a result of mode collapse) and suboptimal images.\\n1. Reed, Scott, et al. \\\"Generative adversarial text to image synthesis.\\\" International conference on machine learning. 
PMLR, 2016.\\n\\nArea under curve obtained after applying c-deletion on top 3 concepts generated by SD-1.1 and SD-1.5 for \\u201czebra\\u201d class:\\n| Concept | Stable Diffusion v1.1 | Stable Diffusion v1.5 |\\n|---------|------------------------|------------------------|\\n| C1 | 4.182 | 1.265 |\\n| C2 | 5.957 | 2.525 |\\n| C3 | 6.180 | 2.905 |\\n\\nWe see that we get the lowest area under the curve for the SD v1.5, indicating that concepts generated by SD v1.5 (the good generator) are more related to the class features learned by the neural network. Therefore, the quality of the generator matters.\"}", "{\"comment\": \"Taking an additional step, through another experiment, we demonstrate how these new explanations help engineers fix issues in neural networks.\\n\\nTo further showcase the usefulness, we conducted an additional experiment. In this experiment, we choose a pretrained Googlenet classifier for the Tiger class whose important seed prompts were \\u2018orange black and white\\u2019, \\u2018orange and black\\u2019, and \\u2018blurry image\\u2019 with TCAV scores of 0.66, 0.66, and 0.62, respectively. Out of these seed prompts, \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 highlight the tiger pixels while \\u2018blurry image\\u2019 seed prompt highlights the background pixels (see sample explanations in [anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/Response%201/README.md)). What that means is, in order to classify a tiger, Googlenet looks at both the foreground and background.\\n\\nNow the engineers want the classifier to classify the tiger based on tiger pixels, not its background (note: from the classical Wolfe-Husky example in LIME [1], we know the spurious correlation of background). To this end, we generated 100 tiger images based on concepts related to \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 using a separate generative model and fine-tuned our Googlenet model. 
Running RLPO on this fine-tuned model revealed that the model learned some new concepts such as \\u2018whiskers\\u2019 and also revealed that previous concepts such as \\u2018orange black and white\\u2019 and \\u2018orange and black\\u2019 are now more important with TCAV scores of 1.0 and 1.0, respectively. This means that the classifier is now only looking at tiger pixels, not the background. (see dataset samples and shift plot in [anonymous github](https://anonymous.4open.science/r/RLPO-E5C6/KnKW/Response%201/README.md)). This experiment clearly shows how the proposed method can be used to improve a neural network\\u2019s undesirable behavior.\\n\\n1. Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. \\\"\\\" Why should i trust you?\\\" Explaining the predictions of any classifier.\\\" Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.\"}", "{\"comment\": \"Thank you for recognizing the innovation and clarity of RLPO, as well as its contribution to improving the efficiency and generalizability of concept-based explanation methods. We also appreciate your valuable feedback on areas requiring clarification, which we will address comprehensively in the revised version.\\n\\n## W1. \\nWe acknowledge that some sections, such as the algorithm explanation and mathematical proofs, may appear overly lengthy. To address this, we plan to streamline these sections in the final version by:\\n- Providing concise summaries alongside detailed explanations to improve accessibility for readers. \\n- Moving some of the more intricate details (e.g., mathematical derivations) to the appendix, ensuring the main text remains reader-friendly while preserving the rigor.\\n\\nWe believe this restructuring will make the algorithm and proofs more digestible without compromising their completeness.\\n\\n## W2. 
\\nGiven the page limit we were not able to include all the details on the sentiment analysis experiment in the main paper. We have added additional explanation for our experiment on sentiment analysis tasks in the Appendix D.4. We will highlight it in the main paper. \\n\\n## W3. \\nWe appreciate your feedback regarding the inconsistent use of terms like \\u201cconcept generation\\u201d and \\u201cconcept extraction.\\u201d We will fix these in the revised paper.\"}", "{\"comment\": \"We thank the reviewer for their prompt response and giving us the opportunity to clarify the queries.\\n\\nPlease refer to this updated **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-E5C6/5Zn5/Responce%201/README.md)** where we have compiled detailed explanation and images for better understanding.\\n\\nWe believe the confusion happened because \\u201chuman\\u201d was used as an overloaded term. In the classical TCAV setting, two groups of humans are involved: those who create concept sets offline (\\u201ccreator humans\\u201d) and those who utilize these concepts online (\\u201cuser humans\\u201d). When we state \\u201chumans cannot,\\u201d we are specifically referring to the creator humans, as our contribution is developing an algorithm for concept set creation/generation rather than the downstream application of an existing method. To clarify, we do not claim that the generated concepts are not understandable to the user humans. **Rather, we argue that the concepts that the model indeed uses can be divided into two groups: concepts that creator humans can think of and those they cannot (i.e., novel concepts). Our method generates explanations from both groups, and both types of concepts are understandable to user humans.**\\n\\nUnderstandable should be corrected as guessed. 
For instance, in the reviewer\\u2019s summary \\u201cconcepts are not likely to be ~understood~ guessed by creator humans.\\u201d Vanilla TCAV requires creator humans to come up with (i.e., guess) a set of concepts to test ahead of time. In other words, it assumes the creator humans know a set of explanations ahead of time as TCAV only picks from the set which concept/explanation is correct. When we tried to apply TCAV in some real-world applications, we found out that it is impossible for creator humans to guess all such concepts/explanations ahead of time. That motivated us to generate concepts.\\n\\n## Interpretation of results in Table 2: \\n**Hypothesis:** Creator humans struggle to guess all important concepts that truly matter to the network. \\n\\n**Experimental setup:** Rather than asking creator humans to come up with large concept sets, we present them with two concepts and ask them which one they would include in the concept set. To this end, as detailed in Appendix D6 (and the following image), we show creator humans a test image along with two explanations\\u2014one obtained from a retrieval-based method and the other generated by our method (without revealing the experimental setup to them). Those creator humans were asked to determine which of the two explanations could have influenced the network's decision that the test image, for example, represents a zebra. They could choose the first explanation, the second, or both. \\n\\n**Metrics:** All generated concepts matter to the network as they have a high TCAV score. Out of them, how many of them are identifiable by creator humans? \\n\\n**Results:** The results in Table 2 shows that creator humans often recognized explanations from retrieval-based methods, as these align with cropped elements of the test images. However, they were less likely to guess generated explanations, even though these are valid concepts that influence the neural network's decision and have high TCAV scores. 
This confirms that our method successfully discovers valid explanations that are not immediately apparent to creator humans.\"}", "{\"comment\": \"Thank you for your response and clarifications.\", \"my_understanding_of_your_claims_are\": \"1) You are presenting a method to expand into the \\\"blue\\\" region, producing more novel concepts. \\n2) Your work aims to test the method in the context of its ability to produce such concepts. Specifically, you test that the concepts are outside of the space that humans can generate themselves. \\n\\nI believe the authors have successfully demonstrated that they discover visual concepts that activate the network that are novel to humans (Table 2).\\n\\nUnfortunately, I believe we disagree on the value of this contribution to XAI (which, to my understanding, is primarily explored in Fig. 9). While most ML papers do not deploy their methods in real-world use cases, most fields have clear metrics. It is challenging to come up with clear metrics in XAI and while I can agree that an extensive evaluation of the usefulness is out of the scope of the paper, I think it is reasonable that some evidence of potential usefulness to XAI (or some other domain) is demonstrated. In my opinion, Fig. 9 does not sufficiently demonstrate this potential usefulness.\\n\\nOn abstractness, I appreciate the clarification and I understand the intention better. In the paper the authors state:\\n> These abstractions hint us about what\\nthe model prefers when it is looking for tiger, starting from a four-legged orange furred animal, to\\nblack and white stripes with orange furred animal, to black and white stripes with orange furred and\\nwhiskers.\\n\\nHowever, this seems to me heavily confounded by both the prompt and the path through stable diffusion's weight space. I am not sure it can be ascribed to the model's preferences. 
I also find similar issues with ClipSeg, which introduces another model's biases as an intermediary to interpreting the target model.\\n\\nOverall, I prefer to maintain my rating. Thank you for the discussion.\\n\\n------- \\nI am not factoring the following points into my decision, but I would like to point them out. \\n\\n> Here is a simple method to come up with an incorrect explanation that gives a high utility score using [2]: given an image that contains a zebra to test, run an object detector to crop the image and pick the bounding box that has the highest similarity to the test image (i.e., bounding box around the zebra). If we use the metric in [2], the usefulness will be 100% and will beat any other XAI method to-date because the user associates the zebra test image with the zebra concept image. But is the explanation correct and trustworthy? No (unless lucky), because it never looked inside the neural network.\\n\\nThis is an incorrect representation of the experiment proposed in [2]. The classes selected by the authors of [2] are specific and designed to eliminate trivial explanations.\\n\\n> our goal is to provide insights to expert machine learning engineers or data scientists to identify what the neural network has learned (so that if there are any vulnerabilities they can fix before deployment). \\n> We do not claim \\u201cmethod surfaces vulnerabilities,\\u201d what we claimed was method \\u201cso that if there are any vulnerabilities they can fix before deployment.\\u201d The first sentence is a definite claim and the second one is an ability/opportunity. \\n\\nI find that the writing style in the paper and the responses leans towards strong claims which have been followed up with re-interpretations in the discussion. For example, the first sentence (to me) strongly implies that your method would help engineers surface vulnerabilities; I do not believe this is a surprising or incorrect interpretation.\"}" ] }
9ehJCZz4aM
AutoCGP: Closed-Loop Concept-Guided Policies from Unlabeled Demonstrations
[ "Pei Zhou", "Ruizhe Liu", "Qian Luo", "Fan Wang", "Yibing Song", "Yanchao Yang" ]
Training embodied agents to perform complex robotic tasks presents significant challenges due to the entangled factors of task compositionality, environmental diversity, and dynamic changes. In this work, we introduce a novel imitation learning framework to train closed-loop concept-guided policies that enhance long-horizon task performance by leveraging discovered manipulation concepts. Unlike methods that rely on predefined skills and human-annotated labels, our approach allows agents to autonomously abstract manipulation concepts from their proprioceptive states, thereby alleviating misalignment due to ambiguities in human semantics and environmental complexity. Our framework comprises two primary components: an *Automatic Concept Discovery* module that identifies meaningful and consistent manipulation concepts, and a *Concept-Guided Policy Learning* module that effectively utilizes these manipulation concepts for adaptive task execution, including a *Concept Selection Transformer* for concept-based guidance and a *Concept-Guided Policy* for action prediction with the selected concepts. Experiments demonstrate that our approach significantly outperforms baseline methods across a range of tasks and environments, while showcasing emergent consistency in motion patterns associated with the discovered manipulation concepts. Codes are available at: https://github.com/PeiZhou26/AutoCGP.
[ "Self-Supervised Manipulation Concept Discovery", "Concept-Guided Policy for Robotic Tasks" ]
Accept (Spotlight)
https://openreview.net/pdf?id=9ehJCZz4aM
https://openreview.net/forum?id=9ehJCZz4aM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yOJyuNk7A0", "tQNTb9NE3s", "tNbKaNu5hQ", "tN8EskCQRU", "t7Bj6utPzT", "t4Iq7T4gBQ", "sAwfMu5NGX", "rkCpHiT6nZ", "plBg9FB3Fe", "mGYK5YvmfU", "kgvG2bqcbD", "jLvXXdgRCG", "gsJEhlqaGm", "ad2fOWvD4B", "YDiRI5PwNu", "X1UGBB37MW", "V7yJm0bUA8", "RQZZFMx9Bb", "Pcfi9WnnBQ", "LT5i08OR8M", "CzYtzmeCMf", "9WECGyeCRi", "7e7WPXdcWd", "5ANerQJsq2", "2bK4wIPw07", "2Rcsd7FRhq" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1734660901283, 1732802203579, 1732212189793, 1729156611736, 1732213501781, 1732212638213, 1732279329767, 1732213089288, 1732261452915, 1732213963144, 1732212831677, 1732212990751, 1732213608920, 1732216290802, 1732709119045, 1737523496940, 1732213758226, 1730716677856, 1732213290825, 1732372261892, 1730825367996, 1732211835093, 1732214055814, 1732212383758, 1732702937751, 1729954856382 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2315/Area_Chair_jjUi" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_Umdm" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_Wk2p" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], 
[ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_UouH" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_Umdm" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_UouH" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_LkiS" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Authors" ], [ "ICLR.cc/2025/Conference/Submission2315/Reviewer_Wk2p" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a novel method for autonomously abstracting manipulation concepts from proprioceptive states. It introduces two main components: Automatic Concept Discovery (ACD), which identifies meaningful and consistent manipulation concepts by abstracting from low-level proprioceptive states, Concept-Aware Policy Learning (CAPL), which utilizes these manipulation concepts to guide task execution adaptively. The experiments, conducted in simulation, demonstrate significant improvements over state-of-the-art (SOTA) baselines.\", \"strengths\": [\"Innovative Methodology with Potential Impact: The proposed method is highly innovative, enabling the discovery of manipulation concepts without relying on human-annotated labels or predefined skills. 
This is a significant step forward in robotics research.\", \"Clarity and Technical Depth: The paper is well-written and well-structured, making the methodology and contributions easy to understand.\", \"Strong Experimental Validation: The comprehensive comparison with SOTA baselines, while limited to simulation, clearly demonstrates the effectiveness of the proposed approach.\"], \"weaknesses\": \"- The experiments focus primarily on simulated tabletop manipulation tasks, with limited evaluation on more complex morphologies, diverse tasks, or real-world datasets. Expanding the experiments to include real-world settings and incorporating multimodal data would strengthen the paper\\u2019s claims. \\n\\nDespite these concerns, I believe the strengths and potential impact of the paper outweigh its weaknesses, making it a valuable contribution to the field. One minor point for improvement would be to discuss and compare the approach with \\\"Discovering Robotic Interaction Modes with Discrete Representation Learning,\\\" published in CoRL just before the ICLR deadline. This work also seeks to learn manipulation concepts autonomously.\\n\\nOverall, I wish the authors the best of luck and look forward to the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers asked several clarificatory questions that were adequately addressed by the authors and acknowledged by the reviewers. A few concerns that remain are:\\n\\nReviewers pointed out missing literature and comparisons, specifically works on learning abstract actions with clustering (BeT) and VQ-VAE. The authors acknowledged the missing literature and committed to including a comparison with BeT and VQ-VAE-based methods in the revised manuscript.\\n\\n
Reviewer Umdm suggested that while the method performs well in simulated environments, real-world validation is necessary to demonstrate practical applicability.\\n\\nDespite the concerns, the reviewers were positive about the work and acknowledged its merits. I agree with them and hence recommend acceptance of the manuscript.\"}", "{\"title\": \"Thanks again for your valuable feedback\", \"comment\": \"Dear Reviewer Umdm,\\n\\nWe are glad that the raised concerns and questions have been addressed in the rebuttal. We will incorporate these answers in the final paper. And thanks again for helping us improve the quality of our work.\\n\\nThanks,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer LkiS:\\n\\nThank you for recognizing the strengths of our paper, particularly the method we developed for hierarchical imitation learning, which autonomously discovers motion concepts and conditions motion policies on these concepts without human annotation. We appreciate your positive feedback on the reasonableness and effectiveness of the three techniques we employed for discovering these concepts from trajectories. We address your comments and questions in the following.\\n***\\n>**W.1** Some literature potentially relating to the method or problem setting of this paper is missing. For example, learning abstract action with clustering (BeT[1]) and with VQ-VAE[2]; please contrast this work with others listed above.\\n\\n**A. to W.1**\\nThank you very much for your constructive suggestions regarding the inclusion of related works [1] and [2]. Following your recommendation, we have included these references in **Sec. 
2 Related Work**, and have clarified the distinctions between these works and our own.\\n\\nTo summarize, while [1] and [2] predominantly focus on discovering a discretized encoding of the instantaneous **actions** in a continuous space, our work, in contrast, focuses on learning discretized **motion codes** that can effectively represent **motor skills** \\u2014 defined as manipulation concepts in our study \\u2014 over a **short horizon**. We appreciate your suggestion to refine our literature review and enhance the contextual positioning of our research.\\n\\nWe will also include the results of BeT in **Tab.1** as baseline methods. Some results have already been obtained, and you can refer to our response to **Reviewer UouH's Q2 (A to Q2)** for further details.\\n***\\n>**W.2** The proposal is tested with the Robosuite (MimicGen) benchmark, whose task is inherently highly compositional, so the concept discovery seems to be relatively easy. It is interesting to see the result with an \\u201cin-the-wild\\u201d dataset, such as dataset collected with a \\u2018learning from play\\u2019 manner [3]. \\n\\n**A to W.2**\\nThank you for your insightful question related to the potential performance on the learning-from-play dataset. However, we would like to clarify the experimental setup related to the Automatic Concept Discovery (Sec. 3.1) **to reveal the essential similarity** between the learning-from-play and our settings. Our process indeed resembles a learning-from-play scenario in that it **does not utilize task descriptions or any explicit information about task objectives** (Please refer to the Implementation details in Sec. 4.1), such as natural language descriptions of tasks. This approach essentially mirrors the collection of an unannotated dataset comprising several demonstrations. In this context, our Automatic Concept Discovery process does not presuppose any knowledge of the specific goals being pursued in any given demonstration sequence. 
As such, it is highly comparable to the learning-from-play setting, where the sequences or demonstrations observed by a robot do not target a specific aim (resembling the data used in [3]). This similarity underscores the relevance and applicability of our methodology to scenarios akin to learning from play.\\n\\nWe have also included this discussion in **Potential of learning-from-play in Sec.D** of the appendix.\\n***\\n>**Q.1** Why is only proprioception used for the automatic concept? It seems natural to include observations in addition to proprioception to assign the abstract concept to the time step. How can we generalize it to observation? There may be a case where similar proprioception (e.g., robot pose) but a different observation (e.g., an obstacle in the environment).\\n\\n**A to Q.1**\\nThank you for your thoughtful question regarding the choice between using proprioceptive state data or whole environment observations in our research.\\n\\nThrough our studies, we observe that **the proprioceptive states of robots are intrinsically linked to manipulation concepts akin to motor skills** such as grasping, throwing, pushing, and pulling (please refer to our explanation in Sec. A.1, as also echoed by **Reviewer UouH in the discussion of the strengths** of our method). Our rationale for choosing proprioceptive state data is twofold:\"}", "{\"summary\": \"The paper presents a novel imitation learning framework for training closed-loop concept-guided policies for robotic tasks, focusing on long-horizon tasks. The framework leverages self-supervised learning to autonomously discover manipulation concepts directly from robotic demonstrations, without the need for predefined skills or human-labeled data. 
The two main components of the system are:\\n- Automatic Concept Discovery, which identifies manipulation concepts by analyzing proprioceptive states and assigning discrete embeddings to different task segments.\\n- Concept-Guided Policy Learning, which utilizes these discovered concepts for policy learning. A Concept Selection Transformer (CST) selects relevant manipulation concepts in real time, while a Concept-Guided Policy (CGP) generates actions based on the selected concepts, using a diffusion policy to handle high-dimensional actions.\\n\\nThe framework is designed to dynamically adjust policies in response to environmental changes, ensuring robust performance across a variety of tasks. Experimental results show that the proposed method outperforms existing baselines in multiple robotic manipulation tasks, demonstrating improved generalization and performance in dynamic environments.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"- The approach is highly original in combining concept discovery with closed-loop policy learning. Instead of relying on predefined, human-annotated labels or skills, the system autonomously discovers meaningful manipulation concepts directly from demonstrations. This removes the need for manual intervention and avoids misalignment between human semantics and robot operations.\\nThe use of a self-supervised concept discovery mechanism is innovative, especially in its application to long-horizon robotic tasks. The introduction of a Concept Selection Transformer (CST) for dynamic task execution is novel and enables adaptive behavior based on feedback.\\n- The methodology is well-designed and robust, incorporating a clear pipeline for concept discovery and policy learning. 
The experimental setup is thorough, with comparisons to multiple baselines and a wide range of tasks, which demonstrates the effectiveness of the proposed framework.\\nThe paper also includes an ablation study to assess the contributions of different components within the framework, which adds depth to the analysis and validation of the proposed method.\\n- The paper is clearly written, with detailed explanations of the method, supported by visualizations of the discovered concepts. The distinction between the two main components (Automatic Concept Discovery and Concept-Guided Policy Learning) is clearly articulated, and the figures help in understanding the flow of the framework.\\nThe technical details of the system, including the loss functions and architecture choices, are well-explained.\", \"weaknesses\": [\"Concept Interpretability: While the system autonomously discovers manipulation concepts, it would be interesting to try to explore how interpretable or human-understandable these concepts are. It would also be interesting to see if all the discovered concepts are utilized in downstream policy learning and, if not, what the unused ones look like.\", \"Evaluation Scope: Although the paper evaluates the framework on a variety of tasks, the focus is primarily on tabletop manipulation in simulated environments. Expanding the evaluation to more complex morphologies or real-world scenarios would strengthen the claim that the approach generalizes well.\", \"Lack of Real-World Validation: While the framework performs well in simulation, it has not been validated on real-world robotic platforms. The paper acknowledges this limitation but does not provide a clear roadmap for transitioning the system to physical robots, which is crucial for demonstrating its practical applicability.\"], \"questions\": [\"There is some line of work using mutual information maximization to discover concepts/skills from demonstration datasets [1, 2] or self-supervised methods [3]. 
It may help to include some insights into those works.\", \"While the annotations used in the method section are very stringent and scientifically sound, it can be confusing for readers with less background knowledge or patience to fully understand the message the paper tries to deliver.\", \"In the automatic concept assignment stage, is the encoder trained separately from other components, or are they trained simultaneously? If the former, how did you prevent the encoder from collapsing to map to the same codebook index?\", \"Are all the discovered concepts meaningful and physically possible? Can I condition on an arbitrary concept index and the generated trajectories are good to use, or is the Concept Selection Transformer still necessary to act as a filter to rule out discovered but useless concepts?\", \"How to deal with diversity? In skill discovery works, one common issue is a diverse set of skills/concepts can be discovered, but this diversity is canceled out by a downstream skill/concept selector due to its collapse on one or two most useful skills/concepts. How can you retain all the concepts discovered and demonstrate the diversity there?\", \"[1] Li, C., Blaes, S., Kolev, P., Vlastelica, M., Frey, J. and Martius, G., 2023, May. Versatile skill control via self-supervised adversarial imitation of unlabeled mixed motions. In 2023 IEEE international conference on robotics and automation (ICRA) (pp. 2944-2950). IEEE.\", \"[2] Peng, X.B., Guo, Y., Halper, L., Levine, S. and Fidler, S., 2022. Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Transactions On Graphics (TOG), 41(4), pp.1-17.\", \"[3] Li, C., Stanger-Jones, E., Heim, S. and Kim, S., 2024. FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning. 
arXiv preprint arXiv:2402.13820.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"In addition to leveraging proven techniques, we carefully considered the rationale behind continuing with certain designs. For example, the use of a Hypernetwork was primarily driven by the need to efficiently form the value function $\\\\mathcal{V}$ as specified in Eq.5. Using a separate neural network for each of the $K$ manipulation concepts would be parameter-intensive. Therefore, we opted for a more parameter-efficient approach that conditions on the manipulation concepts. This decision also maintains a balance between data adaptability and parameter efficiency, as discussed in the referenced paper on the benefits of HyperNetworks (available at [https://arxiv.org/abs/2306.06955](https://arxiv.org/abs/2306.06955)).\\n\\nFurthermore, the choice of the Diffusion Policy was informed by its robust capability to handle highly variable demonstrations provided by the MimicGen Demos, such as those tasks with larger subscript of $D$ (e.g. Coffee $D_2$). The strong distribution capture capability of diffusion models makes them an ideal choice for such scenarios.\\n\\nRegarding our use of VQ-VAE, it naturally provides discrete symbolic labels based on index, which facilitates the process of identifying the completion of manipulation concept processes. By labeling each timestep with a manipulation concept from the VQ-VAE codebook, we can naturally delineate continuous timesteps under the same label as part of a subprocess and identify the transition between subprocesses when the VQ-VAE label changes. For a detailed explanation, please refer to Goal State Detection in Sec. 
3.1.\\n***\\n>W.3 Style notes\\n>- perhaps this is a notational convention I am unaware of, but the parentheses in epsilon^(n) seem like unnecessary clutter.\\n>- Should eq 11 be negative to account for the fact that the log term will always be negative?\\n>- In eq 10, is the selected action from pi_D conditioned on the k selected by p_CST? If so, this could be made clearer, perhaps by splitting into two equations.\\n\\n**A to W.3:**\\nThank you for bringing these style-related issues to our attention. Regarding the notation $\\\\epsilon^{(n)}$, we have retained this format since it explicitly denotes different noise levels at different timesteps n, which we believe is important for clarity. We have carefully addressed the remaining points and revised the manuscript accordingly to ensure greater clarity.\\n***\\n>**Q.1** At some point, the number of different networks + their parameters runs a risk of over-parameterization and overfitting. This is especially true since it seems that the set of demonstrations were collected over the same set of tasks as used in policy generation and success measurement. Combined with the lack of qualitative cross-task concept comparison or similarity analysis, I do have overfitting worries here. I have no evidence to directly support this flaw other than my own experience training large robotics models, but would still like to know the authors' thoughts on how overfitting is avoided in the proposed approach.\\n\\n**A to Q.1**\\nThank you for raising the important issue of handling overfitting in our study.\", \"we_would_like_to_address_your_concern_by_discussing_how_overfitting_is_avoided_in_our_proposed_approach_through_two_key_aspects\": \"Automatic Concept Discovery and Policy Learning.\\n\\n**1. Why Automatic Concept Discovery Mitigates Overfitting**\\n\\nWe outline below the reasons and evidence demonstrating that the Automatic Concept Discovery process (Sec. 3.1) effectively mitigates overfitting.\\n\\n**1.1. 
Blending of Tasks in Automatic Concept Discovery**\\n\\nOur Automatic Concept Discovery process leverages demonstrations from a variety of tasks without incorporating task-specific information such as task names or descriptions (as detailed in Implementation Details, Sec. 4.1). This ensures that the discovered manipulation concepts are not overly influenced by the specifics of any single task and can be applied across a variety of settings. By blending data from multiple tasks and excluding task-specific information, we aim to prevent the model from overfitting to narrow, task-specific manipulation concepts.\\n\\n**1.2. Focus on Proprioceptive States in Concept Discovery.**\\n\\nAs discussed in our response to **Reviewer LkiS\\u2019s Q.1** (A to Q.1) and elaborated in Sec. A.1, we focus exclusively on proprioceptive states during concept discovery. This design choice allows the model to concentrate on features derived from the robot's motion patterns, effectively mitigating overfitting to environmental nuances or background contexts that might arise from full environmental images. Our visualizations (Fig. 5 and Sec. C.3) demonstrate that the learned manipulation concepts consistently capture key motion patterns across tasks, rather than fitting on some unrelated environmental feature differences.\"}", "{\"comment\": \"The results in the table below highlight the robustness of our approach, both in the number of discovered manipulation concepts and in the corresponding policy performance, underscoring its stability. Here, in order to evaluate the number of discovered manipulation concepts, we conducted 10 trials for each VQ-VAE codebook size setting and reported the average number of Manipulation concepts identified.\\n| | Num. of discovered manipulation concepts | Succ. rate of coffee_d2 | Succ. 
rate of mug_cleanup_d1 |\\n|:-:|:-:|:-:|:-:|\\n| 30 codebook items | 22.9 | 0.72 | 0.50 | \\n| 40 codebook items | 22.1 | 0.70 | 0.48 |\\n***\\nTo recap, we attempted to address the weaknesses and questions raised in the review. We added references to related works [1] and [2] and clarified the distinctions between these works and our own. We also explained how our experimental setup is comparable to a learning-from-play scenario, supporting the relevance of our methodology. We provided detailed reasoning for using proprioceptive state data for automatic concept discovery and presented preliminary results from additional experiments incorporating visual information. Finally, we explained our approach to selecting the number of codebook items in VQ-VAE and demonstrated the robustness of our proposal through additional experiments.\\n\\nWe hope that the information and clarifications provided in this rebuttal address your concerns and help you in re-evaluating our work. Please feel free to let us know if you have further questions or comments.\\n\\nThanks,\\n\\nThe Authors\\n\\n[1] Behavior Transformers: Cloning k modes with one stone https://arxiv.org/abs/2206.11251\\n[2] Behavior Generation with Latent Actions https://arxiv.org/abs/2403.03181\\n[3] Learning Latent Plans from Play https://arxiv.org/abs/1903.01973\"}", "{\"title\": \"thank you for the detailed responses\", \"comment\": \"Thank you very much for your responses. I don't feel comfortable raising my rating even higher as I still think that the overall approach is unnecessarily complex, particularly in the number of different learning techniques used. But the responses and edits definitely help strengthen the paper's position and address my comments.\"}", "{\"comment\": \">**Q.1** The concerns in the \\\"Weakness\\\" part.\\n\\n**A to Q.1**\\nThank you for the question. 
Please refer to our replies in the Weakness part.\\n***\\n>**Q.2** A minor point is to add an IL SOTA baseline specific to the problem, which is not limited to a Diffusion Policy basis.\\n\\n**A to Q.2**\\nThank you for suggesting the addition of an extra SOTA baseline. We have chosen to include BeT (https://proceedings.neurips.cc/paper_files/paper/2022/hash/90d17e882adbdda42349db6f50123817-Abstract-Conference.html) as the baseline. We are currently testing BeT's performance, and the table below presents some preliminary results. Comprehensive results for all tasks will be included in the next revised version of the manuscript.\\n| Tasks | Cof. d0| Cof. d1 | Cof. d2 | Mug. d1 |\\n|:-:|:-:|:-:|:-:|:-:|\\n| BeT | 0.66 | 0.52 | 0.42 | 0.26 |\\n| Ours | 0.98| 0.84 | 0.72 | 0.50 |\\n***\\nTo recap, we attempted to address the weaknesses and questions raised in the review. We clarified the consistency and task-agnostic nature of the discovered concepts and explained our experimental setup to mitigate concerns about overfitting. We acknowledged the challenges in data collection and labeling, emphasizing our future plans to optimize these processes. We also discussed the generalizability and scalability of our approach, providing experimental evidence to show the stability of the number of discovered concepts. Lastly, we included an additional SOTA baseline for comparison.\\n\\nWe hope that the response, clarifications, and discussions provided in this rebuttal alleviate your concerns and help you in re-evaluating our work. Please feel free to let us know if you have further thoughts or questions.\\n\\nThanks,\\n\\nThe Authors\"}", "{\"comment\": \"Dear Reviewer UouH,\\n\\nThank you for your thoughtful feedback and for recognizing our efforts in the rebuttal. We appreciate your suggestion regarding the subjectivity of human labeling and agree it is a valuable avenue for further exploration. 
In our future work, we will conduct additional experiments to investigate the potential impact of labeling subjectivity and its implications on downstream tasks. Your insights have been invaluable in shaping these directions.\\n\\nThank you once again for your constructive comments and for raising your rating\\u2014we truly appreciate your support!\"}", "{\"comment\": \">**W.3** Lack of Real-World Validation: While the framework performs well in simulation, it has not been validated on real-world robotic platforms. The paper acknowledges this limitation but does not provide a clear roadmap for transitioning the system to physical robots, which is crucial for demonstrating its practical applicability.\\n\\n**A to W.3**\\nThank you for your interest in the roadmap for real-world implementation of our methodologies. We believe that the Automatic Concept Discovery process (Sec. 3.1) holds the potential to identify manipulation concepts from a diverse array of real-world robotic data. Furthermore, the policy formation based on these manipulation concepts can be effectively executed in a manner similar to the imitation learning approach in Sec. 3.2 for real-world robotic data, utilizing the discovered concepts.\\n\\nTo showcase the potential of our method, we applied the Automatic Concept Discovery process to BridgeDataV2 (https://rail-berkeley.github.io/bridgedata/) and visualized some discovered manipulation concepts in **Fig. 15** of the revised manuscript. Our findings reveal that the consistency of proprioceptive states across different manipulation scenarios is preserved, underscoring the robustness of our concept discovery methodology.\\n\\nLooking ahead, we anticipate that leveraging a larger and more diverse robotics dataset will further enhance the discovery of varied and effective manipulation concepts. This advancement would, in turn, strengthen the guidance provided by these concepts for the closed-loop, concept-guided policy discussed in Sec. 
3.2.\\n***\\n>**Q.1** There is some line of work using mutual information maximization to discover concepts/skills from demonstration datasets [1, 2] or self-supervised methods [3]. It may help to include some insights into those works.\\n\\n**A to Q.1**\\nWe sincerely appreciate the reviewer pointing out these valuable related works on mutual information maximization for concept/skill discovery. We have incorporated these references and their key insights into our updated Sec. 2 Related Work, which helps provide a more complete context for our work.\\n***\\n>**Q.2** While the annotations used in the method section are very stringent and scientifically sound, it can be confusing for readers with less background knowledge or patience to fully understand the message the paper tries to deliver.\\n\\n**A to Q.2**\\nThanks for this helpful feedback. We will revise **Sec.3** to make our method description more accessible and reader-friendly in our final version.\\n***\\n>**Q.3** In the automatic concept assignment stage, is the encoder trained separately from other components, or are they trained simultaneously? If the former, how did you prevent the encoder from collapsing to map to the same codebook index?\\n\\n**A to Q.3**\\nThank you for your question regarding the training of the encoder in Eq.1.\\n\\nFor a detailed understanding of the training dynamics within a single iteration of the Automatic Concept Discovery process (Sec. 3.1), please refer to the pseudocode provided in Sec. A.4. The formation of concepts that fulfill the features described in Sec. 3.1 (including goal state detection, goal state evaluation, and goal consolidation) requires that the encoder loss ($L^{\\\\text{vq}}$) be trained in conjunction with the predictive decoder losses ($L_{\\\\text{gd}}$ $L_{\\\\text{ge}}^a$ $L_{\\\\text{ge}}^c$ $L_{\\\\text{gc}}$) (Also see Eq. 
9).\\n\\nRegarding the common issue of collapse in VQ-VAE models, we have observed that the inclusion of $L_{\\\\text{gc}}$ (goal consolidation loss) assists in preventing this phenomenon. To illustrate this, we have provided statistics comparing the number of discovered manipulation concepts with and without the use of $L_{\\\\text{gc}}$ below (we run the experiments 10 times and provide the average number here).\\n\\n| | Ours | w/o GC |\\n| :-: | :-: | :-: |\\n| Num. of Manipulation Concepts | 23.7 | 6.3 |\\n\\nThese figures clearly demonstrate the effectiveness of $L_{\\\\text{gc}}$ in maintaining the diversity of the concepts discovered.\\n***\\n>**Q.4** Are all the discovered concepts meaningful and physically possible? Can I condition on an arbitrary concept index and the generated trajectories are good to use, or is the Concept Selection Transformer still necessary to act as a filter to rule out discovered but useless concepts?\\n\\n**A to Q.4**\\nThank you for your question related to the manipulation concepts\\u2019 meaningfulness and validity, as well as the role of Concept Selection Transformer (CST in Sec. 3.2).\\n\\nNote that all our concepts are discovered in the multitask dataset, and each concept represents a repeated motion pattern, in certain tasks. Thus, the manipulation concepts are all meaningful, but each concept will only be understandable and used in its corresponding tasks and motions.\"}", "{\"comment\": \"Dear Reviewer UouH,\\n\\nThank you for recognizing the strengths of our proposed framework in addressing the challenges of imitation learning (IL), e.g., human effort needed for annotation and inherent subjectivity in manual data annotation. We appreciate your acknowledgment of how our method autonomously extracts and effectively utilizes manipulation concepts from the robot\\u2019s proprioceptive states, reducing manual annotation efforts and adapting dynamically to environmental interactions. 
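To make the collapse discussion above concrete, here is a minimal plain-Python sketch, not the paper's implementation: the function names `quantize` and `num_active_concepts` and the toy states are invented for this illustration. It shows the nearest-neighbor assignment step of vector quantization and how collapse would surface as a shrinking count of active codebook entries, the quantity compared in the table above.

```python
def quantize(states, codebook):
    """Assign each state to its nearest codebook entry (squared L2
    distance), as in the vector-quantization step of a VQ-VAE."""
    assignments = []
    for s in states:
        dists = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in codebook]
        assignments.append(dists.index(min(dists)))
    return assignments

def num_active_concepts(assignments):
    """Count distinct codebook indices in use; a collapsed encoder
    maps almost everything to one or two codes."""
    return len(set(assignments))

# Toy proprioceptive states clustered around two poses.
states = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
codebook = [(0.0, 0.0), (5.0, 5.0), (10.0, 10.0)]

codes = quantize(states, codebook)
print(codes)                       # [0, 0, 1, 1]
print(num_active_concepts(codes))  # 2
```

In this toy setting only two of the three codebook entries are ever selected; an auxiliary loss that encourages code diversity, analogous to the goal consolidation loss discussed above, would keep this count from degenerating toward one.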
Furthermore, thanks for confirming the solidity of our experiments and the significance of the improvements. In the following, we address your concerns and questions.\\n***\\n>**W.1** Unclear presentation. The figures are a little bit messy (e.g. lack the connection of each part in Fig 3, the Closed-Loop Policy part in Fig 2 is unclear ), but it's a minor point. What matters are: 1) if the discovered concepts are consistent in all demos, i.e. if the concepts are in a unified, task-agnostic space for all demos; 2) For evaluation, it says 950 demos for each task, so does it mean: 2.1) the training data is 950 * |# of task| for single unified policy (set of concepts) and get tested all 6 type of tasks at once( in a mixed way). Or: 2.2) for each task, it uses 950 demos to train and test only for the task. For 1), I suppose the concepts are consistent in all demos according to the formula (1), but it's better to be clear at the beginning. For 2), I'm not sure if it's 2.1) or not, if it's 2.2), the performance will be questionable as it's very close to overfitting.\\n\\n**A to W.1**\\nThank you for your insightful questions regarding the features of the manipulation concepts discovered and the experimental setup used in our study. We greatly appreciate your interest and are eager to address concerns 1) and 2) in a sequential manner.\\n\\n**Regarding concern 1)**, our Automatic Concept Discovery process (Sec. 3.1) is implemented on demonstrations across various tasks without the use of specific task-related information such as task names and descriptions (Please refer to the Implementation details in Sec. 4.1 for more specifics). This strategy enables the discovery of manipulation concepts that are applicable across sub-processes of various tasks, irrespective of each task's individual objectives. We have observed that this approach yields manipulation concepts that remain consistent across different tasks.
These findings are supported by the visualization results presented in Sec. 4.3, and further elaborated upon in the visualization results in Sec. C.3 of our paper.\\n\\n**Regarding concern 2)**, The MimicGen benchmark is designed for single-task scenarios. Consequently, our Closed-Loop Concept-Guided Policy Learning process (Sec. 3.2) trains and evaluates policies within single-task settings. It is worth noting that despite its single-task focus, the MimicGen environment provides a high degree of variability in its demonstrations, which contributes to mitigating overfitting during imitation learning. For example, in the coffee-making task, MimicGen offers three distinct levels of variation\\u2014$D_0$, $D_1$, and $D_2$\\u2014where $D_2$ represents the highest variation in environmental conditions. These conditions include variations in object placement, robot/object motion, and noise of demonstration provided for imitation learning. Specifically: In Coffee $D_2$, demonstrations provided by MimicGen include challenging scenarios such as accidental knocking on the coffee maker (Please check the extreme example of coffee-making at the following path of our revised supplementary material: **Supplementary Material/rebuttal_visualizations/extreme_cases/coffee_d2-020.gif**. An additional example can be seen in Stack Three $D_2$, where demonstrations include scenarios in which two stacked cubes are inadvertently knocked over. The example can be found in supplementary material: **Supplementary Material/rebuttal_visualizations/extreme_cases/stack_three_d1-202.gif**). Such variability helps prevent overfitting by exposing the learning process to diverse scenarios, even within single-task environments. This is reflected in the results presented in Tab. 1 and Tab. 2. Notably, tasks with higher initial variability (e.g., $D_2$ conditions) show success rates that leave substantial room for improvement.\"}", "{\"comment\": \">**W.2** Less practical problem setting. 
There was a trend to label the raw robot demonstration automatically, but a more practical problem is the raw demo collection is much more expensive than demo annotation, and therefore, some researchers turn to human-play data to get more data to use. And it's the same story for this paper, the expensive robot demo collection reduces the significance of this work.\\n\\n**A to W.2**\\nThank you for highlighting the challenges associated with collecting and labeling demonstrations.\\n\\nWe acknowledge that the labeling process is generally less resource-intensive compared to the substantial effort required for data collection itself. Nonetheless, we fully recognize the importance of automating the labeling of raw demonstrations and minimizing labeling costs whenever possible, as this represents a critical step in enhancing the efficiency and scalability of the entire process.\\n\\nFurthermore, we believe that the challenges associated with data collection and labeling differ significantly in scope and nature, with each presenting unique complexities and requirements. Moving forward, we plan to delve deeper into both aspects, exploring ways to optimize data collection and labeling processes. Additionally, we aim to investigate potential bridges between these two stages to uncover synergies that could further streamline and enhance the overall workflow. For example, we can leverage the discovered manipulation concepts to improve generalization and reduce the number of demonstrations needed for new tasks; also, we can identify challenging sub-tasks (concepts) that are critical to the success of overall task completion and prioritize data collection for those sub-tasks, and hence be more efficient.\\n***\\n>**W.3** Concern about generalizability and scalability. Though the paper claims better performance (compared with baselines) in a more diversified initial environment, here the diversity is only expressed by object positions and placement angles.
Some concerns are: 1)What if there is concept inconsistency in the demos, e.g. to do the same manipulation, in a different way; 2) With the growth of the demo amount, will the # number of concepts converge or grow fast;\\n\\n**A to W.3**\\nThank you for your thoughtful concerns about generalization and scalability.\\n\\n**Regarding concern 1)**, We want to clarify that the MimicGen environment we use provides demonstrations for imitation learning that exhibit the inconsistencies you are concerned about\\u2014namely, different approaches to achieving the same manipulation for the same task. Through our Automatic Concept Discovery process (Sec. 3.1), we effectively discover manipulation concepts capable of distinguishing sub-processes that perform manipulations in different ways.\\n\\nTo illustrate this, consider an example related to threading tasks. Below are two sub-processes derived from different demonstrations of the threading task used in the Automatic Concept Discovery process. In human terms, both sub-processes fall under the category of \\\"after grasping the needle, transitioning it to align with the hole.\\\" However, due to variations in the needle's initial position, the first sub-process requires rotating the needle 180 degrees to face the hole, while the second involves only minimal rotation. In these distinct scenarios, our method differentiates between the two processes by assigning two distinct sets of manipulation concepts: one for the 180-degree rotation and another for the minimal rotation case. Please also refer to **Fig.8 in Sec.C.3** of our revised manuscript.\\n\\n**Traj. Example 1** Demonstrates the process of transitioning the needle to align with the hole, using the combination of #17, #1, and #26. This process requires rotating the needle 180 degrees. Path in Supplementary Material: **Supplementary Material/rebuttal_visualizations/inherent_inconsistency/rotate_180**\\n\\n**Traj. 
Example 2** Demonstrates the process of transitioning the needle to align with the hole, using #15. This process requires minimal rotation. Path in the revised Supplementary Material: **Supplementary Material/rebuttal_visualizations/inherent_inconsistency/rotate_0**\\n\\n**Regarding concern 2)**, we have conducted experiments using 200, 400, 600, and 800 demonstrations from each task for the Automatic Concept Discovery (Sec. 3.1). The findings presented below indicate that, after the Automatic Concept Discovery process, the number of discovered concepts (we test 10 times for each demonstration number setting, and provide the average number of discovered manipulation concepts here) remains relatively stable, showing that the number of concepts converges as the number of demonstrations grows.\\n\\n| Number of demo. per task | Average Number of discovered manipulation concepts |\\n|:-:|:-:|\\n| 200 | 23.0 |\\n| 400 | 22.9 |\\n| 600 | 23.0 |\\n| 800 | 22.9 |\"}", "{\"comment\": \"**1.3. Abstraction of Sub-processes**\\n\\nThe Average Segment Length metric (Tab. 3, Sec. C.2) demonstrates that our manipulation concepts serve as meaningful abstractions of underlying processes rather than overfitted representations of the proprioceptive data. This metric measures the average duration (in continuous time steps) of segments labeled with the same manipulation concept. To further illustrate the abstraction achieved through our Automatic Concept Discovery process, we present a portion of Tab. 3 below, showing the Average Segment Length of our manipulation concepts:\\n\\n|Tasks| Cof. $D_0$ | Cof. $D_1$ | Cof. $D_2$ | Ham. $D_0$ | Ham. $D_1$ | Stk. $D_0$ | Stk. $D_1$ | 3 Pc. $D_0$ | 3 Pc. $D_1$ | Thd.
$D_0$ | Mug $D_0$ | Mug $D_1$ |\\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\\n| Average Segment Length | 28.3 | 31.0 | 27.6 | 33.9 | 29.5 | 26.7 | 22.5 | 28.5 | 28.8 | 33.3 | 29.3 | 28.5 |\\n\\nIn summary, the strategies incorporated in the Automatic Concept Discovery process, combined with the presented evidence, demonstrate its robustness against overfitting.\\n\\n**2. How Policy Learning Alleviates Overfitting**\\n\\nPolicy learning in our approach is exposed to the extensive variability in demonstrations provided by the MimicGen environment. This variability serves as a critical mechanism for mitigating overfitting during imitation learning, as elaborated in our response to **Reviewer UouH\\u2019s W.1**, second concern. By exposing the model to a diverse range of scenarios, including challenging and non-ideal conditions, the training process ensures that the policies are not trained on a limited set of conditions, thereby reducing the risk of overfitting during policy learning.\\n***\\n>**Q.2** Why was a hypernetwork used for a single piece of the approach?\\n\\n**A to Q.2**\\nThank you for your inquiry about the Hypernetwork. For a detailed explanation of our rationale for using it, please refer to our response to you under **A to W.2**.\\n***\\n>**Q.3** How were the 950 demonstrations in the experiment generated, and were they over all 6 tasks, or a different set/subset of tasks? I was not able to find this anywhere in the paper.\\n\\n**A to Q.3**\\nThank you for your question regarding the demonstrations used in our experiments. We have employed 950 training demonstrations from the MimicGen benchmark for each selected task and each of its levels of initial setting variation. To provide further clarity on this matter, please refer to the Implementation details section in Sec. 4.1 of our revised manuscript.\\n***\\nTo recap, we have made every effort to address the weaknesses and questions raised in the review.
We provided a new visualization experiment to clarify the consistency of our manipulation concepts. We also explained the rationale behind our architectural choices, offering insights into the reasoning behind these decisions. We have revised the manuscript to enhance the clarity of the notation, refine the descriptions of the methodology, and detail the experimental settings more precisely. Finally, we provided insights into how the proposed methods mitigate overfitting in both the Automatic Concept Discovery process and policy learning.\\n\\nWe hope our response can address your questions. If you have any further questions, please don't hesitate to contact us.\\n\\nThanks,\\n\\nThe Authors\"}", "{\"title\": \"My questions are well-answered\", \"comment\": \"Dear authors:\\nThanks for the careful and detailed rebuttal. \\n\\nFor W2, you present a few future tracks: \\\"Moving forward, we plan to delve deeper into both aspects, exploring ways to optimize data collection and labeling processes. Additionally, we aim to investigate potential bridges between these two stages to uncover synergies that could further streamline and enhance the overall workflow. \\\" this sounds good. Another valuable point for this paper that could be further investigated is the subjectivity within human labeling, e.g., we know humans cannot be 100% right, so do the manually labeled demos. More experiments may be needed to showcase the drawbacks of such subjectivity or to showcase that it doesn't matter a lot. \\n\\nHowever, this rebuttal mitigates most of my concerns(W1,2,3, Q1,2). Therefore, I will raise my rating.\"}", "{\"title\": \"Thanks\", \"comment\": \"Thank you for addressing the concerns and questions raised. 
With the answers given by the authors properly integrated into the final paper, its strengths and position can be further enhanced.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Dear Reviewer Umdm,\\n\\nThank you for your thoughtful and encouraging feedback on our work. We appreciate the recognition of our framework's originality in integrating concept discovery with closed-loop policy learning, particularly its ability to autonomously discover manipulation concepts without relying on human annotations. Also, thank you for your acknowledgment of the novelty and robustness of our Concept Selection Transformer (CST) and the thoroughness of our experimental validation, including the ablation studies. We are delighted that the clarity and depth of our methodology, technical details, and visualizations resonated with you. Next, we address the questions and comments raised in the review.\\n\\n***\\n>**W.1** Concept Interpretability: While the system autonomously discovers manipulation concepts, it would be interesting to try to explore how interpretable or human-understandable these concepts are. It would also be interesting to see if all the discovered concepts are utilized in downstream policy learning and, if not, what the unused ones look like.\\n\\n**A to W.1**\\n\\nWe sincerely appreciate your concern regarding the interpretability and utilization of the learned manipulation concepts.\\n\\n**1. Interpretability of Manipulation Concepts**\\n\\nTo address the interpretability issue, we have presented visualizations of the manipulation concepts in Fig. 4, Fig. 5, and Fig. 11-14. Notably, the annotations in Fig. 5 and Fig. 11-14 assign semantic labels to each manipulation concept based on their correspondence to human interpretations. These annotations highlight the resemblance between the manipulation concepts identified by our model and human-understandable semantics.\\n\\n**2.
Utilization of Manipulation Concepts**\\n\\nRegarding the utilization of manipulation concepts, we would like to clarify that **all discovered manipulation concepts are utilized in downstream policy learning**, and we have provided a detailed explanation of how CST utilizes these manipulation concepts in **A to Q.4** and **A to Q.5** below. We also want to clarify that **manipulation concepts\\u2019 applicability may vary depending on the specific task**. This variation aligns with human intuition; for instance, a concept such as \\\"pulling\\\" may not be relevant in certain subprocesses, such as \\\"walking to the kitchen.\\\"\\n\\nTo illustrate this, we analyzed the manipulation concepts used in two tasks: Coffee (coffee_d0, coffee_d1, coffee_d2) and Mug Cleanup (mug_cleanup_d0, mug_cleanup_d1). The results are summarized below:\\n\\n**Manipulation Concepts Used in Each Task** \\n| Task | Manipulation Concepts |\\n|:-:|:-:|\\n| Coffee | 0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 15, 17, 20, 24, 25, 26, 27, 28 |\\n| Mug Cleanup | 0, 1, 2, 3, 4, 5, 7, 8, 9, 10, 15, 17, 19, 23, 24, 25, 26, 27, 28, 29 |\\n\\n**Manipulation Concepts Exclusively Used by Each Task**\\n| Task | Manipulation Concepts |\\n|:-:|:-:|\\n| Coffee | 20 |\\n| Mug Cleanup | 19, 23, 29 |\\n\\nWe observed that the manipulation concepts unused in one task but employed in the other exhibit distinct proprioceptive state patterns and functionalities, explaining their task-specific relevance. Two examples are provided below:\\n- **Manipulation Concept 20** (exclusive to the Coffee task): This concept corresponds to the action of closing the lid of a coffee maker. The robotic arm\\u2019s gripper performs a sub-circular motion to complete the operation.
Example can be found at this path of the revised supplementary material: **Supplementary Material/rebuttal_visualizations/exclusive_concept/concept20_close_tap.mp4**\\n- **Manipulation Concept 29** (exclusive to the Mug Cleanup task): This concept represents a straightforward \\\"pushing\\\" motion used to close a drawer, with the gripper following a linear trajectory. Example can be found at this path of the revised supplementary material: **Supplementary Material/rebuttal_visualizations/exclusive_concept/concept29_pushing_drawer.mp4**\\n***\\n>**W.2** Evaluation Scope: Although the paper evaluates the framework on a variety of tasks, the focus is primarily on tabletop manipulation in simulated environments. Expanding the evaluation to more complex morphologies or real-world scenarios would strengthen the claim that the approach generalizes well.\\n\\n**A to W.2**\\nThank you for your inquiry regarding the scope of our evaluation. We acknowledge the limitations posed by our current equipment, which restrict us from performing real-world evaluations and evaluation on more complex morphologies. However, we are actively exploring the effectiveness of our methods (like human pose estimation) within our current capabilities. We appreciate your understanding and are committed to expanding our evaluation scope as resources allow.\"}", "{\"summary\": \"## Research Question\\nFor imitation learning(IL), 1)data annotation requires additional human effort; 2) the inherent subjectivity in manual data collection\\\\annotation leads to manipulation concepts that do not align well with the robot\\u2019s configuration, and such inconsistency will accumulate, and 3) IL will face further difficulties due to the varying and unpredictable conditions encountered at each step. \\n\\n\\n## Proposed Method\\nTo mitigate the above challenges, the author proposed a framework that combines automatic concept discovery with closed-loop concept-guided policy learning. 
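The per-task usage statistics above reduce to simple set operations over per-timestep concept labels. The sketch below is only an illustration, not the paper's code: the function names are hypothetical, and the toy label sequences are chosen to mirror the exclusive concepts 20, 19, and 29 reported in the tables above.

```python
def concepts_used(assignments_by_task):
    """Map each task to the set of concept indices appearing in its
    per-timestep concept assignments."""
    return {task: set(seq) for task, seq in assignments_by_task.items()}

def exclusive_concepts(usage, task):
    """Concepts used by `task` and by no other task (set difference
    against the union of all other tasks' concepts)."""
    others = set().union(*(s for t, s in usage.items() if t != task))
    return usage[task] - others

# Toy per-timestep concept labels for two hypothetical tasks.
assignments = {
    "coffee":      [0, 0, 1, 20, 20, 5],
    "mug_cleanup": [0, 1, 29, 29, 5, 19],
}
usage = concepts_used(assignments)
print(sorted(exclusive_concepts(usage, "coffee")))       # [20]
print(sorted(exclusive_concepts(usage, "mug_cleanup")))  # [19, 29]
```

The same two-pass pattern (collect usage sets, then difference against the union of the rest) scales directly to the full set of MimicGen task variants discussed above.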
Such a framework will take unlabeled demonstration for training and autonomously extract and utilize manipulation concepts directly from the robot\\u2019s proprioceptive states. Specifically, for training, the \\\"Automatic Concept Discovery\\\" module will first derive manipulation concepts automatically. Then, for both training and testing, the \\\"Concept Selection Transformer (CST)\\\" will propose and adjust manipulation concepts in real time during the robot\\u2019s interactions with the environment. Finally, the \\\"Concept-Guided Policy (CGP)\\\" utilizes the selected manipulation concepts to execute actions based on instantaneous visual input\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea is a good catch for research questions\\n\\nThe manual annotation(sample efficiency), accumulative action inconsistency, and environmental unpredictability are classic bottlenecks for imitation learning, and the general idea of \\\"self-discovered sub-policy\\\\concepts\\\" is also not something new. However, the author directly extracts the concepts from the robot\\u2019s proprioceptive states, such design not only mitigates the misalignment caused by ambiguities in human semantics but also adapts dynamically to unforeseen situations. \\n\\n2. Appropriate baseline and good performance\\n\\nThe selected baselines are new and SOTA, and reasonable to compare with the proposed method. Major advances of the proposed method are observed according to Table 1.\", \"weaknesses\": \"1. Unclear presentation\\n\\nThe figures are a little bit messy (e.g. lack the connection of each part in Fig 3, the Closed-Loop Policy part in Fig 2 is unclear ), but it's a minor point. What matters are: 1) if the discovered concepts are consistent in all demos, i.e. 
if the concepts are in a unified, task-agnostic space for all demos; 2) For evaluation, it says 950 demos for each task, so does it mean: 2.1) the training data is 950 * |# of task| for single unified policy (set of concepts) and get tested all 6 type of tasks at once( in a mixed way). Or: 2.2) for each task, it uses 950 demos to train and test only for the task.\\n\\nFor 1), I suppose the concepts are consistent in all demos according to the formula (1), but it's better to be clear at the beginning. For 2), I'm not sure if it's 2.1) or not, if it's 2.2), the performance will be questionable as it's very close to overfitting. \\n\\n2. Less practical problem setting\\n\\nThere was a trend to label the raw robot demonstration automatically, but a more practical problem is the raw demo collection is much more expensive than demo annotation, and therefore, some researchers turn to human-play data to get more data to use. And it's the same story for this paper, the expensive robot demo collection reduces the significance of this work \\n\\n3. Concern about generalizability and scalability \\n\\nThough the paper claims better performance (compared with baselines) in a more diversified initial environment, here the diversity is only expressed by object positions and placement angles. Some concerns are: 1)What if there is concept inconsistency in the demos, e.g. to do the same manipulation, in a different way; 2)With the growth of the demo amount, will the # number of concepts converge or grow fast;\", \"questions\": \"Dear author:\\n\\nGenerally, the paper is a decent work, but it needs several extra explanations and explorations.\", \"and_i_will_raise_my_rating_if_the_following_concerns_are_solved\": \"1. The concerns in the \\\"Weakness\\\" part.\\n2. 
A minor point is to add an IL SOTA baseline specific to the problem, which is not limited to a Diffusion Policy basis.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Wk2p,\\n\\nThank you for recognizing the strengths of our paper. We appreciate your acknowledgment of our novel approach in using advanced machine learning techniques for discovering robot actions and manipulation policies from unlabeled data. Your acknowledgment of our method's systematic integration of VQ-VAE, transformers, and hypernetwork is encouraging. We also appreciate your positive feedback on our method's robustness and effectiveness, demonstrated by higher success rates in diverse task simulations. Next we address your comments and questions in sequence.\\n***\\n>**W.1** Unclear presentation\\nThe primary weakness of this study is that it only contains a brief evaluation of the quality of the learned skills (i.e., are they generalized, are they more generalized than prior approaches), and a single visual example. Characterizing XSkill \\\"concepts\\\" via K-means clustering is perhaps not the most charitable interpretation of their concepts, as K-means clustering will naturally tend to collect noise in its clusters (noise is one of the stated drawbacks of the xskill results). Ideally there would be some sort of mean similarity score or some other comparison method - even something like a 2D embedding visualization of the different skills and concepts, if possible to generate, would give better faith in the stated quality of the learned concepts. This is a conference about learning representations, so I would love to see more focus on the quality of the representation of \\\"concept\\\". 
That being said, other papers, including InfoCon, also use final task success as the main comparison metric, so I don't consider this a big enough weakness to reject the paper.\\n\\n**A to W.1**\\nThank you for your thoughtful and insightful comments regarding the direct evaluation of the quality of representation for manipulation concepts.\\n\\nIn response to your concern, we have conducted a new visualization study inspired by the approach in [https://arxiv.org/abs/2410.11758](https://arxiv.org/abs/2410.11758). This study, presented in **Fig. 9 and 10** in **Sec. C.3** of the revised manuscript, examines the consistency of our manipulation concepts in relation to the proprioceptive states of the robots. The results indicate that our methodology effectively captures consistent manipulation concepts across various tasks.\\n\\nWe provide further detail on the visualization. In Fig. 9 and 10, each row corresponds to a task, and each column represents a specific manipulation concept. The sub-figure at the intersection of a given row and column visualizes all the proprioceptive states from the task in the row that are associated with the manipulation concept in the column. To facilitate visualization, we applied PCA to the proprioceptive states across all tasks, and the visualized points represent the PCA-reduced representations. The results show that non-empty sub-figures within each column exhibit similar patterns, emphasizing the consistency of the learned manipulation concepts across different tasks.\\n\\nThat said, we acknowledge a limitation of this visualization method. For manipulation concepts associated with broader ranges of proprioceptive state patterns, the visualizations may become less informative. In such cases, the points tend to densely populate the entire space, leading to distributions that are challenging to interpret. Another limitation arises when the manipulation concepts are represented as feature vectors rather than discrete symbols. 
While the method is well-suited for naturally discrete symbols, feature vectors\\u2014where each component can take on continuous values\\u2014require clustering to convert them into symbolic representations (i.e., discrete symbols representing each cluster). This clustering step introduces potential subjectivity and bias, depending on the choice of clustering method.\\n\\nLooking ahead, we aim to explore more nuanced approaches to evaluate and analyze manipulation concepts, developing assessments that offer deeper and more persuasive insights.\\n***\\n>**W.2** It is not clear why such a large variety of approaches were used in this paper. Many different transformer sizes were used, in addition to the diffusion policy and the VQ-VAE. Specifically, the use of hypernetworks also seems like an unnecessary addition to what is already a diverse and expressive set of architecture choices.\\n\\n**A to W.2**\\nThank you for your inquiry regarding our architectural choices.\\n\\nMost of the structural designs employed in our architecture are derived from established works. For example, the incorporation of designs such as Hypernetworks, VQ-VAE, and the number of Transformer layers is adapted from the InfoCon (https://arxiv.org/abs/2404.10606) design. We have made slight adjustments to these components to enhance performance in our experiments. Also, the implementation of the Diffusion Policy was influenced by its application in the MimicGen environment of our paper.\"}", "{\"comment\": \"Thank you for your thoughtful feedback. We understand your concern about complexity. While our approach does involve several learning techniques, each component serves a distinct and necessary functional purpose in our system, as explained in our answer to Weakness 2. We appreciate your recognition of how our revisions have strengthened the paper.
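The consistency visualization discussed in the answer to W.1 above relies on projecting proprioceptive states with PCA. As a minimal sketch (not the implementation behind Fig. 9 and 10, which applies a standard PCA routine across all tasks), the following plain-Python power iteration recovers the dominant principal direction of a set of states; the function name and toy data are illustrative only.

```python
def top_principal_component(points, iters=100):
    """Estimate the first principal component of D-dimensional points
    via power iteration on the (unnormalized) covariance matrix."""
    d = len(points[0])
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(d)]
    centered = [[p[i] - mean[i] for i in range(d)] for p in points]
    v = [1.0] * d
    for _ in range(iters):
        # w = C v, with C = X^T X accumulated point by point.
        w = [0.0] * d
        for x in centered:
            dot = sum(xi * vi for xi, vi in zip(x, v))
            for i in range(d):
                w[i] += dot * x[i]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Points spread mainly along the x-axis: the first PC should align with it.
pts = [(-3.0, 0.1), (-1.0, -0.1), (1.0, 0.1), (3.0, -0.1)]
pc = top_principal_component(pts)
print(abs(pc[0]) > 0.99)  # True
```

Projecting each concept's states onto the top one or two such directions is what produces the per-column scatter patterns compared in the visualization described above.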
We'll continue exploring ways to balance functionality and simplicity in our future work.\"}", "{\"summary\": \"This paper proposes a framework for long-horizon imitation learning by combining the \\u2018automatic concept discovery\\u2019 module and the \\u2018concept-guided policy learning\\u2019 module. The former is trained without human annotations of the sub-sequences of robot trajectories. Instead, they assign the concept to the trajectories in a self-supervised manner.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper proposes a method for hierarchical imitation learning by discovering the motion \\u2018concept\\u2019 without human annotation and motion policy conditioned on the discovered concept. The three tricks for discovering these concepts stably from trajectories seem to be reasonable.\", \"weaknesses\": \"Some literature potentially relating to the method or problem setting of this paper is missing. For example, learning abstract action with clustering (BeT[1]) and with VQ-VAE[2]; please contrast this work with others listed above.\\nThe proposal is tested with the Robosuite (MimicGen) benchmark, whose task is inherently highly compositional, so the concept discovery seems to be relatively easy. It is interesting to see the result with an \\u201cin-the-wild\\u201d dataset, such as dataset collected with a \\u2018learning from play\\u2019 manner [3].\\n\\n[1] Behavior Transformers: Cloning k modes with one stone https://arxiv.org/abs/2206.11251\\n[2] Behavior Generation with Latent Actions https://arxiv.org/abs/2403.03181\\n[3] https://arxiv.org/abs/1903.01973\", \"questions\": \"1)\\tWhy is only proprioception used for the automatic concept? It seems natural to include observations in addition to proprioception to assign the abstract concept to the time step. How can we generalize it to observation? 
There may be a case where the proprioception is similar (e.g., robot pose) but the observation is different (e.g., an obstacle in the environment).\\n2)\\tHow do you decide the total number of concepts/codebooks of VQ-VAE (K)? How robust is the proposal in terms of the number of concepts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We sincerely thank all reviewers for their insightful feedback and constructive comments on our manuscript. We are grateful to note that the reviewers appreciate the soundness of our concept-guided pipeline in addressing the robot manipulation problem (R1, R2, R3, R4). Furthermore, our work is acknowledged for providing a fresh perspective of concept discovery without the need for human annotations (R1, R3, R4). The reviewers have also echoed our solid experiment results across a range of comprehensive tasks, as well as the selection of appropriate baselines for comparison (R2, R3, R4). Additionally, we are grateful for the positive remarks on the clarity of our paper's presentation and writing quality (R3, R4).\", \"In response to the reviewers' feedback, we have provided additional explanations and experiments to address their concerns. The manuscript has been revised accordingly, with all modifications highlighted in orange for ease of reference. A summary of the updates is as follows:\", \"**Main Body**\", \"**Sec. 1:** Include a reference to Sec. A.1 in the Appendix, explaining the use of proprioceptive states (R#1).\", \"**Sec. 2:** Add comparisons and analyses of works with similar objectives (R#1) and those related to our methodology (R#4).\", \"**Sec. 3.2:** Refine Eq. 10 and Eq. 11 for improved clarity and correctness (R#3).\", \"**Sec. 4.1:** Revise the implementation details of experiments (R#1, R#2, R#3).\", \"**Sec. 5:** Add Sec. 
D in the Appendix to detail future works, with references to it in this section (R#1, R#3, R#4).\", \"**Appendix**\", \"**Sec. A.1:** Provide explanations and experiments justifying the use of proprioceptive states (R#1).\", \"**Sec. A.3:** Include additional references on the benefits of Hypernetworks (R#3).\", \"**Sec. B.2:** Expand on experimental details (R#1, R#2, R#3).\", \"**Sec. C.3:** Add visualizations to evaluate the consistency of discovered manipulation concepts (R#3) and additional real-world application results (R#4).\", \"**Supplementary Material**\", \"Include some videos related to our rebuttal.\", \"Once again, we sincerely thank all reviewers for their valuable feedback and thoughtful suggestions towards enhancing our manuscript. We have carefully addressed the queries and concerns raised by each reviewer. Should there be a need for further clarification to assist in advancing our score, please do not hesitate to reach out.\", \"Thank you for your review!\"]}", "{\"comment\": \"In regards to the usage of Concept Selection Transformer (CST), we want to clarify that CST is the module that makes the concept-guided policy (Sec. 3.2) align with the manipulation concepts discovered from the Automatic Concept Discovery (Sec. 3.1) process. For every time-step in a manipulation process, this alignment provides the policy with a manipulation concept, featuring the motor skill needed to carry out. Thus, CST does not \\u2018filter out useless concepts\\u2019, but rather it maps current observation with corresponding concepts, given the fact that all the concepts are useful in their own regions.\\n\\n***\\n>**Q.5** How to deal with diversity? In skill discovery works, one common issue is a diverse set of skills/concepts can be discovered, but this diversity is canceled out by a downstream skill/concept selector due to its collapse on one or two most useful skills/concepts. 
How can you retain all the concepts discovered and demonstrate the diversity there?\\n\\n**A to Q.5**\\nThank you for your question related to the diversity of manipulation concepts.\\n\\nRelated to our response to you in **A to Q.4**, the diversity of manipulation concepts we discovered is only related to the Automatic Concept Discovery (Sec. 3.1) process itself. In our experiment, the diversity is not canceled out, because the concept selector is a well-trained transformer and doesn\\u2019t collapse onto one or two dominant concepts.\\n\\nWe employ Shannon Entropy to quantify the diversity in CST's output distribution. A high entropy value indicates that the concept selector utilizes a broad range of concepts rather than collapsing to a few dominant ones. Specifically, if the concept selector were to concentrate solely on one or two concepts, the Shannon Entropy would fall within [0, 1]. Our experimental results demonstrate that the Shannon Entropy consistently exceeds 1.5, indicating that our concept selector effectively leverages a diverse set of concepts for downstream policy learning.\\n\\n| Tasks | Coffee d2 | Mug cleanup d1 |\\n|:-:|:-:|:-:|\\n| Shannon Entropy | 1.6241 | 2.4504 |\\n***\\nTo recap, we have made an effort to address the weaknesses and questions raised in the review. We conducted a detailed study on the application of our manipulation concepts across different tasks. We outlined the potential and provided a roadmap for implementing our methodology in more complex and real-world scenarios. We reviewed the relevant works provided, incorporating and citing them where appropriate. Lastly, we clarified the training details of the Automatic Concept Discovery process (Sec. 3.1) and elaborated on the role of the Concept Selection Transformer (CST), specifically CST\\u2019s use of manipulation concepts discovered during the Automatic Concept Discovery process.\\n\\nWe hope our response can address your questions. 
If you have any further questions, please don't hesitate to contact us.\\n\\nThanks,\\n\\nThe Authors\\n***\\n[1] Li, C., Blaes, S., Kolev, P., Vlastelica, M., Frey, J. and Martius, G., 2023, May. Versatile skill control via self-supervised adversarial imitation of unlabeled mixed motions. In 2023 IEEE international conference on robotics and automation (ICRA) (pp. 2944-2950). IEEE. \\n[2] Peng, X.B., Guo, Y., Halper, L., Levine, S. and Fidler, S., 2022. Ase: Large-scale reusable adversarial skill embeddings for physically simulated characters. ACM Transactions On Graphics (TOG), 41(4), pp.1-17.\\n[3] Li, C., Stanger-Jones, E., Heim, S. and Kim, S., 2024. FLD: Fourier Latent Dynamics for Structured Motion Representation and Learning. arXiv preprint arXiv:2402.13820.\"}", "{\"comment\": \"1. Manipulation concepts, akin to motor skills, predominantly describe the proprioceptive motion of robots. Considering how humans categorize manipulation concepts, we typically summarize a single motor skill as \\\"grasping\\\" rather than distinguishing it by the object (e.g., we do not differentiate between \\\"grasping an apple,\\\" \\\"grasping cloth,\\\" or \\\"grasping a phone\\\" as distinct manipulation concepts). Therefore, learning manipulation concepts based on proprioceptive states provides a more direct means to extract representations that are closely associated with the robot's inherent motion patterns. Previous works such as BeT [1] and VQBeT [2] have similarly utilized action sequences to extract discretized representations, where actions represent changes in the proprioceptive state across single time steps.\\n2. Leveraging proprioceptive states inherently **excludes contextual variations** across tasks and environments, **facilitating** the maintenance of **consistency** in learned manipulation concepts. For instance, a robot may need to apply the concept of \\\"grasping\\\" across different tasks, each involving the manipulation of distinct objects. 
The proprioceptive state sequences during these grasping subprocesses are likely to be more consistent across tasks compared to those derived from environmental observations, which can vary due to differences in objects, arrangements, and background settings in each scenario. This highlights the potential for extracting more stable and transferable manipulation concepts by focusing on proprioceptive states.\\n\\nRegarding the concern of the \\\"obstacles problem\\\", we believe that obstacles will not influence the concept discovery. If there are obstacles in the way, the action sequence will be different, which will lead to different concepts.\\n\\nTo support our response, we conducted an additional experiment by incorporating observations of the entire environment (images) into the concept discovery process for several tasks. The results are presented below (**Ours** refers to our method that utilizes proprioceptive states for discovering manipulation concepts, while **Img** refers to the method that uses camera images of the entire environment for discovering manipulation concepts). The results indicate a decline in performance when using manipulation concepts derived from images compared to those derived from proprioceptive states.\\n| Tasks | coffee_d2 | mug_cleanup_d1 |\\n|:-:|:-:|:-:|\\n| Ours | 0.72 | 0.50 |\\n| Img | 0.64 | 0.40 |\\n\\nNevertheless, we acknowledge that visual information is still important to help us to better understand the manipulation process and discover useful concepts, but the vision input has to be effectively utilized. We plan to further explore how to better utilize visual information for the discovery of manipulation concepts in our future work (Please refer to Sec.D in our revised Appendix).\\n\\n***\\n>Q.2 How do you decide the total number of concepts/codebooks of VQ-VAE (K)? 
How robust is the proposal in terms of the number of concepts?\\n\\n**A to Q.2**\\nThank you for your thoughtful question regarding the size of the VQ-VAE codebook.\\n\\nThe selection of the size of the VQ-VAE codebook was based on a trial-and-error process. Our goal is to strike a balance between discovering a sufficient number of manipulation concepts and maintaining a high utilization rate of the codebook items.\\n\\nRegarding utilization rates, our findings indicate that increasing the size of the VQ-VAE codebook beyond a certain point does not significantly enhance the number of discovered manipulation concepts (we find that when K>30, the number of manipulation concepts discovered is always around 20\\\\~30; that is, only 20\\\\~30 of the VQ-VAE codebook items were actively utilized to represent manipulation concepts). We believe this is a reasonable outcome because manipulation concepts often exhibit high **transferability** across various tasks. Consequently, the environments and tasks in our experiments may not require a large number of distinct manipulation concepts.\\n\\nTo provide a more detailed analysis, we conducted additional experiments using a VQ-VAE with 40 codebook items. (This choice was informed by the observations mentioned above, as we did not find a need to test with significantly larger codebook sizes.)\\n\\nWe compare the number of manipulation concepts discovered and the policy performance guided by manipulation concepts between Automatic Concept Discovery with a VQ-VAE with 30 codebook items (the main results reported in our work) and Automatic Concept Discovery with a VQ-VAE with 40 codebook items.
As we are approaching the final stages of the discussion process, we wanted to follow up to see if you had the opportunity to review our rebuttal. Your feedback is crucial for us to improve and advance our research further.\\n\\nCould you kindly provide your response to our rebuttal at your earliest convenience so we can have a chance to address any remaining questions you may have? \\n\\nWe greatly appreciate your time and effort in reviewing our work and providing constructive feedback.\\n\\nThanks,\\n\\nThe Authors\"}", "{\"summary\": \"This paper presents a method for discovering robot actions and manipulation policies from unlabeled demonstration data. It presents techniques for discovering generic action/manipulation concepts and their goals, evaluating progress towards those goals, and generating policies to achieve the goals. This is done using various modern machine learning techniques and problem/loss function formulations. For concept discovery, A VQ-VAE is used to discover concepts and put them into a finite codebook. A transformer model is used for goal state detection. A hypernetwork and another transformer are used for goal state evaluation, and a final transformer is used for goal consolidation. All of these networks are jointly trained using a large combined loss function. Then, concept selection and policy generation are learned using a combination of a transformer and a diffusion policy. Finally, the approach is validated by training on a set of 950 robot demonstrations in 6 different tasks, then evaluated with randomized starting positions (all in simulation). 
The success rate of different tasks learned via the proposed method is higher than for competing metrics, even/especially in the presence of perturbing noise.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper compares the proposed approach to a set of highly relevant prior work in a fairly direct head-to-head comparison and the proposed approach nearly universally outperforms the prior work by a good margin.\", \"The approach is a novel combination of several modern techniques to solve a complex open problem in robotics. It expands upon work such as InfoCon by adding policy learning directly into the framework of manipulation concepts.\", \"The paper is written clearly and the construction of the experiments is well-explained (training, structure of network, etc.), including the re-implementation of baselines.\"], \"weaknesses\": [\"The primary weakness of this study is that it only contains a brief evaluation of the quality of the learned skills (i.e., are they generalized, are they more generalized than prior approaches), and a single visual example. Characterizing XSkill \\\"concepts\\\" via K-means clustering is perhaps not the most charitable interpretation of their concepts, as K-means clustering will naturally tend to collect noise in its clusters (noise is one of the stated drawbacks of the xskill results). Ideally there would be some sort of mean similarity score or some other comparison method - even something like a 2D embedding visualization of the different skills and concepts, if possible to generate, would give better faith in the stated quality of the learned concepts. This is a conference about learning representations, so I would love to see more focus on the quality of the representation of \\\"concept\\\". 
That being said, other papers, including InfoCon, also use final task success as the main comparison metric, so I don't consider this a big enough weakness to reject the paper.\", \"It is not clear why such a large variety of approaches were used in this paper. Many different transformer sizes were used, in addition to the diffusion policy and the VQ-VAE. Specifically, the use of hypernetworks also seems like an unnecessary addition to what is already a diverse and expressive set of architecture choices.\", \"**Style notes**\", \"perhaps this is a notational convention I am unaware of, but the parentheses in epsilon^(n) seem like unnecessary clutter.\", \"Should eq 11 be negative to account for the fact that the log term will always be negative?\", \"In eq 10, is the selected action from pi_D conditioned on the k selected by p_CST? If so, this could be made clearer, perhaps by splitting into two equations.\"], \"questions\": [\"At some point, the number of different networks + their parameters runs a risk of over-parameterization and overfitting. This is especially true since it seems that the set of demonstrations were collected over the same set of tasks as used in policy generation and success measurement. Combined with the lack of qualitative cross-task concept comparison or similarity analysis, I do have overfitting worries here. I have no evidence to directly support this flaw other than my own experience training large robotics models, but would still like to know the authors' thoughts on how overfitting is avoided in the proposed approach.\", \"Why was a hypernetwork used for a single piece of the approach?\", \"How were the 950 demonstrations in the experiment generated, and were they over all 6 tasks, or a different set/subset of tasks? I was not able to find this anywhere in the paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
9e5syenoVE
Multiple-play Stochastic Bandits with Prioritized Resource Sharing
[ "Hong Xie", "Yanying Huang", "Haoran Gu", "Defu Lian", "Enhong Chen" ]
This paper proposes a variant of multiple-play stochastic bandits tailored to resource allocation problems arising from LLM applications, edge intelligence applications, etc. The proposed model is composed of $M$ arms and $K$ plays. Each arm has a stochastic number of capacities, and each unit of capacity is associated with a reward function. Each play is associated with a priority weight. When multiple plays compete for the arm capacity, the arm capacity is allocated in a larger priority weight first manner. Instance independent and instance dependent regret lower bounds of $\Omega( \alpha_1 \sigma \sqrt{KM T} )$ and $\Omega(\alpha_1 \sigma^2 \frac{MK}{\Delta} \ln T)$ are proved, where $\alpha_1$ is the largest priority weight and $\sigma$ characterizes the reward tail. When model parameters are given, we design an algorithm named \texttt{MSB-PRS-OffOpt} to locate the optimal play allocation policy with a computational complexity of $O(M^3K^3)$. Utilizing \texttt{MSB-PRS-OffOpt} as a subroutine, an approximate upper confidence bound (UCB) based algorithm is designed, which has instance independent and instance dependent regret upper bounds matching the corresponding lower bound up to factors of $K \sqrt{ \ln KT }$ and $\alpha_1 K$ respectively. To this end, we address nontrivial technical challenges arising from optimizing and learning under a special nonlinear combinatorial utility function induced by the prioritized resource sharing mechanism.
[ "Multiple-play stochastic bandit", "prioritized resource sharing", "regret bounds" ]
Reject
https://openreview.net/pdf?id=9e5syenoVE
https://openreview.net/forum?id=9e5syenoVE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nu7SLbkdGy", "bv6jKqOjsl", "YuX8rdTJ2h", "Y8dUq8gKN3", "GmbGvU3W9c", "1WEcMArSOg" ], "note_type": [ "official_review", "official_review", "decision", "official_review", "meta_review", "official_review" ], "note_created": [ 1730496284652, 1730621364404, 1737524300948, 1730702784712, 1733620638384, 1730523314843 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14147/Reviewer_Rvss" ], [ "ICLR.cc/2025/Conference/Submission14147/Reviewer_cHu6" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14147/Reviewer_CXEK" ], [ "ICLR.cc/2025/Conference/Submission14147/Area_Chair_5jsU" ], [ "ICLR.cc/2025/Conference/Submission14147/Reviewer_oKpD" ] ], "structured_content_str": [ "{\"summary\": \"This paper considers a variant of the stochastic bandit problem where players can select multiple arms from a pool consisting of a fixed number of arms, with each arm's capacity following a time-invariant distribution. Players receive rewards only if the chosen arms have sufficient corresponding capacity. The objective is to maximize cumulative rewards over a fixed-horizon game. To address this problem, the paper proposes a new algorithm based on the philosophy of combinatorial bandits, along with learning the capacity distribution, under the assumption of an oracle's existence. For the proposed algorithm, the paper provides both lower and upper bound analyses on the regret to demonstrate its (near) optimality. Numerical experiments are conducted to validate the proposed algorithm and demonstrate improvements compared to benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper considers a novel problem setting where the arm capacity is stochastic, unlike existing work.\\n2. The paper also develops algorithms specifically to address this proposed problem setting.\\n3. 
The theoretical effectiveness of the proposed algorithm is supported through both lower and upper bounds.\\n4. The numerical experiments help illustrate the algorithm\\u2019s performance.\", \"weaknesses\": \"1. The paper mentions a use case for this problem setting in the LLM context. However, I am curious if this could be more practical, specifically whether it is something that could feasibly be deployed in that context.\\n\\n2. The existence of an oracle depends on locating the maximum weight matching, which is referenced from existing work. I wonder if this reference includes any theoretical guarantee supporting the claim that this oracle is theoretically optimal. Further justification would be beneficial here.\\n\\n3. The lower bound analysis lacks technical novelty.\\n\\n4. The real challenge posed by stochastic capacity is unclear. Existing work assumes deterministic arm capacity without requiring it to be known. It is difficult to assess whether the stochastic realization actually makes the problem more challenging (due to randomness) or possibly easier (given known observations).\", \"questions\": \"I would refer to the weakness part. Any responses/comments would be very helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a new framework called Multiple-Play Stochastic Bandits with Prioritized Resource Sharing (MSB-PRS), which belongs to the research area of multi-play multi-armed bandit. Within this framework, an efficient algorithm is developed to identify the optimal play allocation policy while maintaining low computational complexity. The study establishes lower bounds for both instance-independent and instance-dependent regret. Additionally, the proposed algorithm is based on the application of the classic Upper Confidence Bound (UCB). 
It maintains the same per-round computational complexity and achieves sublinear regret upper bounds that closely align with the established lower bounds.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The multi-play multi-armed bandit (MP-MAB) problem is a significant model in online learning, and I appreciate the efforts that the authors invest in solving an interesting model of it, that is, the MSB-PRS problem. For it, an algorithm has been developed to identify the optimal play allocation policy with a specific complexity. The upper bounds on regret are close to the lower bounds (up to some factors) in both instance-dependent and instance-independent scenarios.\", \"weaknesses\": \"My main concern is that while this work provides rigorous theoretical analysis and proofs, I am still not entirely clear on its contributions.\\n\\nFirst, although the problem model is somewhat introduced, I find it challenging to connect it with specific examples of resource allocation. While the authors mention its applicability in high-interest areas like LLMs, they do not provide corresponding explanations. Is there a way to contextualize its application in LLMs, or could examples of practical applications be included? \\n\\nSecond, the authors offer some related work, but I still struggle to compare them with this study. To address this, I suggest including a table to compare the results of this work with previous findings. \\n\\nFinally, the experiments are overly simplistic; the experimental setups are not thoroughly described, nor are comparisons made with other studies. 
While I appreciate the authors' efforts in deriving theoretical results, I believe there is still significant room for improvement in the presentation of this work.\", \"questions\": \"I do not have any questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper works within the MP-MAB framework and proposes a variant of the framework catered to LLM and edge intelligence applications. With these applications in mind, the work imposes additional structure in the form of their MSB-PRS Model.\\n\\nIn Section 3 they introduce the model and provide the problem formulation. In Section 4 they characterize the hardness of the problem and fundamental learning limits by providing lower bounds, in Section 5 they present their UCB based learning algorithms, and in Section 6 they present experiments validating their approach and comparing it to baselines from the literature.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. New variant of MP-MAB with resource prioritization.\\n2. Lower bounds characterizing the hardness of their problem variant\\n3. UCB based algorithm for learning - ApUCB\\n4. Instance dependent and instance independent upper bounds on ApUCB\", \"weaknesses\": \"In my opinion, while there seem to be innovative and impactful ideas in the paper, it could use much better presentation before being accepted. In this bullet I would highlight how Section 3 on the MSB-PRS Model is barely readable by being cluttered by endless notation. 
Such a presentation makes it incredibly hard to take away any intuitive mental pictures of the setup that could then serve as the basis of appreciating the methods presented in the remaining paper.\", \"actionable_suggestions\": \"Please add a high-level overview paragraph in Section 3 before introducing the model mathematically using the complete notation. Please include an illustrative example or visual representation of the MSB-PRS model alongside this new paragraph.\\n\\n2. The paper does a poor job of motivating their target applications with the exposition being limited to a few lines in the introduction with vague wordings. In particular the only reference to LLM applications reads the following in the introduction: \\\"in LLM applications, reasoning tasks and LLM instances can be modeled as plays and arms respectively. .... priority quantified by price, membership hierarchy\\\", which by itself gives very little insight into how the modeling in the paper is good for this application.\\n\\nI would encourage the authors to expand on their motivating example by dedicating a subsection to explaining how the MSB-PRS model applies to LLM and edge intelligence applications.\", \"questions\": \"1. On pg 8 around Eqn 6 the authors try to say that computing exact-UCB is intractable and introduce UCB instead. In the process they say that \\\"Exact-UCB may attain the max value at different selections of \\\\mu, P for different action values especially when the confidence band fails\\\". It is not at all clear what is meant by this sentence, and better explanation and writing are needed in this regard.\\n\\n2. What are the baselines OnlinActPrf and -v version doing exactly? If this paper is the one introducing this new MSB-PRS framework structure then how was this approach from the literature adapted to make a fair comparison with your novel approach? 
These details are currently missing from both Section 6 and Appendix B.\\n\\nI would suggest that the authors provide a brief description of the OnlinActPrf and OnlinActPrf-v algorithms in Section 6 or Appendix B explaining how they were adapted to the MSB-PRS setting. Additionally, please discuss the fairness of the comparison given the differences in problem formulation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a new framework called Multiple-Play Stochastic Bandits with Prioritized Resource Sharing (MSB-PRS), which belongs to the research area of multi-play multi-armed bandit. Within this framework, an efficient algorithm is developed to identify the optimal play allocation policy while maintaining low computational complexity. There are many concerns raised by the reviewers for motivation and contribution, which are not addressed by the authors.\", \"additional_comments_on_reviewer_discussion\": \"NA\"}", "{\"summary\": \"The paper extends the multi-play multi-armed bandit (MP-MAB) model to include a prioritized resource-sharing mechanism, referred to as MSB-PRS. The model targets resource allocation scenarios in LLM and edge intelligence applications, where different plays are assigned different priorities, and each arm has multiple but random capacities. The authors establish both instance-independent and instance-dependent regret lower bounds for the model and propose an efficient learning algorithm, MSB-PRS-ApUCB, which achieves order-optimal regret bounds. The authors also conduct simulations based on synthetic data to validate their proposed algorithm.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The introduction of prioritized resource sharing into the multi-play bandit framework is novel, enabling random arm capacities and differentiated priorities for various plays, which are well-suited for practical applications such as LLM and edge intelligence.\\n\\n2. The proposed MSB-PRS-ApUCB algorithm is thoughtfully designed and well-motivated, achieving regret bounds that closely align with the established lower bounds, up to acceptable factors.\\n\\n3. The synthetic experiments provide a good assessment of the performance of MSB-PRS-ApUCB compared to baseline algorithms.\\n\\n4. The paper is well-structured and clearly written, making it easy to follow.\", \"weaknesses\": \"1. The learning component of the algorithm and the regret analysis are fairly standard, as there exists an optimal matching between players and arms, and what remains is to target this optimal matching through a UCB strategy, as done in much of the literature. However, I acknowledge that finding the optimal matching is not easy due to the nonlinear combinatorial structure of the utility functions.\\n\\n2. The concentration bound in Lemma 5.4 seems incorrect. I believe the authors should use Lemma 9 from [Maillard, et al., 2017] instead of Lemma 10. Consequently, the inequalities in lines 977, 1006, and 1053 also appear to be incorrect. The authors should check these carefully.\", \"questions\": \"1. In line 190, the reward is defined as being scaled by the priority parameter. Could the authors provide a practical example to clarify this definition?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9dfRC2dq0R
ChinaTravel: A Real-World Benchmark for Language Agents in Chinese Travel Planning
[ "Jie-Jing Shao", "Xiao-Wen Yang", "Bo-Wen Zhang", "Lan-Zhe Guo", "Yu-Feng Li" ]
Recent advances in Large Language Models (LLMs), particularly in language reasoning and tool-use capabilities have sparked the rapid development of \emph{Language Agents} to assist humans across various real-world applications. Among these, travel planning stands out as a significant domain, presenting both academic challenges and practical value due to its inherent complexity and real-world relevance. However, existing travel plan benchmarks do not test language agents with human users or their ability to follow customized requirements, both of which are vital for deploying them in real-world applications. In this paper, we propose ChinaTravel, a new benchmark tailored to authentic Chinese travel requirements, aiming to provide a more realistic evaluation framework for future language agents. We collect the travel requirements through questionnaires and employ an efficient and faithful evaluation process with 46 metrics covering feasibility, constraint satisfaction, and preference comparison. Moreover, we identify three challenges in the real-world deployments of travel planning, including \emph{constraint recognition}, \emph{concept openness}, and \emph{customized preference}. The empirical studies show that even state-of-the-art neural-symbolic agents succeed in 51.3\% constraint validation of human queries. Our findings point to the need for methods that can improve the ability of agents to understand diverse intentions or keep track of constraints with emerging concepts from human requirements.
[ "Language Agents", "Evaluation", "Travel Planning", "Neural-Symbolic Learning" ]
https://openreview.net/pdf?id=9dfRC2dq0R
https://openreview.net/forum?id=9dfRC2dq0R
ICLR.cc/2025/Conference
2025
{ "note_id": [ "voVm7J8z9a", "p3IwE0S7il", "JCdLsnBznh", "FjqmqAUACE", "FVRZYfA770" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730037612262, 1731163356093, 1730499749693, 1732550350424, 1730645297678 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5928/Reviewer_Pfyh" ], [ "ICLR.cc/2025/Conference/Submission5928/Reviewer_oexi" ], [ "ICLR.cc/2025/Conference/Submission5928/Reviewer_gn6Q" ], [ "ICLR.cc/2025/Conference/Submission5928/Authors" ], [ "ICLR.cc/2025/Conference/Submission5928/Reviewer_DdDN" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces the ChinaTravel benchmark, designed to address real-world travel planning requirements specific to China. The benchmark aggregates a large number of POIs across 10 cities, creating a rich sandbox for evaluation. Furthermore, realistic travel scenarios were developed using questionnaires, and queries were generated through either LLMs or human annotators. Agent performance is evaluated with 46 metrics encompassing feasibility, constraint satisfaction, and preference alignment. Language agents utilizing step-by-step search achieved a 51.3% success rate, highlighting both the complexity of the benchmark and the limitations of current language agents.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work provides a comprehensive testbed for evaluating the travel planning abilities of current language agents in the context of Chinese travel requirements, with extensive data and high-quality, LLM-generated, or human-annotated queries.\\n2. The proposed benchmark is valuable for future research in travel planning for China, offering practical and substantial data entries.\\n3. 
The paper is well-structured and easy to follow, with a clear and well-presented structure and analysis.\", \"weaknesses\": \"1.\\tThe success rate of search-algorithm-powered language agents falls below expectations since DFS theoretically guarantees a solution if one exists (although the search space may be vast). The reported success rate of only 51.3% is attributed to \\\"arbitrary descriptions for defined concepts\\\" and the \\\"emergence of undefined concepts.\\\" Given that LLMs cannot access the entire database, they can only plan with the information provided in the environment. This limitation is more of a design issue within the system itself than a deficiency in the LLMs, as expecting agents to plan beyond their accessible information is unreasonable, even for humans. Solutions such as fuzzy matching might mitigate this issue. I wonder if the results could approach the near 100% success rate achieved in the previous approach designed for TravelPlanner[1].\\n2.\\tIn the experiment, before performing DFS, all constraints are extracted, and then hard-coded rules guide the agents during the search process. This raises the question: if constraints are already extracted and hard rules still need to be manually coded, what is the added value of using language agents? Would it not be more efficient to complete the system without involving language agents at all? If the goal is to develop an autonomous agent, extracting constraints and designing code manually seems counterproductive.\\n3.\\tThis work seems more like an extension of the previous TravelPlanner, adding extra constraints and adapting the setting to suit the China Travel context rather than introducing a new novel and timely benchmark the community needs now.\\n\\n### Reference\\n[1] Hao Y, Chen Y, Zhang Y, et al. Large Language Models Can Plan Your Travels Rigorously with Formal Verification Tools[J]. arXiv preprint arXiv:2404.11891, 2024.\", \"questions\": \"1. 
In Algorithm 1, how does the validation function operate? Is it determined by the agents or by external hard rules?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Real-world planning remains a challenging task for LLM agents and requires significant research effort. While neuro-symbolic reasoning demonstrates great potential in a few travel planning scenarios, real-world planning brings additional challenges. This paper proposed to create a real-world travel planning dataset with human preferences and logical constraints by considering information from 10 cities of China. The proposed approach consists of 5 main steps including manual database design, automated data generation using LLMs, automatic validation, curating requirements and constraints from human and finally creating a preference data to accommodate human expectations. A depth-first greedy planning agent is then proposed to satisfy the travel planning requirements. Experimental results demonstrate that the greedy solution typically performs better than ReAct agents, but still faces significant challenges in handling hard requirements and preferences.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Real-world travel planning is a challenging and complex task where LLM agents typically perform poorly. The author developed a reasonable size dataset with the help of user study and LLM. The real-world travel itinerary constraints and preferences are collected directly from humans which are then leveraged to generate additional synthetic information for large-scale data creation. This dataset can be useful to expand future research on LLM agentic planning. 
Moreover, it is demonstrated that a simple neuro-symbolic greedy method can outperform ReAct-type agents.\", \"weaknesses\": \"Although I appreciate the effort to create a large-scale travel planning dataset that will be helpful for future research, I have some concerns regarding the technical and experimental novelties of the paper:\\n1. The constructed dataset has only 154 queries, which are generated from a survey of 250 users. Therefore, it is not clear to me whether the dataset covers all possible real-world travel plan constraints and user preferences. \\n2. Additional constraints and requirements are generated from LLMs, but it is not described how these are validated. An automated validation method would have been great to generate large-scale data with high coverage of potential constraints.\\n3. The proposed NeSy planning algorithm is a simple greedy extension of neuro-symbolic planning. So, it is hard to validate the quality of the solution generated by the NeSy planner. Is it possible to identify some bounds on how far the solution is from optimal preference?\\n4. How is the preference data handled by the NeSy planner? For travel planning, it is essential to understand how LLMs deal with human preferences (e.g., optimized cost, minimal travel time and so on).\", \"questions\": \"1. How do you validate the accuracy of LLM-generated constraints and requirements?\\n2. How can the NeSy planner identify human preferences and come up with an optimized solution?\\n3. How do you guarantee the coverage of constraints and preferences in the proposed dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"This paper presents ChinaTravel, a solution targeting multi-point-of-interest (multi-POI) itineraries within selected cities in China. 
Notably, the authors adapt well-known in-context learning techniques and a neural-symbolic approach to solve the multi-POI planning problem and evaluate their method on the ChinaTravel benchmark. A good contribution of the paper is the provision of 46 evaluation metrics that comprehensively assess:\", \"23 environment constraints\", \"10 hard logical constraints\", \"13 preference requirements\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Originality: This is a solution targeting multi-point-of-interest (multi-POI) itineraries within selected cities in China.\", \"Quality: The empirical study is robust. The technique used to produce the baseline leveraged in-context-learning-based methods and neuro-symbolic techniques, covering a good range of applicable approaches to solve the task with LLM agents.\", \"Clarity: The paper is well-structured, with clear descriptions of each step in data collection and validation, providing readers with a transparent view of the methods.\", \"Significance: With an array of evaluation metrics, the paper provides a detailed assessment of the generated travel plans, enhancing the reliability of LLM-agent performance.\"], \"weaknesses\": \"It is important to note that most of these constraints (environmental, logical) are extensions from existing works, enhancing their coverage aspect.\\n\\nMajor Comments\\n1. Comparative Baseline:\\nThe empirical study could benefit from a broader range of reasoning-action-based frameworks. In particular, it would be valuable to see a comparison with \u201cReflexion,\u201d a reasoning-action framework considered in the TravelPlanner benchmark. This would offer a more comprehensive evaluation of ChinaTravelPlanner\u2019s performance relative to established approaches.\\n2. Alternative Model Performance (Section 4.2, Line 432):\\nIn Section 4.2, the paper mentions that \u201c...with many models failing entirely...\u201d. 
Could you clarify whether additional models beyond those in Table 2 were evaluated? If so, details on their performance, especially regarding delivery, environmental, and logical pass rates, would add clarity and highlight the robustness of the proposed method.\\n3. Table 2 and Preference Requirements Constraints:\\nThe paper introduces preference requirements constraints, yet they are not referenced in Table 2. Including these would provide a fuller picture of constraint adherence and improve the interpretability of results across all introduced metrics.\\n\\nMinor Comments (Spelling and Grammar Corrections)\\n* Line 140: Please revise \\u201cThey integrates\\u2026\\u201d for subject-verb agreement to improve grammatical fluidity.\\n* Line 454: Correct \\u201cNesy planning\\u201d to \\u201cNeSy Planning,\\u201d and add a missing period at the end of this sentence for completeness.\", \"questions\": \"Overall, the paper provides a meaningful contribution to multi-POI itinerary planning within an agentic framework; however, it appears to be an incremental extension of existing datasets and methods rather than a substantial innovation. The work closely parallels the objectives of the TravelPlanner paper, though adapted for a different geographical focus. The main advancements seem to be limited to alternative baseline dataset and evaluation metrics, rather than introducing new approaches or frameworks. Addressing the questions asked above could help clarify the paper\\u2019s contributions more and enhance its impact.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper introduces ChinaTravel, a realistic travel planning benchmark designed to accommodate diverse Chinese travel requirements. 
Featuring a sandbox environment, ChinaTravel encompasses 10 top travel cities in China with hundreds of evaluation instances. Compared to existing benchmarks, ChinaTravel presents greater challenges due to its flexible travel requirements and a wide range of realistic metrics. Results from the STOA neural-symbolic agents, including GLM, DeepSeek, and GPT-4, illustrate the benchmark's effectiveness in distinguishing the capabilities of various large language models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-organized and easy to understand.\\n2. The travel planning problem presents a realistic task for evaluating the capabilities of large language models. The proposed challenging benchmark advances the field of travel planning by encouraging the development of more practical solutions.\\n3. As a complex task with diverse inputs and evaluation procedures, the benchmark-building process introduced in the paper is both appropriate and clearly articulated.\", \"weaknesses\": \"1. Compared to TravelPlanner, the differences appear to be limited, even though the authors have highlighted some distinctions in Table 1. While the most significant distinction lies in spatial coverage, other differences, such as constraints and metrics, appear less pronounced. For instance, I believe one of the most crucial metrics is the ability to guide users in articulating their requirements and to interact with them through multi-turn dialogues to refine the schedule.\\n\\n2. The evaluation could be enhanced by including more results from various large language models, particularly open-source models. For example, you can consider Qwen2.5[1], LLama3.1[2] and Mistral-Small[3], in your experiments.\\n\\n[1] Qwen2.5 https://qwenlm.github.io/blog/qwen2.5/\\n\\n[2] Llama3.1 https://ai.meta.com/blog/meta-llama-3-1/\\n\\n[3] Mistral-Small https://huggingface.co/mistralai\\n\\n3. 
Based on the cases presented in Figure 4, the performance of large language models may be underestimated. For instance, the arbitrary descriptions in the two cases are not particularly challenging for large language models, as they simply require common sense knowledge within the travel service domain.\", \"questions\": \"1. Additional results from a variety of large language models, particularly open-source models, would be beneficial. For example, you can consider Qwen2.5, LLama3.1 and Mistral-Small, in your experiments.\\n\\n2. Could the authors provide a more comprehensive explanation of ChinaTravel\\u2019s unique contributions? For instance, what additional evaluation metrics and travel requirements are introduced, how do these features benefit the research community, and why are they significant and challenging?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9dFCm4uZo8
Exploring Compositionality in Vision Transformers using Wavelet Representations
[ "Akshad Shyam Purushottamdas", "Pranav K Nayak", "Yashmitha Gogineni", "Sumohana S. Channappayya", "Konda Reddy Mopuri" ]
Insights into the workings of the transformer have been elicited by analyzing its representations when trained and tested on language data. In this paper, we turn an analytical lens to the representations of variants of the Vision Transformers. This work is aimed to gain insights into the geometric structure of the latent spaces of each encoding layer. We use representation-similarity measures, and representation-visualization approaches to analyse the impact of training regimes on the latent manifolds learned. We then use our approach to design a test for quantifying the extent to which these latent manifolds respect the compositional structure of the input space. We restrict our analysis to compositional structure induced by the Discrete Wavelet Transform (DWT). Interestingly, our empirical analysis reveals that ViT patch representations give notions of compositionality with respect to the DWT primitives.
[ "Vision Transformers", "Explainability", "Compositionality", "Latent Representations" ]
Reject
https://openreview.net/pdf?id=9dFCm4uZo8
https://openreview.net/forum?id=9dFCm4uZo8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wUQ8zbLCUw", "myXY7V9Po4", "mW26YyOuIj", "kjvPVVTvCQ", "kMHXCGByuV", "if7n838E2a", "aPIghO2sWv", "R61KeVNvwc", "O3GsaXZnZr", "NvQ7XUswlk", "AWmK3uEV6L", "5vsOMmKXe2" ], "note_type": [ "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732734321175, 1737524285258, 1732736594984, 1734394130492, 1732792343097, 1730772811092, 1730245403704, 1732908786961, 1732781416580, 1732736637523, 1732792283088, 1730710190622 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13839/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13839/Authors" ], [ "ICLR.cc/2025/Conference/Submission13839/Area_Chair_PaJ1" ], [ "ICLR.cc/2025/Conference/Submission13839/Authors" ], [ "ICLR.cc/2025/Conference/Submission13839/Reviewer_guFm" ], [ "ICLR.cc/2025/Conference/Submission13839/Reviewer_cTeu" ], [ "ICLR.cc/2025/Conference/Submission13839/Reviewer_guFm" ], [ "ICLR.cc/2025/Conference/Submission13839/Authors" ], [ "ICLR.cc/2025/Conference/Submission13839/Authors" ], [ "ICLR.cc/2025/Conference/Submission13839/Authors" ], [ "ICLR.cc/2025/Conference/Submission13839/Reviewer_iA5z" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their detailed response and valuable feedback. 
We hope to answer all of their points in a similar fashion.\\n\\n## **Weaknesses**\\n### **Clarity**\\n> **1)** Figure 1: What does it mean here for the original image\u2019s representation to be compared to the composed image representation?\\nDoes it mean that the maps we see in the figure are the result of the comparison, or are these just the composed representations and the\\ncomparison was done outside of the figure?\\n\\nThe SSIM maps shown in the figure are the results **after** comparison of the original representation and the composed representation at every layer of the encoder. We have included the clarification in the caption of the figure in the revised draft.\\n\\n> **2)** Section 3.1 ViT: What was the ViT model used for this experiment? What was it pre-trained on? The section does not explicitly specify\\nthese experimental conditions.\\n\\nWe thank the reviewer for pointing this out, and we have included the clarifications (L321-L322) in the revised draft. To answer this query, the models used are ViT-Base and ViT-Large, both of which are pre-trained on the ImageNet-21k dataset.\\n\\n> **3)** L214-L215: What is the purpose of f_eta? I understand that this is defined in Andreas 2019, but its definition is missing from this paper.\\nI think explicitly defining it and its role would improve the clarity of this section.\\n\\nWe thank the reviewer for pointing this out, and we have included the clarifications in the revised draft. The clarifications are included at L200-L201 and L225-L226 in the revised draft. To answer the question, $\\hat{f_{\\eta}}$ is a compositional approximation of the complex model $f$. It can be used to measure the compositionality of the model.\\n\\n> **4)** What do the C\u2019s and a\u2019s represent?\\n\\nThe Ca's and Cd's represent the approximate and detail coefficients for each level of decomposition. The approximate is the output\\nof a low-pass filter and the detail is the output of a high-pass filter. 
For the next level, the approximate is passed through the low-pass and high-pass filters to get the coefficients of that level. \\n\\n> **5)** What are LL, LH, HL, and HH?\\n\\nLL, LH, HL, and HH are the corresponding Low-Low, Low-High, High-Low, and High-High subbands from the wavelet decomposition. We have edited the figure and corrected the labeling. The labels for each image now follow the previously defined notations.\\n\\n### **Soundness**\\n\\n> **1)** Section 3.1: I believe this section aims to show that simply adding the DWT wavelets with equal weights does not yield a correct composition. However, this claim is supported by showing results over a single image from a single ViT model. I believe this experiment\\nwould need a much larger sample size over images/models in order to make such a broad claim.\\n\\nThis is a minor confusion. The SSIM map (Figure 1) was computed over a representative image, but the plot (in the initial submission) showing the CKA scores (Figure 2) takes 200 images and averages their performance over all encoder layers. Taking the reviewer\u2019s point into account, we have conducted another experiment by taking a total of 10000 images (10 images per class from the ImageNet-1K dataset). The current figure in the revised draft shows these results.\\n\\n> **2)** L216-217: The original compositionality formulation from Andreas 2019 is modified to shift the application of the encoder function\\nfrom the input space to an arbitrary intermediate representation space within the model. If I\u2019m understanding correctly, doesn\u2019t this\\nviolate the core premise of the problem statement? The purpose of compositionality tests is to find homomorphisms between the input\\nspace and the representation space; this is important because the input space comes from the data generating function. I\u2019m not sure if it\\nmakes sense for this test to be defined for the transformation from one hidden layer to another. 
Minor, but I would suggest modifying the\\nsentence \\u201cinstead of drawing exact parallels, we tweak this statement to suit our analysis\\u201d, since the wording gives the impression that the\\nformulation was modified to suit the narrative of the paper, rather than the needs of the original empirical question being asked.\\n\\nWe thank the reviewer for this observation. We regret the confusion caused by overlooking it. The proposed framework investigates the homomorphism from the input space to the embedding space learned by the ViTs. We have rectified the mistake in the\\nrevised draft (L228-L234). To clarify, the model is indeed taking inputs from the input space (which is the image space), whose derivatives (wavelet decompositions) are fed to the model $f$ . To check for the compositionality of layer $l$, the encoder representations of that layer (transformation from the input to that layer) are fed to the composition function $\\\\hat{f_{\\\\eta}}$\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Official Response to Reviewer iA5z : Part One\", \"comment\": \"We thank the reviewer for their constructive feedback. We hope to answer their queries point by point.\\n\\n### **Weaknesses**\\n\\n> **1)** My main concern is that the presented results do not convincingly demonstrate compositionality. Rather than defining true combinations of wavelet primitive representations, it appears that the learned weights mainly select the low-pass filtered image (Table 3). Indeed, it is not particularly surprising that the images in Figure 5 perform similarly to the original images. \\n\\nWe thank the reviewer for this comment. As mentioned in section 3.1 of the draft, our motivation for a learned composition of the\\nwavelet primitives instead of the true combination (summing each primitive) of primitives in the representation space is based on our\\ninitial experiments of SSIM scores and CKA plots. 
These results show that the \u201ctrue\u201d combination does not demonstrate compositional\\nbehavior. We then ask: if any compositional behavior does exist, could it be learned? Our results confirm that such a composition exists for the final encoder layer for the level-1 DWT decomposition. To answer the reviewer\u2019s concern as to the selection of mainly the low-pass filtered image, we conducted an experiment to see how the low-pass filtered image\u2019s representation performs compared to the original image\u2019s representation. We take 10 images per class from all the 1000 classes in ImageNet-1K and present the results. \\n\\n| Model | DWT Level | Original Accuracy | Low-pass filtered Image Accuracy | Learned Accuracy |\\n| ---------------------- | --------------- | ------------------------ | ---------------------------------------------- | ------------------------- |\\n| ViT Base (Unconstrained) | Level 1 | 0.792 | 0.494 | 0.771 |\\n\\nAlthough the learned composition pays more attention to the low-pass filtered component, the other components are also important. This clearly signifies that a composition of all these components does indeed approximate the original representation much better than the low-pass component alone.\\n\\n> **2)** Compositionality is typically more valuable when components are semantic rather than appearance-based. It is doubtful that wavelets would exhibit compositional properties in the final layers of a model, where higher-level concepts are typically captured; instead, this is more likely to occur in lower-level layers. Furthermore, the idea that wavelets are a good basis for compositional representations is not really explored, and no other decomposition methods are considered or compared.\\n\\nWe understand that probing compositionality is done by identifying simpler concepts that can be composed into complex ideas. 
In the\\nNLP setting, it is easy to find this analogy, which is that words can form simpler concepts composed into complex sentences. However,\\nit is difficult to break down in the context of vision when the complex ideas are images. The primitive set required for the framework is very\\nchallenging to identify since the image space is continuous. We chose the DWT to extract the primitives because of the sound mathematical proof that wavelet decompositions have perfect reconstruction. We check for a homomorphism between the input and representation space on the foundation that the DWT has perfect reconstruction in the input space. To the best of our knowledge, we are the first to probe for compositionality with the framework presented in our paper using wavelets. Other decomposition methods, such as Fourier transform or Discrete Cosine Transform (DCT), also exhibit lossless decomposition and reconstruction, but they do not preserve spatial information, i.e., the frequency spectrum of an image is not visually meaningful. Hence, we chose DWT as the method for decomposition for our framework.\"}", "{\"metareview\": [\"(a) Summary\", \"This paper investigates how to test compositionality in ViT encoder. It presents a framework with compositionality setting initially proposed by Andreas 2019, and employs the Discrete Wavelet Transform (DWT) for analyzing the compositional structure of the vision input. 
The experiments indicate that the primitives with a one-level DWT decomposition as input lead to a certain degree of compositionality in the last layer of the ViT encoder.\", \"(b) Strengths\", \"It is well motivated: the paper investigates an important problem in learning compositional representations.\", \"It is based on well-established work by Andreas.\", \"Wavelets are a common & natural way to encode images into primitives.\", \"(c) Weaknesses\", \"The experimental setup is weak: these experiments need a much larger sample size over images/models in order to make such a broad claim on compositionality.\", \"The experimental results do not support the claims: the presented results do not convincingly demonstrate compositionality.\", \"Wavelets are suited to low-level representations, not to compositionality in higher-level layers.\", \"It does not compare with other decomposition methods.\", \"The clarity of the paper could be improved.\", \"(d) Decision\", \"The investigation in this paper for understanding compositionality in ViT encoder layers is well-motivated. However, the experimental setting and results are not strong enough to support the claims on the compositionality of encoders. I think the paper is not ready for publication in its current form due to the weaknesses listed in the summary.\", \"Please keep the reviewer comments in mind when preparing a future version of the manuscript.\"], \"additional_comments_on_reviewer_discussion\": \"The reviewers agreed that the paper's proposal for employing DWT as primitives for studying compositionality in ViT is novel and interesting. They also shared the same concerns on the weak experimental setup, the unsupported claims from the experimental results, and the clarity of the paper. 
Although the authors' rebuttal and updated manuscript addressed some concerns, the reviewers still think the paper needs another round of revision and review to make it stronger.\"}", "{\"title\": \"Official comment to Reviewer cTeu : Part Two\", \"comment\": \"### **Questions**\\n\\n> **1)** Could the authors clarify how compositionality with discrete wavelet transforms leads to better explainability? Some examples or related work would help. I can see some intuitive argumentation but I think it has not been articulated in the paper.\\n\\nGiven the challenging nature of composing 'semantic' primitives for object recognition, the paper presents DWT as a potential framework for analyzing compositionality in ViTs.\\n* The analysis presented in the paper explains the relative importance given to the individual frequency bands by the ViT models. This may lead to understanding the object recognition task from the frequency domain perspective.\\n* The presented compositionality framework enables the incorporation of any specific domain knowledge with respect to the primitives (in this case, the DWT sub-bands) towards the downstream task. In other words, one can train models that can give specific weights to the sub-bands while learning to solve the task if needed.\\n* In the future, the community may bridge the frequency-to-semantics gap with the availability of sophisticated tools so that the presented framework can directly compose to the semantics\\n\\n\\n> **2)** The observation that convex combination of sub-bands in image space alters pixel values significantly is interesting. Why does it not degrade accuracy? Is it because of normalization?\\n\\nThis is definitely interesting. Probing the combinations themselves revealed that the $l2$ norm between the convex combination and the original image is lower than the other two combinations. 
We hesitate to say it is because of normalization and this would require further analysis to answer.\"}", "{\"summary\": \"This paper presents experiments exploring the use of the Discrete Wavelet Transform (DWT) for evaluating compositionality within ViTs.\\n\\nThe work is motivated by prior work (Andreas 2019) which proposed a general framework (Tree Reconstruction Error, TRE) for measuring compositionality as a homomorphism between the input space and a representation space. \\n\\nBecause images do not have an obvious set of primitives in the same way that tokenized word vocabularies do in language spaces, this paper proposes to use DWT components as the primitive representation. \\n\\nIn the first set of experiments, the paper argues that the naive addition of the DWT components will not necessarily yield a compositional representation, and instead proposes to learn a composition function, represented as a weighted sum of the components. \\n\\nExperiments applying DWT to the final layer of a ViT are also presented, showing that approximating the ViT representation as the weighted sum (under the learned composition function) of its inverse DWT components yields comparable performance to the model\\u2019s output under the original representation. \\n\\nI have concerns about the soundness of the experimental setup (see Weaknesses). Given the relaxation introduced in L214-215, it\\u2019s not clear to me that the provided experimental results can actually soundly support claims about compositionality since the homomorphism is no longer defined in relation to the input space. \\n\\n**Original Recommendation**:\\nI think the paper presents a compelling idea, but the paper could be greatly strengthened by improvements to clarity and substantiation of claims. 
I would value hearing the authors\\u2019 response to my concerns before finalizing my score, but in its current form I am recommending that the paper be rejected.\\n\\n**Revised Recommendation (11/29)**: \\nThe authors have largely addressed my concerns, and so I have revised my overall score to 5. To conclude my score, I agree with some of the concerns that iA5z raised related to how compositionality is generally understood in the community through the lens of semantics rather than appearance, which I don't believe were completely sufficiently addressed. I think the impact of the paper could be stronger with some following changes which I believe would likely constitute a new submission rather than minor revisions: \\n \\n* (1) The framing of the paper could modified to better motivate why studying the type of compositionality presented is useful. Something that comes to mind is that text-based natural language is *already* an abstracted signal (and which is tied to semantics), perhaps it would make sense to contextualize this paper within compositionality literature for audio signals rather than text. \\n* (2) The introduction of further experiments showing how this type of compositionality can be useful in downstream tasks would greatly help to further motivate this approach. For semantic based tasks, compositional representations tend to yield improvements in generalization capabilities of models -- can something analogous to this be shown for DWT based compositional representations? 
\\n\\nIn other words, I think the motivation/contribution of the paper would be strongest if it could be shown either that the DWT representations are tied to semantic compositionality, or if not, that this kind of compositionality is still explicitly useful downstream.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper presents a compelling idea \\u2014 that DWT components could serve as primitives through which to study compositionality in ViTs.\"], \"weaknesses\": [\"Weaknesses:\", \"**Clarity**: The paper can be unclear at times over specifics of what was done or how; I think that further elaboration from the authors throughout could help strengthen the paper.\", \"Figure 1: What does it mean here for the original image\\u2019s representation to be compared to the composed image representation? Does it mean that the maps we see in the figure are the result of the comparison, or are these just the composed representations and the comparison was done outside of the figure?\", \"Section 3.1 ViT: What was the ViT model used for this experiment? What was it pre-trained on? The section does not explicitly specify these experimental conditions.\", \"L214-L215: What is the purpose of f_eta? I understand that this is defined in Andreas 2019, but its definition is missing from this paper. I think explicitly defining it and its role would improve the clarity of this section.\", \"Figure 3: What do the C\\u2019s and a\\u2019s represent?\", \"Figure 4: What are LL, LH, HL, and HH?\", \"In general, I think the clarity of the paper could be improved by an additional grammar/phrasing pass.\", \"**Soundness**: I have a few concerns regarding the soundness of the experimental setup, and I am unsure if the paper\\u2019s claims are supported. 
I would value hearing the authors' response to the following points/questions:\", \"Section 3.1: I believe this section aims to show that simply adding the DWT wavelets with equal weights does not yield a correct composition. However, this claim is supported by showing results over a single image from a single ViT model. I believe this experiment would need a much larger sample size over images/models in order to make such a broad claim.\", \"L216-217: The original compositionality formulation from Andreas 2019 is modified to shift the application of the encoder function from the input space to an arbitrary intermediate representation space within the model. If I\\u2019m understanding correctly, doesn\\u2019t this violate the core premise of the problem statement? The purpose of compositionality tests is to find homomorphisms between the **input space** and the representation space; this is important because the input space comes from the data generating function. I\\u2019m not sure if it makes sense for this test to be defined for the transformation from one hidden layer to another. Minor, but I would suggest modifying the sentence \\u201cinstead of drawing exact parallels, we tweak this statement to suit our analysis\\u201d, since the wording gives the impression that the formulation was modified to suit the narrative of the paper, rather than the needs of the original empirical question being asked.\", \"**Conclusions/Takeaways**: I believe that the provided experimental results show that the representations of the final ViT layer can be represented as a linear combination of its inverse DWT components. 
However, given my concerns stated above with soundness, it\\u2019s not clear to me what this entails about compositionality.\"], \"questions\": \"Please see questions under Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates whether representations of pretrained ViT models are compositional. The authors use DWT as the primitives (that are composed), and learn the composition (g) rather than assuming additive composition. Experiments are conducted on the final layer's output representation, and a subset of ImageNet-1k is used.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper investigates a novel approach to a significant problem. Importantly, the authors show that learning a composition is significantly better than composition by addition.\\n2. The authors investigate the learning of composition to some extent - trying variants such as conic, convex and unconstrained.\", \"weaknesses\": \"1a. I believe the author's experimental setup is not using sufficient data. For reference, papers such as ViT-NeT which explores interpretability of ViT uses three datasets each of which is 10-20k images total. I'd suggest authors scale up their datasets - if we choose to use ImageNet-1k, a test set of at least 10 images per class would be more convincing. Currently, with a 15% test fraction, each class gets 1-2 images. I hesitate to make conclusions based on such a small test set per class.\\n\\n1b. No significant analysis is done on the test dataset, nor of the errors that composition leads to. It would be beneficial to provide some examples where composition leads to large error and some examples where the error is minimal. For example, Haar DWT is great when we have sudden transitions in signal - like sharp images. Perhaps blurrier examples or classes would cause larger error?\\n\\n2. 
Authors do not inspect intermediate layers of ViT, which is a glaring hole to me - while it is true that most downstream tasks will use the penultimate layer of the ViT, I am still left wondering if all intermediate representations are composable, if it's just some of them, is compositionality lost at some levels of the stack? I would recommend repeating the same analyses using intermediate layers\\u2019 representations.\", \"questions\": \"1. Could the authors clarify how compositionality with discrete wavelet transforms leads to better explainability? Some examples or related work would help. I can see some intuitive argumentation but I think it has not been articulated in the paper.\\n\\n2. The observation that convex combination of sub-bands in image space alters pixel values significantly is interesting. Why does it not degrade accuracy? Is it because of normalization?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for addressing my concerns and for clearing up confusion. I have revised my overall score to 5 (see update in my review for more details).\"}", "{\"title\": \"Major Changes made in the revised draft\", \"comment\": [\"We thank all the reviewers for their constructive feedback. Based on their comments, we have made some changes in the revised draft. They are listed as follows:\", \"The experiments have been repeated with a significantly larger dataset (50k images from ImageNet-1k dataset with a train:val:test split of 60:20:20, previous results were for 10k images) for reliable results. Due to the limited time, we could only repeat the Level 1 experiments and we aim to complete the Level 2 results soon. 
**Only the level 1 results have been updated** in the revised draft.\", \"The analogy regarding the compositional approximation has been improved (L228-L234)\", \"LL, LH, HL, HH notations have been defined (L153 - L154)\", \"A new experiment has been added (Section 4.4) to analyze the errors of the composition approximation.\"]}", "{\"title\": \"Official Response to Reviewer iA5z : Part two\", \"comment\": \"### **Questions**\\n\\n> **1)** Could the authors clarify why they believe their results demonstrate compositionality, why the DWT is considered a suitable feature decomposition, and why compositional behavior would be expected in the last layer?\\n\\nThe framework presented in the paper divides the input image into its DWT primitives. The original image and its primitives are passed\\nthrough the model separately. The individual primitives are linearly combined using the weights of the learned compositional model, and the result is\\nthen passed through the final classification layer. The results demonstrate that composed representations perform similarly to the original image representation. We chose the DWT to extract the primitives because of the sound mathematical guarantee that wavelet decompositions admit perfect reconstruction. On the basis that a perfect composition exists in the input space, we investigate if a homomorphism exists between the input and representation space learned by the ViTs. 
While our analysis demonstrates the notions of compositionality (as per the framework introduced by Andreas 2019) with respect to the DWT primitives, more needs to be understood\\nabout why it is manifested.\\n\\n> **2)** l.299: \\\"The target is the final classification layer output of the original image\\\" does this mean using an L2 loss?\\n\\nWe use the Cross Entropy loss between the classification layer output of the original image\\u2019s representation (i.e., predicted soft labels)\\nand the classification layer output of the composed representation.\\n\\n> **3)** l.445: Why is the analysis conducted across different settings?\\n\\nWe regret this confusion. The analysis was conducted using the same test set. At the time of submission, we could only run a part of the\\ntest set for some of the results. We have now revised the draft such that the updated results follow the same setting.\"}", "{\"title\": \"Official comment to Reviewer cTeu: Part One\", \"comment\": \"We thank the reviewer for their assessment and we hope to answer their questions point by point.\\n\\n### **Weaknesses**\\n\\n> **1.a)** I believe the author's experimental setup is not using sufficient data. For reference, papers such as ViT-NeT which explores interpretability of ViT uses three datasets each of which is 10-20k images total. I'd suggest authors scale up their datasets - if we choose to use ImageNet-1k, a test set of at least 10 images per class would be more convincing. Currently, with a 15% test fraction, each class gets 1-2 images. I hesitate to make conclusions based on such a small test set per class.\\n\\nWe thank the reviewer for this comment. Please note that the parameters we are training for the compositional model are merely a handful. Also, we believe ImageNet is one of the most complex datasets that offers a lot of variety among the samples. In the revision, we scaled up the dataset to 50000 samples (1000 classes, 50 images/class) with a 60:20:20 train:val:test split. 
So the test set contains 10000 samples (each class gets 10 images), and we repeated some of the experiments. Due to time constraints, we could only carry out the experiments for level 1 decomposition. We plan to carry out the rest of the level 2 experiments with the scaled-up dataset. The results for Level 1 have been updated in the revised draft. \\n\\n**Comparing the accuracy of different composition models $g^{*}$ on the test set:**\\n\\n| Model | Original | Average | Unconstrained | Conic | Convex |\\n| ------------------- | ------------- | ------------ | --------------------- | ---------- | ------------ |\\n| ViT-B (haar, level 1) | 0.792 | 0.13 | 0.775 | 0.775 | 0.771 |\\n| ViT-L (haar, level 1) | 0.809 | 0.18 | 0.797 | 0.795 | 0.795 |\\n| ViT-B (db4, level 1) | 0.792 | 0.13 | 0.777 | 0.775 | 0.772 |\\n\\n**Relative accuracy of the learned composition models. Note that the target for the composed representation is the output predicted by the original image classifier (not the ground truth label).**\\n\\n| Model | Unconstrained | Conic | Convex |\\n| ------------------- | --------------------- | ---------- | ------------ |\\n| ViT-B (haar, level 1) | 0.875 | 0.873 | 0.862 |\\n| ViT-B (db4, level 1) | 0.90 | 0.904 | 0.898 |\\n| ViT-B (haar, level 1) | 0.918 | 0.916 | 0.911 |\\n\\n> **1.b)** No significant analysis is done on the test dataset, nor of the errors that composition leads to. It would be beneficial to provide some examples where composition leads to large error and some examples where the error is minimal. For example, Haar DWT is great when we have sudden transitions in signal - like sharp images. Perhaps blurrier examples or classes would cause larger error?\\n\\nWe appreciate the reviewer's point. We have conducted a preliminary experiment (Section 4.4 in the revised draft) to identify the samples on which the learned model fails. However, a deeper analysis to test the robustness of compositionality is currently out of the scope of this paper. 
We would like to emphasize that the primary goal of this work is to present a framework to check if notions of compositionality are present in the representation space and the level 1 decomposition results provide strong support.\\n\\n> **2)** Authors do not inspect intermediate layers of ViT, which is a glaring hole to me - while it is true that most downstream tasks will use the penultimate layer of the ViT, I am still left wondering if all intermediate representations are composable, if its just some of them, is compositionality lost at some levels of the stack? I would recommend repeating the same analyses using intermediate layers\\u2019 representations. \\n\\nThe intermediate layer representation of each image is of size 197x768. The cls token is of size 1x768. Since the input to the classification layer is just the cls token, performing this analysis on the final encoder layer is easier. There is no trivial way to classify the intermediate layer's output. To perform the analysis on intermediate layers would require storing the entire 197x768 representation for each image and its subsequent wavelet components in order to train the model. Due to memory and computational limitations we could not perform the analysis for the intermediate representations.\"}", "{\"summary\": \"The paper examines a type of compositionality in the representations of vision transformers (ViT). Building on the compositionality concept proposed by Andreas (2019), the authors apply it to Discrete Wavelet Transform (DWT) representations. 
Empirical results suggest that the last layer of a transformer displays a certain degree of compositionality for a one-level DWT of the input.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is generally well-written.\", \"It builds on the well-established framework by Andreas (2019) for compositional representations.\", \"Wavelets are a common and natural basis for image representations in signal processing.\"], \"weaknesses\": [\"My main concern is that the presented results do not convincingly demonstrate compositionality. Rather than defining true combinations of wavelet primitive representations, it appears that the learned weights mainly select the low-pass filtered image (Table 3). Indeed, it is not particularly surprising that the images in Figure 5 perform similarly to the original images.\", \"Compositionality is typically more valuable when components are semantic rather than appearance-based. It is doubtful that wavelets would exhibit compositional properties in the final layers of a model, where higher-level concepts are typically captured; instead, this is more likely to occur in lower-level layers. Furthermore, the idea that wavelets are a good basis for compositional representations is not really explored, and no other decomposition methods are considered or compared.\"], \"minor\": [\"l.237 I believe $E_L$ should be $E_l$.\"], \"questions\": [\"Could the authors clarify why they believe their results demonstrate compositionality, why the DWT is considered a suitable feature decomposition, and why compositional behavior would be expected in the last layer?\", \"l.299: \\\"The target is the final classification layer output of the original image\\\" does this mean using an L2 loss?\", \"l.445: Why is the analysis conducted across different settings?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
9dBBq2ehY5
A Phase Transition Induces Catastrophic Overfitting in Adversarial Training
[ "Elias Abad Rocamora", "Noam Itzhak Levi", "Volkan Cevher" ]
We derive the implicit bias of Projected Gradient Descent (PGD) Adversarial Training (AT). We show that a phase transition in the loss structure as a function of the adversarial budget $\epsilon$ manifests as Catastrophic Overfitting (CO). Below a critical threshold $\epsilon_c$, single-step methods efficiently provide an increase in robustness, while above this critical point, additional PGD steps and/or regularization are needed. We show that high curvature solutions arise in the implicit bias of PGD AT. We provide analytical and empirical evidence for our arguments by appealing to a simple model with one-dimensional inputs and a single trainable parameter, where the CO phenomenon can be replicated. In this model, we show that such high curvature solutions exist for arbitrarily small $\epsilon$. Additionally, we can compute the critical value $\epsilon_c$ in single-step AT for bounded parameter norms. We believe our work provides a deeper understanding of CO that aligns with the intuition the community has built around it.
[ "Adversarial Training", "FGSM", "Catastrophic Overfitting" ]
Reject
https://openreview.net/pdf?id=9dBBq2ehY5
https://openreview.net/forum?id=9dBBq2ehY5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tqmeipZKB5", "t6I7tgfxUx", "s8zToj11G8", "pREKPXKHak", "oSxw7Kih6j", "lsNz85bg3Q", "lnGtC9wk8Q", "iQmPx9Yr4H", "ekQOO7i6D8", "dX2W8Bdq1M", "aTfOjJ3jFU", "Zmfg6fCs8i", "YJKXCQiGra", "WQyxH1InVi", "V0iNHPdPSL", "SrBySm8Acy", "SWCVN2faic", "NiCPfYgdf9", "HtU0mCgOYt", "GXaltW6y4D", "EdLZRKiKSA", "DbUG0lYhqj", "6wcSf8ngDH", "6R5pAQu26g", "0A4yV6x75L" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1737524171333, 1732709973736, 1732714704444, 1732681256456, 1732470048121, 1730343854138, 1733077596074, 1732363737676, 1732545178822, 1732749807144, 1732627045100, 1732363442234, 1732671966398, 1729080589628, 1732608097617, 1732363948742, 1732749700161, 1732363675839, 1733077240325, 1732708002659, 1732627567903, 1734708351896, 1729492925274, 1730531167872, 1733133275735 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_bJoQ" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_UJp8" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_UJp8" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_UJp8" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_bJoQ" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12180/Reviewer_bJoQ" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_bJoQ" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_4U7J" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_UJp8" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ], [ "ICLR.cc/2025/Conference/Submission12180/Area_Chair_1zJV" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_4U7J" ], [ "ICLR.cc/2025/Conference/Submission12180/Reviewer_vMYY" ], [ "ICLR.cc/2025/Conference/Submission12180/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the reply. I have no further questions.\\n\\nAfter several rounds of discussions, I think the paper has improved a lot, especially with a clear description of the concepts involved and a formal description of the results. Regarding the simplicity of the toy example and the limited contribution of the rescaling results, I have no strong willingness to accept this paper, so I increase my score to \\\"weak accept\\\".\\n\\nThank the authors again for the discussions.\"}", "{\"title\": \"Thanks for your increase\", \"comment\": \"Dear reviewer bJoQ,\\n\\nThanks a lot for helping us improve the quality of our paper and for being responsive during this rebuttal period. We remain available in case further questions appear.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Remaining concerns and follow-up questions\", \"comment\": \"Thank you for making effort to address my concerns. I have reviewed the revised manuscript and appreciate the adjustments made to improve its clarity. 
However, several key issues remain unresolved in the current version:\\n\\n**Concerns Regarding Section 3:**\\n\\nIn line 211, the authors raised three questions:\\n\\n(1) *Can we have PGD AT solutions where curvature is high?*\\n\\n(2) *Can we have solutions where CO appears for any* $S$ *and* $\\\\epsilon$ *?*\\n\\n(3) *Can we understand why such solutions do not appear in practice?*\\n\\n\\n\\nFirstly, the motivation for raising the three questions is not entirely clear, especially the connection of questions (1) and (3) with the central theme of the paper\\u2014CO in fast AT. Due to the unclear motivation and relevance, Section 3 interrupts the logical flow of the paper.\\n\\nSecondly, the analysis for question (1) appears highly heuristic, lacking theoretical support. The analysis for question (2) relies on a trivial construction in which the training data and the perturbation radius $\\\\epsilon$ are scaled at the same time such that the effect of reducing $\\\\epsilon$ is cancelled by the scaling of the data $x$. For question (3), the discussion provided in Lines 238\\u2013246 is vague, leaving the main takeaway unclear.\\n\\nOverall, Section 3 occupies a significant portion of the paper, yet its key messages are not effectively conveyed. Furthermore, the discussion in this section lacks clear motivation and does not establish strong connections to the CO problem, which the paper aims to address. Could the authors summarize the main purpose and key takeaways of Section 3? Additionally, Section 3 feels isolated from subsequent sections. How does it connect to the rest of the paper, particularly the follow-up sections?\\n\\n\\n\\n**Concerns about dataset rescaling:**\\n\\nThe motivation for inducing CO through dataset rescaling is also not well explained. 
As a result, the experimental results presented in Section 5.5 fail to provide meaningful insights.\\n\\n\\n\\n**Follow-up questions for the revised manuscript:**\\n\\nIn Theorem 4.1, the construction of $\\\\theta_k$ appears somewhat artificial and seems unrelated to the context of fast AT. I was expecting $\\\\theta_k$ to represent parameters learned through fast AT, but this does not seem to be the case. Could the authors provide additional clarification on the motivation and relevance of this construction?\\n\\nTheorem 4.1 provides an upper bound, but it is unclear how this bound \\\"present sufficient conditions to observe CO\\\" as stated in Remark 4.4. Could the authors elaborate on the connection between this bound and CO?\\n\\nIn Line 461, the authors claim that the empirical observations in Figure 3 \\\"align with our results in the toy model despite the simplistic assumptions of Theorem 4.1 and Corollary 4.3.\\\" However, it is not clear how Theorem 4.1 and Corollary 4.3 relate to the observation that \\\"an abrupt decay of PGD-20 accuracy to zero occurs with a small change in $\\\\epsilon$.\\\" Could the authors clarify this relationship?\\n\\nAt line 463, what do you mean by \\\"Our analysis in the toy model covers the existence of a CO solution with lower loss than the robust solution for larger $\\\\epsilon$\\\" ? \\n\\nAt line 465, what do you mean by \\\" longer schedules might converge to the CO solutions shorter schedules did not.\\\"? Figure 10 in the paper indicates that on MNIST and SVHN, FGSM with shorter training epochs encountered CO at smaller $\\\\epsilon$ values compared to FGSM with longer training epochs. This contradicts the statement at Line 465.\"}", "{\"title\": \"Rebuttal to reviewer 4U7J\", \"comment\": \"Dear Reviewer 4U7J,\\n\\nThanks for your careful reading of our manuscript. 
We address your questions and weaknesses in the following:\\n\\n- **P1: Authors assume that $\\\\mathcal{L}$ is at least twice continuously differentiable. This assumption is missing.**\\n\\nThank you for pointing this out. We have included this in the statement of Proposition 3.1.\\n\\n- **P2: [1,2] previously studied the implicit bias of AT, how does your analysis connect with theirs?**\\n\\nThank you for bringing these works to our attention; they offer an interesting point of comparison with our results. In essence, these works studied the implicit bias of the solutions that a neural network trained adversarially arrives at, extending the SVM results of Soudry (2018) and others from standard gradient descent training to the adversarial setup, finding the equivalent SVM solution in these cases. These works focus on a specific question: given a constraint (separable data, homogeneous networks), which set of equations must be satisfied by the network parameters at the infinite time limit under gradient flow dynamics, concluding that the adversarial margin must be maximized. We ask a much simpler question in a very general setting: what effective objective is being optimized by a neural network given a fixed adversarial perturbation budget. While these questions are certainly connected, our focus was geared toward implicit regularization and a loss landscape approach, rather than the max margin formulation. We believe it is possible to rephrase our results in the SVM language. We have included this explanation as well as the references in the revised manuscript.\\n\\n\\n- **P3: Your theoretical insights are extracted from the toy model, the connection with more practical scenarios is unclear.**\\n\\nWhile it is true that our insights are extracted from the toy model, the implicit bias/regularization analysis is valid in general, with the toy model serving as a fully tractable example in which the phenomenology of CO as a phase transition can be seen. 
The empirical results given in Fig.3 and Fig.4 show that our results qualitatively extend to real world cases, while the quantitative predictions must depend on the task/architecture and data; we further discuss this point in Section 5.3.\\n\\n\\n- **P4: How does Proposition 3.1 characterize the properties of the converged solution?**\\n\\nProposition 3.1 characterizes the effective loss being optimized, meaning that the solution must satisfy the constraints imposed by the new terms, proportional to powers of $\\\\epsilon$. We show that both in the toy model and in real world cases, the CO solutions are the ones which satisfy these constraints by setting the Hessian to large negative values, which results from a transition from the previous minimum of the network (on the original loss) to a new, overfitting minimum on the effective loss.\\n\\n- **P5: When and how will the term proportional to $\\\\epsilon^{2}$ become more significant than the term proportional to $\\\\epsilon$ as discussed in line 192-193?**\\n\\nWe can approximate this point by assuming the first and second terms are bounded by $R_1, R_2$, respectively, and so we can simply equate their absolute values (or traces) as $\\\\frac{\\\\epsilon}{S} R_1 = \\\\frac{\\\\epsilon^2}{2S^2} R_2$ to obtain a threshold value at $\\\\tilde{\\\\epsilon} = 2 R_1 S/R_2$; at this point the second term is equivalent in contribution to the first, and should be accounted for in the effective loss.\"}", "{\"summary\": \"This work investigates the phenomenon of catastrophic overfitting arising in fast adversarial training (AT), where the multi-step PGD attack is replaced by a single-step PGD (also known as FGSM) to reduce AT\\u2019s training time. To explore the causes of catastrophic overfitting, the paper constructs a toy model in which the adversarial loss has a closed-form solution when adversarial examples are generated by FGSM. 
Through analysis of this toy model, the paper identifies a phase transition with respect to the perturbation radius $\\\\epsilon$: when $\\\\epsilon$ exceeds a certain threshold, the local minimum of the FGSM-induced adversarial loss (referred to as the \\\"effective loss\\\" in this work) exhibits higher curvature, and this local minimum has a clear mismatch with the minima of the adversarial loss induced by multi-step PGD. Consequently, a model that minimizes the FGSM-induced adversarial loss tends to have a high loss under multi-step PGD attacks, which the paper suggests as an explanation for catastrophic overfitting.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper constructs a simple one-dimensional toy model to examine catastrophic overfitting. This model enables clear visualization of the loss landscape, illustrating the effects of varying the perturbation radius on this landscape. By plotting and comparing the landscapes of two types of adversarial losses, the model provides a clear demonstration of the factors contributing to catastrophic overfitting.\", \"weaknesses\": \"The paper constructs a toy model to demonstrate a scenario where catastrophic overfitting provably occurs. This approach may have limited practical utility for addressing catastrophic overfitting in fast AT. A more valuable direction would be identifying the conditions under which catastrophic overfitting does not occur, allowing for improvements to fast AT by regularizing the model to meet these conditions.\\n\\nAdditionally, the toy model settings differ significantly from those in deep learning models, suggesting that the theoretical insights derived from the toy model may have limited applicability to real-world deep learning scenarios.\\n\\n\\n\\n**Potential technical issues**\\n\\nThe derivation of Proposition 3.1 appears to contain inaccuracies. 
At line 812 in the appendix, the entire derivation is based on the recurrence relation $\\\\delta_{s} = \\\\delta_{s-1} + \\\\frac{\\\\epsilon}{S}{\\\\rm sign}(g_{\\\\theta}(x+\\\\delta_{s-1}))$ where $g_{\\\\theta}(x)=\\\\nabla_{x}{\\\\cal L}(f_{\\\\theta}(x), y)$ as defined in Proposition 3.1. However, there are issues with this recursion:\\n\\n1. It omits the projection operation used in AT (see Algorithm 1, step 7). \\n2. When updating $\\\\delta_{s-1}$ in PGD-AT, the gradient should be taken w.r.t. $\\\\delta$ rather than w.r.t. $x$. Specifically, the update should use the gradient $\\\\nabla_{\\\\delta}{\\\\cal L}(f_{\\\\theta}(x+\\\\delta), y)$ rather than $g_{\\\\theta}(x+\\\\delta_{s-1})$. The same error also appears in Algorithm 1, step 6.\\n\\n\\n\\nRegarding Corollary 3.3, it seems to be derived from Proposition 3.2, but the derivation is unclear. Proposition 3.2 simply establishes that $\\\\max\\\\limits_{\\\\|\\\\delta\\\\|\\\\le \\\\epsilon}{\\\\cal L}(f_{\\\\theta}(W(\\\\alpha x+\\\\delta)), y)=\\\\max\\\\limits_{\\\\|\\\\delta\\\\|\\\\le \\\\hat{\\\\epsilon}}{\\\\cal L}(f_{\\\\theta}(\\\\hat{W}(x+\\\\delta)), y)$ with $\\\\hat{W}= \\\\alpha W$ and $\\\\hat{\\\\epsilon} = \\\\epsilon/\\\\alpha$ (based on the derivations at line 893 in the Appendix). How this result leads to the claims in Corollary 3.3 is not immediately clear.\\n\\n\\n\\n**Writing**\\n\\nThe writing in the paper is not particularly reader-friendly. For instance, the insights provided by Theorem 4.1 and Corollaries 4.2 and 4.3 are not clearly explained. Additionally, the connection between the results from the toy model analysis and their implications for understanding deep learning models is not effectively conveyed.\", \"questions\": [\"In the loss landscape shown in the top panel of Figure 1, why is catastrophic overfitting attributed to the increased curvature of the local minima?\", \"In Theorem 4.1, what does $\\\\theta^{*} _ {k}$ represent? 
Why do we choose $\\theta_{k}$ as $b_{k}$?\", \"In Corollary 4.2, why are the classification results for the points $x_{i}\\pm \\epsilon_{k/2S}$ considered? How does this relate to catastrophic overfitting?\", \"What are the main takeaway messages of this paper? Is catastrophic overfitting attributed to the loss landscape having local minima with high curvature? If true, could you provide empirical evidence on deep learning models to validate this conclusion?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to your comments (Part 2)\", \"comment\": \">- For a connection between Theorem 4.1 and CO please check Corollary 4.2. In Theorem 4.1, the Upper bound in the optimality gap vanishes to $0$ when increasing $a$. This shows that the optimal loss is achieved. In Corollary 4.2 we show that this results in a correct classification of the PGD points and misclassification of the points $x_i \\pm \\frac{\\epsilon_k}{2\\cdot S}$. Given our characterization of CO in Definition 2.1, this is exactly $(0,0)$-CO.\\n\\n\\n\\nThe upper bound of Theorem 4.1 is of the form $\\pi \\frac{b _k}{a}$. Taking $b_k = \\frac{1+ 4k}{1- \\frac{1}{a}}$, the upper bound turns into $\\pi \\frac{1+ 4k}{a- 1}$. Corollary 4.2 states that \\\"by increasing $a$ and $k$ we can take $\\epsilon_k$ arbitrary close to zero with arbitrary accurate solution $\\theta_k$\\\": increasing $a$ and $k$ at the same time, although it drives $b_k$ to infinity and thus drives $\\epsilon_k$ to zero, does not make the upper bound $\\pi \\frac{1+ 4k}{a- 1}$ go to zero. Therefore, $\\theta_k$ is not \\\"arbitrary accurate\\\" as stated in Corollary 4.2. The statement of Corollary 4.2 is therefore misleading. \\n\\nAnother primary concern is that the construction of $\\theta_k$ appears to be artificial and far from the solutions obtained by AT. 
The claim that \\\"CO exists at arbitrarily small $\\epsilon$\\\" based on the artificial construction of $\\theta_k$ lacks practical relevance and does not provide meaningful insights into the occurrence of CO in practical scenarios.\\n\\n\\n\\n>- In Figure 3, the critical value $\\epsilon_c$ is larger when increasing $S$, this is exactly the theoretical insight from Corollary 4.3 in the toy model and the answer to Question (iii).\\n\\n\\n\\nI think the observation that \\\"an abrupt decay of PGD-20 accuracy to zero occurs with a small change in $\\epsilon$\\\" is interesting, as it highlights the existence of a critical value $\\epsilon_c$ that triggers the onset of CO.\\n\\nHowever, as the authors have clarified, the theoretical analysis focuses solely on the relationship between $\\epsilon_c$ and $S$ without explicitly addressing the existence of $\\epsilon_c$ itself. This disconnect causes the theoretical analysis to diverge from the key empirical observations presented in the paper.\\n\\n\\n\\nI appreciate that the authors made efforts to answer my questions. However, multiple concerns remain unresolved, and the theoretical contributions are limited. Therefore, I believe the paper is not yet ready for publication.\"}", "{\"title\": \"Rebuttal (2/2)\", \"comment\": \"- **P9: In Corollary 4.2, why are the classification results for the points $x_i \\pm \\frac{\\epsilon_k}{2S}$ considered? How does this relate to catastrophic overfitting?**\\n\\nNote that the points $x_i + \\delta_{S}^{i}$ are well classified, while the points $x_i \\pm \\frac{\\epsilon_k}{2S}$ are not. 
This is clearly CO, as the PGD-attacked points $x_i + \\delta_{S}^{i}$ are well classified, but some other points inside the $\\ell_{\\infty}$ ball are not.\\n\\nThis phenomenon is better visualized in Figure 2(a), where the PGD-3 attacks ($x_i + \\delta_3^{i}$) are at the minimum loss value $\\mathcal{L}^{\\star}$, but the points in between the PGD-3 trajectory $x_i \\pm \\frac{\\epsilon_k}{6}$ have a high loss and are not well classified.\\n\\n- **P10: What are the main takeaway messages of this paper? Is catastrophic overfitting attributed to the loss landscape having local minima with high curvature? If true, could you provide empirical evidence on deep learning models to validate this conclusion?**\\n\\nIn order to confirm our intuition from Proposition 3.1, we have trained PreActResNet18-swish on SVHN with $S=2$ and $\\epsilon \\in \\{4,12\\}/255$. For every epoch, we measure the implicit bias terms in Proposition 3.1 ($\\mathcal{R}_1$ and $\\mathcal{R}_2$), the training PGD-2 accuracy and the test PGD-20 accuracy. We observe that indeed, when CO appears, curvature at the PGD trajectory notably increases. \\n\\nThanks again for your review and for helping us improve the quality of our work. Please let us know if you have any remaining concerns.\"}", "{\"comment\": \"Thank you for the reply.\\n\\n**For P5.** The definition uses $\\approx$; however, such a definition is also not clear, since $\\approx$ is ambiguous. Maybe a definition with an error bound is better, i.e., we define $(\\alpha, \\beta)$-CO if the PGD accuracy is $\\ge 1-\\alpha$ and the robust accuracy is $\\le \\beta$, where $0 \\le \\alpha, \\beta \\le 1$. Such a description is clearer than using $\\approx 1$ and $\\approx 0$.\\n\\n**For P6.** Maybe you misunderstood me. My question is: what value of $|\\mathcal{R} _2|$ means high curvature? 
If $|\\mathcal{R} _2| \\ge \\tau$ means high curvature, then what value should $\\tau$ take? So the definition of high curvature is ambiguous, which means that the definition of $\\epsilon _c$ is ambiguous.\\n\\n**For P9.** I read the added blue sentences in Proposition 2.2; however, they do not answer my question. The problem is that such an equivalence does not seem to be meaningful and cannot provide us with more insight. In practice, perturbing $(x, y)$ with $\\delta$ where $\\Vert \\delta \\Vert \\le \\epsilon_c$ is equivalent to perturbing $(\\alpha\\cdot x,y)$ with $\\delta$ where $\\Vert \\delta \\Vert \\le \\alpha \\cdot \\epsilon_c$. For example, for a picture with an RGB range of 0 to 255, we usually transfer it into $[0, 1]$, so $x$ has range $[0,1]$. Then a perturbation with budget $\\epsilon$ for $x$ and a perturbation with budget $\\alpha \\epsilon$ for $\\alpha x$ are both equivalent to a perturbation with budget $256 \\epsilon$ for the original image. Similarly, the significance of Corollary 3.3 is also not clear.\"}", "{\"title\": \"Thanks for your response (2/2)\", \"comment\": \"- **P5: Regarding short/long schedules**\\n\\nAs mentioned in **P1**, the theoretical analysis in the toy model characterizes the solutions of AT solved with Algorithm 1. Nevertheless, the convergence to such solutions is not guaranteed. [1,2] observe that when training for longer, $\\epsilon_c$ can sometimes be observed earlier. Our results in Appendix F.2 show that for CIFAR10, this is the case. Please check that this is not against line 465.\\n\\n---\\n\\nThanks again for the detailed feedback and for helping us improve the quality of our work. Please let us know if any further questions appear. 
If you are satisfied with our responses, we would appreciate an increase in the score.\\n\\n\\n**References**\\n\\n[1] Kim et al., Understanding catastrophic overfitting in single-step adversarial training, AAAI 2021.\\n\\n[2] Abad Rocamora et al., Efficient local linearity regularization to overcome catastrophic overfitting, ICLR 2024.\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Dear reviewer bJoQ,\\n\\nThanks for your response and useful suggestions. We answer your points as follows:\\n\\n- **P1: The desired accuracies in CO could be defined more formally via $\\alpha$ and $\\beta$.**\\n\\nThanks for this suggestion. We have updated the definition of CO and the proof of Corollary 4.2 to contemplate these accuracy levels. Instead of $(\\alpha,\\beta)$, we employed $(\\beta,\\eta)$, as $\\alpha$ was used for the PGD step sizes.\\n\\n- **P2: How high is a high curvature? How do you define $\\epsilon_c$?**\\n\\nTo improve clarity, we have added a definition of $\\epsilon_c$ in Definition 2.2. Please note that $\\epsilon_c$ is defined based on the $\\epsilon$ values for which $(\\beta,\\eta)$-CO appears, not on the curvature. We apologize if the definition of $\\epsilon_c$ was not clear. Since the explosion in curvature is an experimental observation and our analysis does not rely on the curvature values, we believe that defining precisely how high \\\"high\\\" curvature is would overcomplicate the analysis. \\n\\n- **P3: Regarding the significance of Proposition 3.2 and Corollary 3.3**\\n\\nThe intuition presented by the reviewer is valid: re-scaling the perturbations and images simultaneously does not change our perception of them. The result in Proposition 3.2 captures exactly this intuition, with the addition that solving AT on rescaled datasets and perturbations is exactly the same as solving AT in the original setting.\\n\\nDespite the result in Proposition 3.2 being simple and intuitive, it had not been shown before. 
Additionally, in the setting of this work, it is particularly interesting, as it allows us to show that $\\epsilon_c$ is not only an artifact arising from the AT hyperparameters, but a quantity that scales with the scale of the data. We have added a remark in lines 234-236.\\n\\nWe are thankful for the discussion. Please let us know if you have any remaining concerns.\\n\\nRegards,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer vMYY,\\n\\nThanks for your review. At the moment, your review seems incomplete; we would like to ask whether you have any other concerns. Regarding the concerns raised in your review, our answer is as follows:\\n\\n- **P1: \\\"Adversarial Training (AT) (Madry et al., 2018) and its variants ...\\\" It is better to replace this description with \\\"one of the most\\\".**\\n\\nWe have incorporated your suggestion in the manuscript and referenced [RobustBench](https://robustbench.github.io/).\\n\\n- **P2: \\\"According to this sentence, the motivation of studying underlying phenomenon resulting in CO is insufficient. Moreover, there is little logical connection before and after.\\\"**\\n\\nWe do not understand the reviewer's point. Could you please be more specific?\\n\\n- **P3: \\\"It is redundant to introduce the known PGD algorithm 1, if you don\\u2019t bring in additional important ideas. Besides, the initialization of perturbation is random, not just $0$.\\\"**\\n\\nWe believe it is necessary for our analysis to present the PGD AT algorithm. It is true that the initialization in practice is uniform, as we discuss in lines 221-222 of the original submission. We have updated line 4 of Algorithm 1 to contemplate this possibility and clarified that our analysis follows with $\\sigma=0$.\"}", "{\"comment\": \"Thank you for the reply.\\n\\nNow the definition of CO and $\\epsilon_c$ is clear, so the CO in the following theorems (e.g. 
Corollary 3.3) should be replaced by $(\\\\beta,\\\\eta)$-CO and it is better to describe the relationship between $\\\\epsilon_c$ and $\\\\epsilon_c^\\\\alpha = \\\\alpha \\\\epsilon_c$. Furthermore, $(\\\\beta,\\\\eta)$-CO involves robust accuracy and PGD accuracy, however, in the proofs, the authors only consider the AT (i.e. robust accuracy) but not Algorithm 1 (i.e. PGD accuracy). So the proofs should be more rigorous\\n.\"}", "{\"summary\": \"This paper analyzes the Catastrophic Overfitting (CO) phenomenon in Projected Gradient Descent (PGD) Adversarial Training (AT). This paper shows that a phase transition in the loss structure of as a function of the adversarial budget $\\\\epsilon$ manifests and provides analytical and empirical evidence for the arguments by appealing to a simple model with one-dimensional inputs and a single trainable parameter. Experiments are conducted to validate the findings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper investigates the CO phenomenon and carefully analyzes the properties of CO through a toy example, which is interesting and instructive.\"], \"weaknesses\": [\"Although interesting, the toy example used in this paper is too simplified, which is far away from what we meet in deep learning.\", \"Some notations are not consistent, for example, in section 2.1, the paper uses $c$ to represent the number of classes, while in section 3 $o$ is used.\", \"For **Proposition 3.1**, there are some issues that should be fixed.\", \"In line 161, the dataset is $\\\\\\\\{ (x_ i, y_ i) \\\\\\\\}_ {i=1}^n$ but not $\\\\\\\\{ (x_ i, y_ i) \\\\\\\\}_ {i=1}^i$\", \"The condition that $\\\\alpha_ s = \\\\frac{1}{S}$ is used in the proof, so it should be stated in the conditions in Proposition 3.1. 
Furthermore, such an assumption does not usually hold in practice; we usually use $\\alpha_ s > \\frac{1}{S}$.\", \"**[important]** The writing is very bad, especially about the formulation of the studied problem.\", \"Since the paper studies CO, the authors should define CO formally. CO is not defined although it appears many times in this paper, including the theorems. For example, in Corollary 3.3, the statements involve CO, which makes the theorem informal since CO is not formally defined. Additionally, CO is also used in Corollary 4.2 and Corollary 4.3.\", \"In lines 194-195, the paper writes: \\\"we define the perturbation threshold at which the effective loss is minimized at a high negative curvature solution as the critical $\\epsilon_ c$\\\". The expression is ambiguous: what do you mean by \\\"at a high negative curvature solution\\\"? How do you quantify \\\"high\\\"? So the definition of $\\epsilon_ c$ is unclear.\", \"In lines 187-189, the second question, the paper writes: \\\"Can we have solutions where CO appears for any $S$ and $\\epsilon$?\\\" What do you mean by solutions? Does it mean PGD AT solutions, as in the first question? Or the FGSM solutions? This should be clarified.\", \"I cannot find the proof for Corollary 3.3; perhaps the reason that the authors cannot provide a proof for Corollary 3.3 is that the definition of CO is not clear. I think the proof of Corollary 3.3 should be included after you show a clear definition of CO. A similar problem occurs in Corollaries 4.2 and 4.3.\", \"I cannot find the significance of Proposition 3.2. In practice, perturbing $(x, y)$ with $\\delta$ where $\\Vert \\delta \\Vert \\le \\epsilon_c$ is equivalent to perturbing $(\\alpha\\cdot x,y)$ with $\\delta$ where $\\Vert \\delta \\Vert \\le \\alpha \\cdot \\epsilon_c$. 
Moreover, in line 215: \\\"With Proposition 3.2 and Corollary 3.3, we have a mechanism to re-scale the dataset and produce smaller $\\epsilon_c$ that applies to modern deep architectures like ResNet and any training dataset\\\". Yes, we can do this, but it is meaningless to simultaneously rescale $\\epsilon_ c$ and $x$.\", \"In conclusion, the paper is somewhat interesting. However, the problem is not properly formulated and some of the results lack proof. I think this paper is not ready to be published.\"], \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response.\\n\\nThough I still think it would be better to directly discuss how the toy model considered in this paper can be connected to realistic models from a theoretical perspective rather than only relying on experimental results, I understand the difficulties of conducting theoretical analysis. Anyway, most of my concerns are addressed, and I will raise my score to 6.\"}", "{\"comment\": \"Dear Reviewer bJoQ,\\n\\nThanks for your review and for helping us improve the quality of our work. We respond to your points as follows:\\n\\n- **P1: Although interesting, the toy example used in this paper is too simplified, which is far away from what we meet in deep learning.**\\n\\nThat is true: the model that we employ to replicate CO is overly simplistic. But we believe that is precisely why it is a good model. The fact is that we can replicate the CO phenomena occurring at large scale with a single-parameter model. 
The simplicity of this model allows us to deeply analyze the problematic behaviours of AT leading to CO.\\n\\n- **P2: In section 2.1, the paper uses $c$ to represent the number of classes, while in section 3 $o$ is used.**\\n\\nThanks for highlighting this inconsistency; we have updated the notation to use $o$ everywhere.\\n\\n- **P3: Typo: the dataset should be $\\{x_i,y_i\\}^{n} _{i=1}$ not $\\{x_i,y_i\\}^{i} _{i=1}$.**\\n\\nThanks for pointing out this typo; it has been corrected.\\n\\n- **P4: $\\alpha_S=1/S$ should be stated in the conditions of Proposition 3.1. In practice $\\alpha_S > 1/S$ is employed.**\\n\\nThanks for highlighting this issue; we have included it in the statement of Proposition 3.1.\\n\\n- **P5: The authors should formally define CO.**\\n\\nThanks for the suggestion; we have included a formal definition of CO in Definition 2.1. We believe this definition contributes to an easier understanding of our paper.\\n\\n- **P6: In lines 194-195, how do you quantify \\\"high\\\" curvature, please be more clear.**\\n\\nFor curvature, we refer to $|\\mathcal{R}_{2}|$ in Proposition 3.1, i.e., the absolute value of the second-order directional derivatives.\\n\\n\\n- **P7: In lines 187-189, do you refer to FGSM solutions or AT solutions?**\\n\\nWe refer to solutions of AT in general, applied to any problem. The fact that we can show that such solutions exist in Theorem 4.1 means that for certain datasets and models, degenerate solutions with CO for any $S$ and $\\epsilon$ do exist.\\n\\n- **P8: The proofs of the Corollaries are missing, please clearly state them.**\\n\\nWe apologize for initially not including our proofs in the manuscript. Our corollaries are very straightforward derivations from Proposition 3.2 and Theorem 4.1. We have included very detailed proofs in Appendix E.\\n\\n- **P9: What is the significance of Proposition 3.2?**\\n\\nWe have rewritten the statement in Proposition 3.2 to be more direct and clear. 
The proposition simply states that training on a re-scaled dataset is equivalent to training on a standard dataset with re-scaled $\\epsilon$. One consequence is being able to produce CO for any $\\epsilon$ by re-scaling the data.\\n\\nPlease let us know if any aspect of the paper remains unclear.\"}", "{\"title\": \"Thanks for your response (1/2)\", \"comment\": \"Dear reviewer UJp8,\\n\\nThanks for your response and the thorough feedback on the revised version of the manuscript. We address your remaining concerns as follows:\\n\\n## Regarding the writing of Section 3\\n\\nWe appreciate your detailed feedback about the writing of this section. We apologize if the main ideas are not easy to extract at the moment. We believe that, given your feedback, it is a good idea to integrate Sections 3 and 4 together. We can first pose the three questions and then combine Sections 3 and 4, pointing at how each insight from our theory connects to the questions. Moreover, this way, the connection between the toy model and large-scale results can be made clearer. However, given that there are less than 13h remaining to update the manuscript, we believe the time is not enough to provide the desired quality. We can work on making Sections 3 and 4 more reader-friendly for the camera-ready version.\\n\\nOverall, leaving the readability aside, we believe the three questions we point out are covered in the paper; let us explain:\\n\\n- **Question (i) Can we have PGD AT solutions where curvature is high?**\\n\\nProposition 3.1 and the results in Section 5.4 answer this question affirmatively both theoretically and practically. Moreover, the theoretical results in Theorem 4.1 show that this is also the case in the toy model, where curvature along the PGD trajectory is proportional to $\\theta_k^{2}$. 
Given that $\\theta_k$ is proportional to $1/\\epsilon_k$, this results in higher-curvature solutions the smaller $\\epsilon_k$ is.\\n\\n- **Question (ii) Can we have solutions where CO appears for any $S$ and $\\epsilon$?**\\n\\nAs you pointed out, via re-scaling the data, Corollary 3.3 gives us a mechanism to induce CO for arbitrarily small $\\epsilon$. Nevertheless, this is not our strongest result. Check that Theorem 4.1 provides analytical solutions to the AT problem in the toy model for arbitrarily small $\\epsilon_k$ and arbitrarily large $S$. \\n\\n- **Question (iii) Can we understand why these solutions do not appear in practice?**\\n\\nIn lines 238\\u2013246, we argue that adding noise prior to the PGD attack ($\\sigma > 0$) and weight decay can help avoid CO. The noise argument is an intuition arising from our theory and, since it is not fully understood, it has been included in the limitations of our work. Regarding weight decay, our result in Corollary 4.3 in the toy model and the experimental results in Figure 3 confirm that constraining the parameter norm and increasing the number of PGD steps $S$ can avoid CO.\\n\\nWe will make sure these aspects are covered more clearly in the revised version of the manuscript. Thanks for helping us improve the readability of the paper.\\n\\n## About dataset re-scaling\\n\\nAs mentioned in our response to Reviewer bJoQ, our target with this theoretical result and experiment was to show that the scale of $\\epsilon_c$ is not only dependent on the hyperparameters of Algorithm 1. Our result in Corollary 3.3 shows that $\\epsilon_c$ can be re-scaled independently of Algorithm 1 just by re-scaling the data. This result is somewhat expected, as re-scaling the data and perturbations is perceptually invariant to the human eye. 
Nevertheless, we believe that the independence of Algorithm 1 and data re-scaling is a valuable and interesting result.\\n\\n## Further questions\\n\\n- **P1: On the construction of $\\theta_k$ through Theorem 4.1**\\n\\nIn Theorem 4.1 we characterize the possible solutions of the AT problem when solved with Algorithm 1. Nevertheless, the convergence of Algorithm 1 to one of these solutions is not guaranteed. As other local minima exist (see Figure 1, top row), convergence will depend on the initialization of the weights, the learning rate and the order of data samples seen during training. This is not covered in Theorem 4.1.\\n\\n- **P2: Connection between Theorem 4.1 and CO**\\n\\nFor a connection between Theorem 4.1 and CO, please check Corollary 4.2. In Theorem 4.1, the upper bound on the optimality gap vanishes to $0$ when increasing $a$. This shows that the optimal loss is achieved. In Corollary 4.2 we show that this results in a correct classification of the PGD points and misclassification of the points $x_i \\pm \\frac{\\epsilon_k}{2\\cdot S}$. Given our characterization of CO in Definition 2.1, this is exactly $(0,0)$-CO.\\n\\n- **P3: What is the relationship between Theorem 4.1, Corollary 4.3 and the observations in Figure 3?**\\n\\nIn Figure 3, the critical value $\\epsilon_c$ is larger when increasing $S$; this is exactly the theoretical insight from Corollary 4.3 in the toy model and the answer to Question (iii).\\n\\n- **P4: \\\"Our analysis in the toy model covers the existence of a CO solution with lower loss than the robust solution for larger $\\epsilon$\\\"**\\n\\nThe robust solution is given by $\\theta=1$ in the toy model. 
In Figure 1, we can see that the FGSM loss at $\\theta=1$ is higher than the optimal loss $\\mathcal{L}^{\\star}$ attained at $\\theta_{\\text{FGSM}}$ for $\\epsilon > \\epsilon_c$.\"}", "{\"title\": \"Rebuttal (1/2)\", \"comment\": \"Dear Reviewer UJp8,\\n\\nThanks for your thorough review and for carefully checking the theoretical details. We respond to your concerns as follows:\\n\\n- **P1: \\\"A more valuable direction would be identifying the conditions under which catastrophic overfitting does not occur\\\"**\\n\\nThere are many works in this direction, such as GradAlign, LLR, CURE or ELLE, which we cover in the introduction. In most works, local linearity is assumed. In this case it is impossible for CO to happen, as the single-step solution is the global solution of the inner maximization problem; therefore, the network cannot overfit to only classify well the single-step adversarial examples.\\n\\nIn this work, we address the question of understanding why CO appears. Concretely, we discover that a phase transition induces CO in single-step AT, we show that CO can occur with arbitrarily small $\\epsilon$, and we provide a mechanism to induce CO for smaller $\\epsilon$ in any dataset or model by simply re-scaling the data.\\n\\n- **P2: \\\"the toy model settings differ significantly from those in deep learning models, suggesting that the theoretical insights derived from the toy model may have limited applicability to real-world deep learning scenarios\\\"**\\n\\nWhile it is true that some of our insights are extracted from the toy model, all of our insights match the experimental observations in large-scale models in the literature, e.g., the sudden appearance of CO for $\\epsilon$ above a critical value $\\epsilon_c$, or the appearance of CO for multi-step AT.\\n\\n- **P3: Proposition 3.1 omits the projection operator in line 7 of Algorithm 1.**\\n\\nNote that since $\\alpha = 1/S$ and $\\sigma=0$, it is guaranteed that all the 
perturbations along the PGD attack will be inside the ball, i.e., $||\\mathbf{\\delta}_{s}^{i}|| _{\\infty} \\leq \\epsilon ~~ \\forall s \\in [S]$, and the projection results in the identity mapping. We have clarified this in the proof of Proposition 3.1.\\n\\n- **P4: When updating $\\mathbf{\\delta}_{s}^{i}$ in PGD-AT, the gradient should be taken w.r.t. $\\mathbf{\\delta}$ rather than w.r.t. $\\mathbf{x}$.**\\n\\nPlease note that $\\nabla_{\\mathbf{x}}h(\\mathbf{x} + \\mathbf{\\delta}) = \\nabla_{\\mathbf{\\delta}}h(\\mathbf{x} + \\mathbf{\\delta})$ for any differentiable function $h : \\mathbb{R}^{d} \\to \\mathbb{R}$. We have added a remark in lines 103-105.\\n\\n- **P5: How is Corollary 3.3 derived from Proposition 3.2?**\\n\\nIntuitively, Proposition 3.2 states that performing AT on a re-scaled dataset with adversarial budget $\\alpha\\cdot \\epsilon$ is effectively the same as performing AT on the standard dataset with adversarial budget $\\epsilon$. This means that every phenomenon observed with adversarial budget $\\epsilon$ on the standard dataset will be observed with adversarial budget $\\alpha\\cdot \\epsilon$ on the re-scaled dataset. That includes CO. We have included a detailed proof in Appendix E.\\n\\n- **P6: The writing could be improved. Theorem 4.1 and Corollaries 4.2 and 4.3 are not clearly explained. The connection between the theory and practical implications is not clear.**\\n\\nWe apologize for not conveying our message clearly; we have improved our writing by:\\n1. Formally defining CO in Definition 2.1.\\n2. Rewriting the statement of Proposition 3.2 and Theorem 4.1.\\n3. Including the proofs of all the corollaries.\\n\\nWith respect to the relationship between the insights from the toy model and practice, we believe it is clear that, similarly to the toy model:\\n\\n1. 
**Agreeing with Corollary 4.2:** There is a clear phase transition with a critical value $\\epsilon_c$ for every studied dataset and various $S$ values, leading to nearly zero PGD-20 accuracy for $\\epsilon > \\epsilon_c$. This is observed in practice in Figure 3.\\n2. **Agreeing with Corollary 4.3:** When the norm is constrained (practically, with weight decay), a larger number of steps $S$ produces a larger $\\epsilon_c$. Observed for SVHN in Figure 3.\\n\\n\\n- **P7: In the top panel of Figure 1, why is catastrophic overfitting attributed to the increased curvature of the local minima?**\\n\\nPlease note that CO is related to a high curvature of the loss **with respect to the input**. In the top panel we simply select the solutions for both the FGSM and AT objectives. Curvature is displayed in the bottom panel, where for larger $\\epsilon$, the sinusoidal classifiers obtained with FGSM present a very high frequency, i.e., curvature.\\n\\n- **P8: In Theorem 4.1, what does $\\theta_{k}^{\\star}$ represent? Why do we choose $\\theta_k$ as $b_k$?**\\n\\n$\\theta^{\\star}$ is a solution that attains the minimum loss $\\mathcal{L}^{\\star} = \\log(1 + e) - 1 \\approx 0.3133$. As explained in the proof of Theorem 4.1, $\\mathcal{L}^{\\star}$ is attained when $\\sin(\\theta_{k}^{\\star}\\cdot(x_i + \\delta_{S}^{i})) = y_i$. We believe that introducing $\\theta_{k}^{\\star}$ in the analysis is not necessary. We have rewritten Theorem 4.1 to compare against the optimal loss value $\\mathcal{L}^{\\star}$ to simplify the presentation.\"}", "{\"title\": \"Response to your comments (Part 1)\", \"comment\": \"Thank you for your response. Regarding your comments:\\n\\n>- Proposition 3.1 and the results in Section 5.4 answer this question affirmatively both theoretically and practically. 
Moreover, the theoretical results in Theorem 4.1, show that this is also the case in the toy model, where curvature along the PGD trajectory is proportional to $\\theta_k^{2}$. Given that $\\theta_k$ is proportional to $1/\\epsilon_k$, this results in higher curvature solutions the smaller $\\epsilon_k$ is.\\n\\n\\nThe authors may have misunderstood my original question. My concern is about the relevance and significance of the proposed question: *\\\"Can we have PGD AT solutions where curvature is high?\\\"* Why is this question interesting or worth investigating in the context of the paper?\\n\\n\\nThe authors acknowledge that, in practice, *\\\"multi-step AT converges to have small curvature in the neighborhood of training points\\\"* (Line 209). Analyzing the scenario where AT converges to high-curvature solutions does not align with what is observed in practice. Consequently, the theoretical analysis based on this scenario is unlikely to uncover the true reasons behind CO in AT in real-world settings.\\n\\nAdditionally, I do not agree that the results in Proposition 3.1 can be interpreted as \\\"answering this question affirmatively.\\\" There is a clear gap between the solutions obtained through AT and those derived by minimizing the second-order Taylor expansion of the adversarial loss. The claims made based on Proposition 3.1 are speculative and should be treated as hypotheses rather than rigorous theoretical results.\\n\\n>- As you pointed out, via re-scaling the data, Corollary 3.3 gives us a mechanism to induce CO for arbitrarily small $\\epsilon$. Nevertheless, this is not our strongest result. Check that Theorem 4.1 provides analytical solutions to the AT problem in the toy model for arbitrarily small $\\epsilon_k$ and arbitrarily large $S$.\\n\\nThe result presented in Corollary 3.3 lacks meaningful insight, as the construction involving the rescaling of the dataset is inherently trivial. 
Unfortunately, this response does not address my primary concerns regarding the significance of this result.\\n\\n>- In lines 238\\u2013246, we argue that adding noise prior the PGD attack ($\\sigma > 0$) and weight decay can help avoid CO. The noise argument is an intuition arising from our theory and since it is not fully understood, it has been included in the limitations of our work. Regarding weight decay, our result in Corollary 4.3 in the toy model and the experimental results in Figure 3 confirm that: Constraining the parameter norm and increasing the number of PGD steps $S$ can avoid CO.\\n\\nThe techniques of \\\"adding noise prior the PGD attack ($\\sigma > 0$) and weight decay\\\" have already been used in PGD-AT and in fast AT, but in practice fast AT equipped with these techniques still suffers from CO. Therefore, the statement that \\\"we argue that adding noise prior the PGD attack ($\\sigma > 0$) and weight decay can help avoid CO\\\" is misleading.\\n\\n\\n\\n>- Our result in Corollary 3.3 shows that $\\epsilon_c$ can be re-scaled independently of Algorithm 1 just by re-scaling the data. This result is somewhat expected, as re-scaling the data and perturbations is perceptually invariant to the human eye. Nevertheless, we believe that the independence of Algorithm 1 and data re-scaling, is a valuable and interesting result.\\n\\nThe authors themselves acknowledge that \\\"this result is somewhat expected,\\\" which underscores its triviality. As such, the result fails to provide any meaningful or novel insights.\"}", "{\"title\": \"Thanks for your acknowledgement\", \"comment\": \"Dear reviewer bJoQ,\\n\\nThanks for your acknowledgment. We have updated Corollary 3.3 and its proof with the new notation. \\n\\nRegarding the robust and PGD accuracies, please note that Theorem 4.1 is built around showing that perfect PGD loss can be obtained for arbitrarily small $\\epsilon$ and arbitrarily large $S$. 
This covers the first need for $(0,0)$-CO: perfect PGD accuracy. For the robust accuracy, in Corollary 4.2, we show that the points $x_i \\\\pm \\\\frac{\\\\epsilon_k}{2\\\\cdot S}$ are not well classified in the solutions given by Theorem 4.1. This covers the second need for $(0,0)$-CO: zero robust accuracy. This is shown rigorously in the proof of Corollary 4.2.\\n\\nThanks again for the discussion. We appreciate the responsiveness. If you are satisfied with our clarifications, we would appreciate an increase in the rating.\\n\\nRegards,\\n\\nAuthors\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"Dear reviewer 4U7J,\\n\\nThanks for your response and for increasing the score. We agree that it would be nice to be able to theoretically connect our toy model with large-scale models and datasets. In our work the connection is empirical, but we hope that in future work this connection can be strengthened.\\n\\nRegards,\\n\\nAuthors\"}", "{\"metareview\": \"This paper studies the phenomenon of catastrophic overfitting in the context of adversarial training with a single step of PGD (FGSM).\", \"pros\": [\"the authors have conducted a comprehensive theoretical analysis,\"], \"cons\": [\"the model investigated in the paper and the techniques are 'toy models' and far from being realistic.\", \"The original paper was lacking clarity.\", \"In the end I believe that the cons slightly outweigh the pros. 
I suggest that the authors investigate how to extend their theory to higher-dimensional models in order to connect the theory of the 'toy model' to more practical models.\"], \"additional_comments_on_reviewer_discussion\": \"Reviewer vMYY did not engage in the discussion, so I disregarded their review.\\n\\nReviewer UJp8 engaged significantly in the discussion with the authors and did not raise their score.\\n\\nEven though Reviewer 4U7J and Reviewer bJoQ increased their scores as the quality of the presentation of the paper improved with the discussion, the concerns regarding the theory remained: the 1D model considered in the theory is very far from any practical machine learning model.\"}", "{\"summary\": \"This paper aims to uncover various mysterious properties of Catastrophic Overfitting (CO) by investigating the implicit bias of Adversarial Training (AT). In particular, the authors design a toy example where the model has only one trainable parameter and the dataset is composed of only two data points. In this example, the authors reveal the existence of a cutoff $\\\\epsilon_c$ such that the adversarially trained model is biased towards solutions with high curvature when $\\\\epsilon > \\\\epsilon_c$, leading to the CO phenomenon. In addition, the authors also design several numerical experiments to support their theoretical claims.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"In general, this paper is well written and organized, making understanding the main idea and logic of this paper fairly easy. The study of CO from the perspective of phase transition by showing the existence of a cutoff $\\\\epsilon_c$ is indeed novel. 
Additionally, the study of the proposed toy model is comprehensive and aligns with the goal and questions raised in the introduction.\", \"weaknesses\": \"Despite the aforementioned strengths and advantages, I have several concerns regarding the overstatement of contribution that make me unwilling to give a higher score, and I will discuss them as follows.\\n \\n- My first concern is about the insufficient discussion of the implicit bias of AT. The authors attempt to link the implicit bias of AT with the appearance of CO. However, Proposition 3.1 is only a Taylor expansion of the adversarial training loss: there is no explicit characterization of the \\u201cimplicit bias\\u201d of AT, as it is unclear what type of solution AT will converge to. Besides, the use of the Taylor expansion to the second order actually assumes that $L(\\\\theta)$ is at least $C^2$-smooth, which is also neglected by the authors. It is also rather odd to say that the higher-order terms of a Taylor expansion could be more significant than the lower-order ones (line 192-193). As this property is important for deriving the existence of a cutoff $\\\\epsilon_c$, it is crucial to explain this point in detail, e.g., does it contradict the essence of the Taylor expansion?\\n\\n On the other hand, as this paper discusses the implicit bias of AT, the connection and difference between this work and related works on the implicit bias of AT, e.g., Li et al, 2019; Lyu & Zhu, 2022, should be discussed. In particular, Lyu & Zhu, 2022 established the implicit bias of AT for homogeneous deep neural networks by showing that the solution converges to a KKT point of the adversarial margin maximization problem. Therefore, I think the authors overstate their contribution regarding the implicit bias of AT. 
As Lyu & Zhu, 2022 also unified FGSM and PGD perturbations as scale-invariant perturbations, I think it would be better to discuss how this property can be connected with the CO phenomenon, since the implicit bias of AT is discussed more precisely there. \\n- My second concern is regarding the lack of connection between the proposed toy model and practical deep neural networks. Though the characterization of the proposed toy model is somewhat comprehensive, the model is far from being realistic, as it has only one trainable parameter and the dataset has only two points. Almost all the theoretical claims are made for this toy model, which, however, are not connected to any type of realistic deep neural networks. It is unclear to me what properties are special to this toy model, what conclusions can be generalized to other models, and why such generalization can be made.\\n\\n----\\nReference\\n\\nLi et al, 2019. Implicit Bias of Gradient Descent based Adversarial Training on Separable Data.\\n\\nLyu & Zhu, 2022. Implicit Bias of Adversarial Training for Deep Neural Networks.\", \"questions\": \"1. How does Proposition 3.1 characterize the properties of the converged solution? Are there any additional conditions or assumptions for making the Taylor expansion eligible?\\n2. When and how will the term proportional to $\\\\epsilon^2$ become more significant than the term proportional to $\\\\epsilon$ as discussed in line 192-193?\\n\\n To me it is rather odd to say that the higher-order terms of a Taylor expansion are more significant than the lower-order ones. Please explain this point carefully.\\n3. Can previous results for the implicit bias of AT be connected with results in the current paper?\\n4. 
Which theoretical claims derived from the proposed toy model can be generalized to other models, and why?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Little is known about why multi-step AT converges to locally linear solutions or which is the underlying phenomenon resulting in CO. This work fills this gap by connecting the empirical observations with a theoretical framework.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors show that a phase transition in the loss structure as a function of the adversarial budget $\\\\epsilon$ manifests as Catastrophic Overfitting (CO).\\n\\n2. The authors show that high-curvature solutions arise in the implicit bias of PGD AT. The authors provide analytical and empirical evidence by appealing to a simple model with one-dimensional inputs and a single trainable parameter, where the CO phenomenon can be replicated.\\n\\n3. The authors compute the critical value $\\\\epsilon_c$ in single-step AT for bounded parameter norms.\", \"weaknesses\": \"1. Adversarial Training (AT) (Madry et al., 2018) and its variants have proven to be the most effective strategy towards achieving adversarially robust models. Where is this inference from? It is better to replace this description with \\u201cone of the most\\u201d.\\n\\n2. Despite the success of these methods and the efforts in understanding CO, little is known about why multi-step AT converges to locally linear solutions or which is the underlying phenomenon resulting in CO. According to this sentence, the motivation for studying the underlying phenomenon resulting in CO is insufficient. Moreover, there is little logical connection between what comes before and after.\\n\\n3. It is redundant to introduce the known PGD Algorithm 1 if you don't bring in additional important ideas. 
Besides, the initialization of the perturbation is random, not just $0$.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer UJp8,\\n\\nThank you for your responsiveness during this rebuttal period. We answer your remaining and new concerns as follows:\\n\\n## Remaining concerns\\n\\n- **P1: Why is it interesting to analyze question (iii): \\\"Can we have PGD AT solutions where curvature is high?\\\" Your setup is very far away from real AT.**\\n\\nAs the reviewer points out, multi-step AT is understood to converge to locally linear solutions and avoids CO. This is shown empirically by Andriushchenko and Flammarion in their seminal paper [1]. Nevertheless, some other works have observed CO in multi-step AT [3]. \\n\\nBefore our paper, it was not clear if CO was avoided because of an implicit local linearity regularization in multi-step AT. In this work, we show a negative result. As intuitively derived from Proposition 3.1, by increasing curvature along the PGD trajectory, we can have high-curvature solutions with any $S$. This is shown theoretically in the toy model in Theorem 4.1 with a closed-form solution for any $S$ and $\\\\epsilon_k$. Additionally, we can understand that such solutions are harder to achieve the more PGD steps are involved and the more we constrain the norm of our parameters (Corollary 4.3). We apologize if our previous responses were not clear enough.\\n\\n- **P2: Your result in Corollary 3.3 regarding data re-scaling is trivial**\\n\\nAs we said in our previous response, the result is intuitive and \\\"somewhat expected\\\". Nevertheless, let us emphasize again that $\\\\epsilon_c$ is re-scaled linearly without being affected by any other hyper-parameter in AT.\\n\\nMoreover, even though the result is easy to show, we are the first to do so. 
While it is not our main result, we believe it fits well within the context of our paper and should be conveyed in our work.\\n\\n\\n- **P3: Adding noise ($\\\\sigma \\\\geq 0$) does not help avoid CO**\\n\\nThis is false. As the reviewer says, it is true that CO can still happen when adding noise, as observed in [1] for RS-FGSM and in [2] for N-FGSM. Nevertheless, the empirical $\\\\epsilon_c$ for these methods is significantly larger, i.e., $\\\\epsilon_c = 16/255$ for N-FGSM vs. $\\\\epsilon_c = 8/255$ for FGSM on CIFAR10 [2]. This shows that noise does help avoid CO.\\n\\n## New concerns\\n\\n- **P4: The upper bound in Theorem 4.1 does not decay to $0$ when increasing $k$ and $a$**\\n\\nThis statement is only true when scaling both of these quantities together, keeping the ratio fixed. However, $k$ and $a$ are independent quantities. Given a value for $k$, we can choose $a$ to obtain the desired accuracy. \\n\\nAs the reviewer points out, the upper bound can be simplified to $\\\\pi\\\\frac{1+4k}{a-1}$. A simple case where this upper bound can be made zero by taking $k$ and $a$ to infinity is taking $a=k^{2}$.\\n\\n- **P5: The $\\\\epsilon_c$ values at larger scales are only empirical**\\n\\nThis is true. Unfortunately, we were not able to provide a theoretical demonstration of the existence of $\\\\epsilon_c$ for larger scales. However, in the toy model we could, and we believe the empirical observations at larger scales are strong enough.\\n\\nPlease let us know if your remaining concerns have been addressed or new ones appear; we will be happy to answer.\\n\\n\\n**References**\\n\\n[1] Andriushchenko and Flammarion. Understanding and Improving Fast Adversarial Training. NeurIPS 2020.\\n\\n[2] Abad Rocamora et al. Efficient Local Linearity Regularization to Overcome Catastrophic Overfitting. ICLR 2024.\\n\\n[3] He et al. Investigating Catastrophic Overfitting in Fast Adversarial Training: A Self-fitting Perspective. arXiv, 2023.\"}
9d6RcViazd
RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis
[ "Detai Xin", "Xu Tan", "Kai Shen", "Zeqian Ju", "Dongchao Yang", "Yuancheng Wang", "Shinnosuke Takamichi", "Hiroshi Saruwatari", "Shujie LIU", "Jinyu Li", "sheng zhao" ]
We present RALL-E, a robust language modeling method for text-to-speech (TTS) synthesis. While previous codec language modeling methods have demonstrated impressive performance in zero-shot TTS, they often struggle with robustness issues, such as unstable prosody (irregular pitch and rhythm/duration) and high word error rates (WER), largely due to their autoregressive prediction style. RALL-E addresses these issues through chain-of-thought (CoT) prompting, which breaks the task into simpler steps to improve the stability of TTS. First, RALL-E predicts prosody tokens (pitch and duration) from the input text and uses them as intermediate conditions to guide the prediction of speech tokens in a CoT manner. Second, RALL-E utilizes the predicted duration prompt to guide the computing of self-attention weights in Transformer, enforcing the model to focus on the corresponding phonemes and prosody tokens during speech token prediction. Comprehensive objective and subjective evaluations show that RALL-E significantly improves robustness in zero-shot TTS compared to the baseline method VALL-E, reducing WER from $5.6\\%$ to $2.5\\%$ without reranking, and from $1.7\\%$ to $1.0\\%$ with reranking. Furthermore, RALL-E outperforms several prior approaches aimed at improving the robustness of codec language models, and successfully synthesizes challenging sentences that VALL-E struggles with, lowering the error rate from $68\\%$ to $4\\%$.
[ "robust text-to-speech synthesis", "codec language models", "chain-of-thought prompting" ]
https://openreview.net/pdf?id=9d6RcViazd
https://openreview.net/forum?id=9d6RcViazd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uyTP8drNQV" ], "note_type": [ "comment" ], "note_created": [ 1730941364924 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"desk_reject_comments\": \"This paper is desk rejected because the Github URL reveals the author's identity (https://ralle-demo.github.io/RALL-E/), which is linked in the introduction. This breaks double blind review.\", \"title\": \"Submission Desk Rejected by Program Chairs\"}" ] }
9chRqsPOGL
SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models
[ "Jiale Cheng", "Xiao Liu", "Cunxiang Wang", "Xiaotao Gu", "Yida Lu", "Dan Zhang", "Yuxiao Dong", "Jie Tang", "Hongning Wang", "Minlie Huang" ]
Instruction-following is a fundamental capability of language models, requiring the model to recognize even the most subtle requirements in the instructions and accurately reflect them in its output. Such an ability is well-suited for and often optimized by preference learning. However, existing methods often directly sample multiple independent responses from the model when creating preference pairs. Such practice can introduce content variations irrelevant to whether the instruction is precisely followed (e.g., different expressions about the same semantic), interfering with the goal of teaching models to recognize the key differences that lead to improved instruction following. In light of this, we introduce SPaR, a self-play framework integrating tree-search self-refinement to yield valid and comparable preference pairs free from distractions. By playing against itself, an LLM employs a tree-search strategy to refine its previous responses with respect to the instruction while minimizing unnecessary variations. Our experiments show that a LLaMA3-8B model, trained over three iterations guided by SPaR, surpasses GPT-4-Turbo on the IFEval benchmark without losing general capabilities. Furthermore, SPaR demonstrates promising scalability, greatly enhancing models like GLM-4-9B and LLaMA3-70B. We also identify how inference scaling in tree search would impact model performance. Our code and data are publicly available at https://github.com/thu-coai/SPaR.
[ "large language model", "instruction-following", "self-improvement" ]
Accept (Poster)
https://openreview.net/pdf?id=9chRqsPOGL
https://openreview.net/forum?id=9chRqsPOGL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zEfye1NU6b", "v234nRUgQg", "uDYTFuHKf1", "tnBEz4rv2u", "s0GsBgKV93", "iPjfM9EWje", "hzp92EhhTo", "cnPh0Sh9pO", "TUui16Jz43", "RtI5tMXdlF", "MNPuig0Ul1", "E5dlCPmx4y", "8ByEtSqwQ3", "21scXypaTB" ], "note_type": [ "official_review", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1730446769191, 1737524038480, 1730728112343, 1732503407079, 1732083523098, 1729148478190, 1732023626962, 1732023468824, 1732023754980, 1732024094381, 1735040113299, 1732247111683, 1730721553374, 1732024370175 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_7aGY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_pYV7" ], [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_pYV7" ], [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_AshF" ], [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_AshF" ], [ "ICLR.cc/2025/Conference/Submission10278/Authors" ], [ "ICLR.cc/2025/Conference/Submission10278/Authors" ], [ "ICLR.cc/2025/Conference/Submission10278/Authors" ], [ "ICLR.cc/2025/Conference/Submission10278/Authors" ], [ "ICLR.cc/2025/Conference/Submission10278/Area_Chair_ocwR" ], [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_7aGY" ], [ "ICLR.cc/2025/Conference/Submission10278/Reviewer_2gma" ], [ "ICLR.cc/2025/Conference/Submission10278/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel self-play framework (called SPAR) integrating tree-search self-refinement to yield valid and comparable preference pairs free from distractions, so as to teach models to recognize the key differences that lead to improved instruction following. 
To gain a good start for the Actor and Refiner, the authors construct a high-quality dataset with 43K complex instruction-following prompts and an SFT dataset for improving the instruction-following capabilities of LLMs. Through extensive experiments on several LLMs, the authors demonstrate the effectiveness of the proposed SPAR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed approach is intuitive and has strong motivation.\", \"This paper is well-written and presents clear ideas.\", \"The authors conduct extensive experiments to validate the effectiveness of the proposed SPAR.\"], \"weaknesses\": [\"**SPAR introduces additional training overheads.** SPAR initially requires constructing additional data to train the actor and trainer. Building on this, it needs to incorporate iterative training with tree search and self-consistency, which greatly increases the training cost compared to self-rewarding.\", \"**Some crucial information is missing in the experiment section.** For example, what is the average number of search nodes in the tree search, and does it decrease with the iterations? How does LLaMA3-70B perform at different iterations (SPAR-70B-SFT, SPAR-70B-DPO-iter1, SPAR-70B-DPO-iter2)?\"], \"questions\": \"See weaknesses.\", \"in_addition\": \"(1) line 527, GPT-4-Turbo or GPT-4o-mini?\\n\\n(2) Can you compare the training cost of SPAR, Self-Rewarding and Meta-Rewarding?\\n\\n(3) Do more iterations bring higher performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This study presents SPAR, a self-play framework that enhances LLMs' instruction-following capabilities by training with refined preference pairs. 
Unlike traditional methods that rely on independent response sampling, SPAR refines pairs to reduce irrelevant factors, thereby emphasizing critical distinctions, leading to notable improvements in instruction adherence. SPAR\\u2019s iterative process enhances instruction-following, judgment, and refinement, offering a pathway for continuous model improvement.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The motivation makes sense. The fine-grained refinement is essential for further improving the model's instruction-following abilities.\\n2. The design of the proposed method is sound, allowing it to effectively achieve its intended motivation.\\n3. The experiments are comprehensive, demonstrating relatively strong performance.\", \"weaknesses\": \"1. The applicability of the method may be limited. It might be suitable primarily for further improvement of models that already possess strong instruction-following capabilities, as the experiments were conducted on models that had already undergone instruction fine-tuning. Additionally, a strong LLM is required for warm-up training before iteration (This also raises concerns about the fairness of comparisons.), and one of the goals of dataset construction is to introduce more complex instructions.\\n2. Missing comparison with a key baseline\\uff1aSelf-Alignment with Instruction Backtranslation.\", \"questions\": \"1. Why are judgment and refinement performed by the same model? What would happen if they were separated, or combined with the actor model, using a single model for all tasks?\\n2. I haven't closely checked the details of the baselines. 
Do they use a strong LLM, or do they rely solely on the model being evolved?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"My concerns have been addressed, so I have increased my score.\"}", "{\"comment\": \"The direct comparison with Self-rewarding under controlled cost conditions has enhanced my confidence in this work, which is suggested to be added in the revised version. Thank the author for addressing my concerns and I will maintain my rating.\"}", "{\"summary\": \"The authors introduce SPAR, an automated and scalable approach designed for self-improvement in instruction-following tasks through self-play. The core idea is to create paired responses with minimal irrelevant variations, allowing for precise training of the model's instruction-following capabilities. In the SPAR framework, the authors fully leverage test-time scaling: using tree search to obtain higher-quality data for training the model's instruction-following abilities, and using self-consistency to acquire higher-quality data for training the model's discriminative and refinement abilities. 
Experimental results show that the SPAR framework significantly outperforms various self-critique baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Constructing tailored and distinct instruction-following response pairs for the model by eliminating irrelevant content is a strong motivation for enhancing the model's instruction-following abilities.\", \"The SPAR framework's proposal to use test-time scaling during the training phase to obtain high-quality data for training the model's\", \"The experimental setup is reasonable, and the results appear promising.\", \"The writing in the paper is clear and easy to understand.\"], \"weaknesses\": \"Using test-time scaling (more accurately, inference-time scaling) during the training phase to obtain high-quality data for self-critique is well-motivated, but it undoubtedly introduces significant training overhead. Therefore, providing a detailed comparison of the training costs of different methods, or comparing the gains when the costs are aligned, would make the paper's conclusions more convincing.\", \"typo\": \"line 239 needs a blank after 'refiner'.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pYV7\", \"comment\": \"Dear Reviewer pYV7,\\n\\nThank you for your thoughtful and constructive feedback and comments! We deeply appreciate your suggestions and spare no effort during this response stage to make improvements accordingly. Below are our responses:\\n\\n**1. On Model Capability for Self-improvement**\\n> Weakness 1: The applicability of the method may be limited. It might be suitable primarily for further improvement of models that already possess strong instruction-following capabilities.\\n\\nThank you for your insightful comment! 
Self-improvement methods are typically designed to enhance models that already possess reasonable capabilities. \\nCompetitive baselines, including Meta-rewarding and AutoIF, rely on strong base models. Meta-rewarding uses LLaMA3-8B-Instruct, while AutoIF uses LLaMA3-70B-Instruct.\\n\\nHowever, our method also demonstrates its applicability to less powerful models, such as Mistral-7B-Instruct, as shown in Table 9. For these models, we provide a dataset to bootstrap their capabilities before our iterative training. To ensure fair comparisons, we also used our constructed dataset to initialize the baseline methods.\\n\\n\\n**2. New Baseline**\\n> Weakness 2: Missing comparison with a key baseline: Self-Alignment with Instruction Backtranslation.\\n\\nThank you for your valuable suggestion! Initially, the lack of this method's official implementation and dataset deterred us from including it. We have since reproduced this method based on the original paper and available unofficial resources (https://github.com/Spico197/Humback). Here are the results: \\n\\n\\n| Model | Method | IFEval | | | | | FollowBench | | | | | |\\n|-|-|-|--|-|-|-|-|-|-|-|-|\\n| | | P (L) | I (L) | P (S) | I (S) | Avg. | Lv-1 | Lv-2 | Lv-3 | Lv-4 | Lv-5 | Avg. |\\n| LLaMA3-8B | Humpback | 72.5 | 80.2 | 70.1 | 78.1 | 75.2 | 66.8| 66.1| 67.2| 60.2| 62.6| 64.6|\\n| | SPaR| 79.9 | 85.4 | 78.0 | 83.7 | 81.8 | 73.0 | 72.3| 70.0| 64.1| 64.7| 68.8|\\n| Mistral-7B | Humpback | 60.4 | 71.0 | 56.6 | 67.6 | 63.9 | 70.7| 63.9| 63.8| 59.8| 57.9| 63.2|\\n| | SPaR| 74.1 | 80.9 | 69.7 | 77.1 | 75.5 | 74.6| 63.8| 66.1| 61.0 | 58.0 | 64.7 |\\n\\nOur method has demonstrated stronger performance compared to this baseline.\\n\\n**3. On Task Combination**\\n> Question 1: Why are judgment and refinement performed by the same model? 
What would happen if they were separated, or combined with the actor model, using a single model for all tasks?\\n\\nWe have conducted additional experiments using LLaMA3-8B to explore the impact of separating judgment and refinement tasks.\\n\\n| Method | IFEval | | | | | FollowBench | | | | | |\\n|-|-|--|-|-|-|-|-|-|-|-|-|\\n| | P (L) | I (L) | P (S) | I (S) | Avg. | Lv-1 | Lv-2 | Lv-3 | Lv-4 | Lv-5 | Avg. |\\n| Separate | 77.8 | 84.2| 75.4 | 82.5| 80.0 | 73.7 | 67.0 | 67.7 | 62.7 | 64.6 | 67.1 |\\n| Combined | 78.0 | 84.7| 75.8 | 82.6| 80.3 | 75.3 | 67.7 | 67.6 | 64.7 | 62.3 | 67.5 |\\n\\nThe results showed that combining these tasks slightly outperforms separating them. Additionally, since judgment and refinement naturally constitute a two-turn task, it is efficient to handle them using a single model.\\n\\nOur preliminary experiments indicate that combining all tasks into a single model slightly reduces performance on instruction-following benchmarks (also observed in Figure 3, as mentioned in line 377), likely due to the judgment and refinement tasks differing notably from the instruction-following task.\\n\\n**4. Clarification of Baseline Details**\\n> Question 2: I haven't closely checked the details of the baselines. Do they use a strong LLM, or do they rely solely on the model being evolved?\\n\\nAll baselines used in our experiments require bootstrapping or seed datasets, which can be generated either through a strong LLM or curated by human experts. 
We have maintained uniform settings across all baselines and our method to ensure a fair comparison.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank all the reviewers for their thoughtful comments and constructive suggestions, which significantly helped us strengthen our paper.\\nWe are encouraged to see that the reviewers recognize the novelty and significance of our proposed SPAR framework (Reviewer pYV7, 2gma, 7aGY, AshF), its sound design and comprehensive experimental validation (Reviewer pYV7, 2gma, 7aGY, AshF), and the clarity of our presentation (Reviewer pYV7, 7aGY, AshF).\\n\\nIn response to the reviewers' feedback, we have submitted an updated version of our paper, which now includes more experimental details, the Humpback baseline comparison, additional experiments with more models such as GLM-4-9B, and a more detailed analysis of the cost associated with our method.\\n\\nSeveral reviewers suggested including more information regarding the cost of our method. We would like to emphasize that leveraging inference-time scaling for training to improve model performance is well-motivated (as recognized by Reviewer AshF) and is actually more efficient compared to traditional pretraining solutions or producing high-quality data with human annotators.\\nFurthermore, our overall cost\\u2014averaging around 3.7 expanded tree nodes\\u2014is within a reasonable range, ensuring the feasibility and scalability of the approach.\"}", "{\"title\": \"Response to Reviewer 2gma\", \"comment\": \"Dear Reviewer 2gma,\\n\\nThanks for your comprehensive and detailed suggestions for our work! We really value your comment on experiments with additional models, compute resources and baseline details. We hope our detailed response could address your concerns:\\n\\n**1. Experiments with Additional Models**\\n> Weakness 1: Limited validation across models. The effectiveness of the method was validated on only three models. 
\\n\\nThank you for your valuable suggestion! We have added additional experiments using GLM-4-9B, a widely used open-source LLM in the community. The results are shown below (cf. Table 9 for full results): \\n\\n| Method | IFEval | | | | | FollowBench | | | | | |\\n|-|-|--|-|-|-|-|-|-|-|-|-|\\n| | P (L) | I (L) | P (S) | I (S) | Avg. | Lv-1 | Lv-2 | Lv-3 | Lv-4 | Lv-5 | Avg. |\\n| GLM-4-9B | 71.5 | 79.9| 68.0 | 77.2| 74.15 | 80.8 | 75.1 | 67.4 | 64.3 | 65.4 | 70.6 |\\n| SPaR-9B | 77.3 | 84.1| 73.6 | 81.4| 79.1 | 82.7 | 76.7 | 67.9 | 68.3 | 64.2 | 72.0 |\\n\\nSPaR significantly improves the instruction-following capabilities of GLM-4-9B. We believe this further validates the effectiveness of our method.\\n\\n\\n**2. On Computational Resources**\\n> The framework's iterative training, including tree-search refinement and multiple model roles, may require significant computational resources.\\n\\nThank you for your insightful comment! As the ultimate goal is to enhance the model's performance in instruction following, inference-time scaling is actually more efficient than traditional pretraining solutions or creating high-quality data with human annotators.\\n\\nWe have calculated the average number of expanded tree nodes in our framework and included these details in Appendix C. Specifically, with the LLaMA3-8B model, the average number of expanded tree nodes is around 3.8, which is within an acceptable range of added computation. \\n\\nMoreover, to effectively understand the costs and gains, we have conducted an additional series of experiments controlling the maximum number of response generations (inference times) during the training data construction process. We perform these experiments on two models, LLaMA3-8B and Mistral-7B, and report the average performance on IFEval. 
Here are the results:\\n\\n| Model | Method | 5 | 10 | 15 | 20 |\\n|---|-|--|--|--|--|\\n| LLaMA3-8B | Self-rewarding | 79.3 | 79.2 | 79.2 | 78.9 |\\n| | SPaR | 79.0 | 79.5 | 80.1 | 80.3 |\\n| Mistral-7B | Self-rewarding | 66.5 | 65.5 | 66.5 | 66.2 |\\n| | SPaR | 67.0 | 68.3 | 70.0 | 70.8 |\\n\\nThe results indicate that SPaR surpasses the Self-rewarding method when additional computational resources are allocated for inference. This highlights the advantage of our method in scaling inference-time costs to achieve superior performance.\\n\\n**3. Clarification of Comparative Details**\\n> Weakness 3: Lack of comparative details. The paper lacks sufficient details in its comparisons with other methods, such as how each baseline initializes the model.\\n\\nThank you for your valuable suggestion! We have made every effort to ensure that the comparisons between baselines are fair, including the model initialization. We have expanded the details in Appendix C.\\n\\n\\n**4. On Iterative Improvement**\\n> Question 1: The paper only lists results from the first three iterations, and the data indicate that the model's performance still has room for improvement.\\n\\nThank you for your insightful comment! Additional iterations can generally improve the model's performance but with diminishing returns.\\nFor instance, in our experiments with LLaMA3-8B, extending to a fourth iteration showed a smaller average improvement of 0.3.\\nThis can be caused by model capacity limitations or challenges in iterative DPO training. \\n\\nFurthermore, most self-training methods, such as Self-rewarding and SELF, typically use three iterations. We follow this practice to make it easier to compare our results with these methods.\\n\\n\\n**5. On Base Model Capability**\\n> Question 2: Does the framework heavily depend on the model's initial performance? 
Can it be directly applied to the raw models provided officially?\\n\\nOur method can be applied directly to capable open-source models like LLaMA3-70B-Instruct, as shown in the experiments. \\n\\nMoreover, for weaker models, like Mistral-7B-Instruct, we can use a small-sized curated dataset to bootstrap their capabilities, after which iterative training can be effectively applied. \\n\\n\\n**6. Clarification of Experimental Details**\\n> Question 3: It is suggested to directly specify the strong LLM used in Section 2.2.2\\n\\nThank you for your suggestion! We have mentioned this in Appendix C Implementation Details.\"}", "{\"title\": \"Response to Reviewer 7aGY\", \"comment\": \"Dear Reviewer 7aGY,\\n\\nThanks for your comprehensive and detailed suggestions for our work! We really value your comments on the training overheads and clarification of experimental details. We hope our responses below could address your concerns:\\n\\n**1. On Training Overheads**\\n> Weakness 1: SPaR introduces additional training overheads.\\n\\nThank you for your insightful comment! As the ultimate goal is to enhance the model's performance in instruction following, inference-time scaling is actually more efficient than traditional pretraining solutions or creating high-quality data with human annotators.\\n\\nWe have calculated the average number of expanded tree nodes in our framework and included these details in Appendix C. Specifically, with the LLaMA3-8B model, the average number of expanded tree nodes is around 3.8, which is within an acceptable range of added computation. \\n\\nMoreover, to effectively understand the costs and gains, we have conducted an additional series of experiments controlling the maximum number of response generations (inference times) during the training data construction process. We perform these experiments on two models, LLaMA3-8B and Mistral-7B, and report the average performance on IFEval. 
Here are the results:\\n\\n| Model | Method | 5 | 10 | 15 | 20 |\\n|---|-|--|--|--|--|\\n| LLaMA3-8B | Self-rewarding | 79.3 | 79.2 | 79.2 | 78.9 |\\n| | SPaR | 79.0 | 79.5 | 80.1 | 80.3 |\\n| Mistral-7B | Self-rewarding | 66.5 | 65.5 | 66.5 | 66.2 |\\n| | SPaR | 67.0 | 68.3 | 70.0 | 70.8 |\\n\\nThe results indicate that SPaR surpasses the Self-rewarding method when additional computational resources are allocated for inference. This highlights the advantage of our method in scaling inference-time costs to achieve superior performance.\\n\\n**2. Clarification of Experimental Details**\\n> Weakness 2: Some experimental details are not shown in the paper.\\n\\nThank you for your valuable suggestions! We have included the average number of search nodes in our experiments in Appendix C. For LLaMA3-8B, the average number of expanded nodes is 4.3, 3.7, and 3.4 across different iterations, demonstrating a decreasing trend as the model becomes better. The performance of LLaMA3-70B at each iteration, previously omitted due to space constraints, has now been added to Appendix D.1 Table 9.\\n\\n> Question 1: line 527, GPT-4-Turbo or GPT-4o-mini?\\n\\nThe model is GPT-4-Turbo. SPaR-trained LLaMA3-8B-Instruct outperforms GPT-4-Turbo on the IFEval benchmark. The performance of GPT-4-Turbo is derived from the original benchmark [1].\\n\\n[1] Zhou, Jeffrey, et al. \\\"Instruction-following evaluation for large language models.\\\" arXiv preprint arXiv:2311.07911 (2023).\\n\\n\\n> Question 2: Can you compare the training cost of SPAR, Self-Rewarding and Meta-Rewarding?\\n\\nWe would like to clarify that the training costs for all three methods are nearly identical, as we have controlled the number of training samples to ensure fairness. \\n\\nThe inference times required for data construction vary among these methods. 
For instance, in the case of LLaMA3-8B, the average number of responses generated by Self-Rewarding and Meta-Rewarding methods is 5, whereas for SPaR, it is approximately 8.8. This increase is within an acceptable range. As mentioned in our response to Weakness 1, we have conducted experiments to compare the costs and gains.\\n\\n\\n**3. On Iterative Improvement**\\n> Question 3: Do more iterations bring higher performance?\\n\\nThank you for your insightful comment! Additional iterations can generally improve the model's performance but with diminishing returns.\\nFor instance, in our experiments with LLaMA3-8B, extending to a fourth iteration showed a smaller average improvement of 0.3.\\nThis can be caused by model capacity limitations or challenges in iterative DPO training. \\n\\nFurthermore, most self-training methods, such as Self-rewarding and SELF, typically use three iterations. We follow this practice to make it easier to compare our results with these methods.\"}", "{\"metareview\": \"## Summary:\\nThe paper introduces SPaR, a self-play framework that enhances instruction-following capabilities in language models by refining preference pairs through tree-search self-refinement. Unlike traditional methods that rely on independent response sampling, SPaR minimizes irrelevant factors and emphasizes critical distinctions, leading to improved instruction adherence. Through an iterative training process, SPaR guides LLMs to recognize key differences crucial for enhanced instruction following. Experimentation demonstrates the effectiveness of SPaR, with an LLaMA3-8B model trained using SPaR surpassing GPT-4-Turbo on the IFEval benchmark without compromising general capabilities. Additionally, SPaR shows promising scalability and greatly improves the performance of LLaMA3-70B. The framework offers novel insights into continuous model improvement. \\n\\n## Strengths:\\n1. 
The paper's motivation for fine-grained refinement of responses for instruction following is well-founded and crucial for continuous model improvement.\\n1. The tree search-based negative refinement is effective by minimizing content variations, allowing the model to focus on essential elements for enhancing instruction-following accuracy.\\n1. Comprehensive ablation experiments validate the impact of interfering factors on preference learning and the rationality of each component in the framework.\\n1. The approach maintains generalization without compromising overall language model capabilities, indicating a balanced enhancement.\\n1. The paper presents comprehensive experiments with clear writing.\\n\\n## Weaknesses:\\n1. Lack of some experiments: the initial draft does not apply the proposed method to finetuning weaker base models or other different models; Comparisons to more baselines are needed; Using the same vs. separate models for judgment and refinement tasks; Training for more iterations. The rebuttal addressed these concerns by reporting the corresponding experimental results. \\n1. Concerns about the extra overhead introduced by the data exploration process (tree search and decoding). An analysis and measurement of the cost is needed. A comparison with baselines under the same computation budget is necessary. It would be better to compare the trade-off between data exploration cost and the instruction-following improvement. \\n1. BFS and DFS as two basic tree search algorithms have been studied. It would be more interesting to try other search algorithms such as greedy search, A*, and MCTS, and compare their performance. \\n1. There are several concurrent works adopting the LLM + tree search idea to generate better data for LLM finetuning. It would be helpful to include discussions of them in the related work section. \\n\\n## Decision:\\nThe reviewers raised several concerns mainly regarding experimental comparisons and computational time overhead. 
The authors provided further clarifications and additional experimental results in the rebuttal, which addressed most concerns, as confirmed by three out of the four reviewers. The efforts of the authors convinced the reviewers, resulting in all positive ratings (8666). The meta-reviewer is familiar with the field and carefully reads the paper, all the comments, the rebuttals, and the discussions. The paper addresses an important open challenge for the self-improvement of LLMs: how to generate more informative and relevant preference pairs. Using tree search to refine the negative responses is an effective and intuitive idea, as demonstrated by the comprehensive experiments in the paper. By strengthening the paper with all the results presented in the discussion, the meta-reviewer believes this paper is valuable and inspirational to the community, hence acceptance is recommended.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised several concerns mainly regarding experimental comparisons and computational time overhead. The authors provided further clarifications and additional experimental results in the rebuttal, which addressed most concerns, as confirmed by three out of the four reviewers. The efforts of the authors convinced the reviewers, resulting in all positive ratings (8666). The meta-reviewer is familiar with the field and carefully reads the paper, all the comments, the rebuttals, and the discussions.\"}", "{\"comment\": \"Thanks for the detailed response. Currently, most of my concerns are resolved and I maintain my score.\"}", "{\"summary\": \"The paper introduces SPAR (Self-Play with Tree-Search Refinement), a self-improvement framework that enhances the instruction-following capabilities of LLMs by minimizing extraneous factors and highlighting key differences in preference pairs. 
This method involves an iterative training process where a model (actor) performs tasks, and a paired model (refiner) evaluates and refines the imperfect responses using a tree-search algorithm through structured feedback loops. The authors evaluate SPAR with two LLMs on the IFEval and FollowBench benchmarks. Additionally, they contribute a dataset with 43k complex instruction-following prompts and an SFT dataset that can improve the instruction-following capabilities of LLMs.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Effective in reducing noise. By minimizing content variations in preference pairs, SPAR helps the model focus on essential elements, which improves its instruction-following accuracy.\\n\\n2. Comprehensive ablation experiments. The authors conducted extensive ablation studies to verify the impact of interfering factors on preference learning and to assess the rationality of each component in the framework.\\n\\n3. Generalization without degradation. The approach does not degrade general language model capabilities, suggesting a balanced enhancement in alignment without compromising overall functionality.\\n\\n4. Contribution of datasets. The authors provide valuable datasets that benefit the development of this research area.\", \"weaknesses\": \"1. Limited validation across models. The effectiveness of the method was validated on only three models. Further exploration is needed to assess the framework's applicability to other models.\\n\\n2. Reliance on complex setup and compute resources. The framework's iterative training, including tree-search refinement and multiple model roles, may require significant computational resources. Therefore, the performance-cost trade-off needs further clarification. \\n\\n3. Lack of comparative details. The paper lacks sufficient details in its comparisons with other methods, such as how each baseline initializes the model.\", \"questions\": \"1. 
The paper only lists results from the first three iterations, and the data indicate that the model's performance still has room for improvement. Could you provide a simple analysis of when the model might reach optimal performance?\\n\\n2. Does the framework heavily depend on the model's initial performance? Can it be directly applied to the raw models provided officially?\\n\\n3. It is suggested to directly specify the strong LLM used in Section 2.2.2\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer AshF\", \"comment\": \"Dear Reviewer AshF,\\n\\nWe are deeply thankful for your positive feedback and for acknowledging the novelty and significance of our contributions. Your recognition of our method's scalability, the motivation behind leveraging test-time scaling for training, the promising experimental results, and the clarity of our writing are genuinely encouraging. We are especially honored by your clear and strong support!\\n\\n\\n**1. On Higher Costs**\\n> Weakness 1: Inference-time scaling is well-motivated for model improvement but will introduce higher costs.\\n\\nThank you for your insightful comment! As the ultimate goal is to enhance the model's performance in instruction following, inference-time scaling is actually more efficient than traditional pretraining solutions or creating high-quality data with human annotators.\\n\\nWe have calculated the average number of expanded tree nodes in our framework and included these details in Appendix C. Specifically, with the LLaMA3-8B model, the average number of expanded tree nodes is around 3.8, which is within an acceptable range of added computation. 
\\n\\nMoreover, to effectively understand the costs and gains, we have conducted an additional series of experiments controlling the maximum number of response generations (inference times) during the training data construction process. We perform these experiments on two models, LLaMA3-8B and Mistral-7B, and report the average performance on IFEval. Here are the results:\\n\\n| Model | Method | 5 | 10 | 15 | 20 |\\n|---|-|--|--|--|--|\\n| LLaMA3-8B | Self-rewarding | 79.3 | 79.2 | 79.2 | 78.9 |\\n| | SPaR | 79.0 | 79.5 | 80.1 | 80.3 |\\n| Mistral-7B | Self-rewarding | 66.5 | 65.5 | 66.5 | 66.2 |\\n| | SPaR | 67.0 | 68.3 | 70.0 | 70.8 |\\n\\nThe results indicate that SPaR surpasses the Self-rewarding method when additional computational resources are allocated for inference. This highlights the advantage of our method in scaling inference-time costs to achieve superior performance.\\n\\n**2. Clarification of Writing Issue**\\n> Weakness 2: Typo in line 239\\n\\nThank you for your careful and detailed reading! We have corrected this issue and thoroughly reviewed the entire paper to address any other errors.\"}" ] }
9ccZzuix2D
Distilling the Knowledge in Data Pruning
[ "Emanuel Ben Baruch", "Adam Botach", "Igor Kviatkovsky", "Manoj Aggarwal", "Gerard Medioni" ]
With the increasing size of datasets used for training neural networks, data pruning has gained traction in recent years. However, most current data pruning algorithms are limited in their ability to preserve accuracy compared to models trained on the full data, especially in high pruning regimes. In this paper we explore the application of data pruning while incorporating knowledge distillation (KD) when training on a pruned subset. That is, rather than relying solely on ground-truth labels, we also use the soft predictions from a teacher network pre-trained on the complete data. By integrating KD into training, we demonstrate significant improvement across datasets, pruning methods, and on all pruning fractions. We first establish a theoretical motivation for employing self-distillation to improve training on pruned data. Then, we empirically make a compelling and highly practical observation: using KD, simple random pruning is comparable or superior to sophisticated pruning methods across all pruning regimes. On ImageNet for example, we achieve superior accuracy despite training on a random subset of only 50\% of the data. Additionally, we demonstrate a crucial connection between the pruning factor and the optimal knowledge distillation weight. This helps mitigate the impact of samples with noisy labels and low-quality images retained by typical pruning algorithms. Finally, we make an intriguing observation: when using lower pruning fractions, larger teachers lead to accuracy degradation, while surprisingly, employing teachers with a smaller capacity than the student's may improve results. Our code will be made available.
[ "Data pruning", "Knowledge distillation" ]
Reject
https://openreview.net/pdf?id=9ccZzuix2D
https://openreview.net/forum?id=9ccZzuix2D
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v3f2JAeBhN", "tzcr8S5elD", "sUV9E9rYVf", "qYeNUJb66w", "lasnNOl2O6", "gEDCjmxgRv", "f65sCz3Vq1", "d8tT4FTcYP", "cQnpyxFbg0", "bCqSrHBx1v", "a50zcepZrQ", "ZiFRSepQ6Z", "XNErTnq7Rr", "XIWt1uoOLS", "SYK0DYKcYS", "S7VPfTnEao", "RG2dDekpSU", "IMPrMo2xlf", "H5ZycFYNRe", "5m87IIix7M", "4lrKoUzWrO", "4DCrdCzDGw", "3ruMAFcXMu", "0veMXrxLUx" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732522208233, 1732873891078, 1733212441397, 1731950929835, 1732203151785, 1730624491484, 1734772533902, 1732203085141, 1737523462025, 1731951095422, 1731950721264, 1731948600055, 1731993785113, 1733188951826, 1731768968887, 1732784288602, 1729262019167, 1730537750181, 1732039543717, 1731489552749, 1733251550875, 1732522301132, 1732203531149, 1732524324416 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Reviewer_wDnx" ], [ "ICLR.cc/2025/Conference/Submission1638/Area_Chair_WFfc" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Reviewer_wDnx" ], [ 
"ICLR.cc/2025/Conference/Submission1638/Reviewer_jiaU" ], [ "ICLR.cc/2025/Conference/Submission1638/Reviewer_wDnx" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Reviewer_jiaU" ], [ "ICLR.cc/2025/Conference/Submission1638/Reviewer_5t8M" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Authors" ], [ "ICLR.cc/2025/Conference/Submission1638/Reviewer_5t8M" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer, thank you once again for your time and valuable feedback.\\nWe hope our responses have addressed your concerns. Based on your suggestions, we have added an experimental section (Section 4.4 and Figure 7) and expanded the discussion in the main paper. The revised version has been uploaded, with the new additions highlighted in blue.\\n\\n\\nWe would greatly appreciate it if you could revisit your evaluation and consider raising your score.\"}", "{\"comment\": \"Thank you again for your thoughtful feedback; we highly appreciate your effort and time.\\n\\n\\nPlease note that in response to your comment regarding prior works on KD [1, 2], we have added experiments analyzing easy, moderate, and hard pruning with and without KD, across various fraction ratios. These results are detailed in a new section in the Appendix, with a snapshot provided in the link below:\\n\\n[Add_experiments_easy_moderate_hard_pruning](https://i.imgur.com/4gGKrU5.png)\\n\\nSpecifically, [1] addresses undistillable classes, and [2] explores the role of \\u201cdifficulty\\u201d in KD (TCKD). 
Accordingly, we have included an experimental section that investigates pruning at different levels of difficulty.\\n\\n&nbsp; \\n\\n\\nWe also believe that our novel observations\\u2014such as the finding that smaller teachers can outperform larger ones, and the counterintuitive result that simple random pruning is superior when paired with KD\\u2014offer insights that open up new directions in understanding the interplay between KD and data pruning. Further exploration of advanced KD approaches is left as future work.\\n\\n&nbsp; \\n\\n\\nWe would be delighted to provide additional clarifications if needed. \\n\\nThank you again for your time and consideration.\\n\\n\\n&nbsp; \\n\\n\\n**Reference**\\n\\n[1] Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation, NeurIPS 2022.\\n\\n[2] Decoupled Knowledge Distillation, CVPR 2022.\"}", "{\"comment\": \"We sincerely thank the reviewer for the response and greatly appreciate the clarification provided in the comment.\\n\\n&nbsp; \\n\\n\\nWe would like to emphasize that the theorem and proof presented in our paper pertain specifically to the scenario where both the teacher and the student share identical architectures. Addressing the case where both the teacher's architecture capacity and the size of its dataset are jointly reduced would indeed require further exploration and additional theoretical insights. As this goes beyond the scope of our current work, we will make this distinction clearer in the final version of the paper.\\n\\n\\n&nbsp; \\n\\nOur paper introduces, for the first time, several intriguing and practical observations about dataset pruning with knowledge distillation. If the reviewer finds these insights and the supporting experiments valuable for the community, we kindly request you to consider revising your score.\\n\\nThanks again! :)\"}", "{\"title\": \"Regarding additional works\", \"comment\": \"We find the reviewer\\u2019s references highly relevant. 
The work in [1] is particularly related to the capacity gap problem, and we will include it in our paper. Additionally, we will cite the works in [2, 3, 4] as recommended to further enrich our discussion.\\n\\n\\n**References**\\n\\n[1] Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation, NeurIPS 2022.\\n\\n[2] Decoupled Knowledge Distillation, CVPR 2022.\\n\\n[3] Knowledge Distillation from a Stronger Teacher, NeurIPS 2022.\\n\\n[4] Data Pruning via Moving-one-Sample-Out, NeurIPS 2023.\"}", "{\"comment\": \"In response to the reviewer\\u2019s comments, we have made several additions to the paper. A revised version has been uploaded, with the new text highlighted in blue for clarity.\\n\\nSpecifically, we have included additional experiments in the Appendix (F), which provide insights into the impact of easy, moderate, and hard pruning on the KD process and the student\\u2019s performance in the context of data pruning. Additionally, we have included citations to the recent papers mentioned by the reviewer.\"}", "{\"summary\": \"This paper presents an in-depth investigation into the use of knowledge distillation (KD) for training models on pruned datasets. It provides a comprehensive analysis of the performance of models trained using various dataset pruning strategies and pruning factors across multiple datasets, both with and without the application of KD from their pretrained teachers. Based on the experimental findings, the authors demonstrate that employing a teacher model trained on the full dataset can effectively enhance the performance of student models trained on pruned datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The experiments in this are comprehensive and sufficient.\\n\\n2. The theoretical motivation is well-written and reasonable.\\n\\n3. The paper is easy to read and follow.\", \"weaknesses\": \"1. The paper has a reference formatting issue. 
According to the ICLR submission guidelines, `\\citep{}` should be used instead of `\\cite{}`.\\n\\n2. The motivation is confusing and needs more clarification. Specifically, this study proposes training a model on a pruned dataset using knowledge distillation from the same model that has already been trained on the full dataset. However, if a well-performing model has already been obtained through training on the complete dataset, it raises the question of __why it is necessary to train the same model from scratch on a pruned dataset__. Providing a stronger rationale for this approach would enhance the clarity and relevance of the study.\\n\\n3. Although this paper has provided a large volume of experimental results, there is a lack of insightful analysis. For instance, an explanation of how different dataset pruning approaches behave differently under the same self-distillation scheme could be provided.\\n\\n4. In Figures 4 and 5, only hard dataset pruning approaches have been compared, while the easy (such as herding) and moderate (such as MoDS) pruning methods are not compared.\\n\\n5. In Section 4.2, the experiments on adaptive KD weights and pruning factors are only conducted on the CIFAR-100 dataset. The results indicate varying optimal KD weight choices for different pruning factors: a smaller KD weight is preferred when the pruning factor is close to 1, whereas a larger KD weight is favoured when the factor is around 0.1. However, there does not appear to be a consistent pattern underlying this behaviour, making it less practical in real-world datasets. Furthermore, the accuracy gap between different KD weights can exceed 8% from 0.5 to 1.0. It is reasonable to anticipate that this accuracy gap escalates on a larger dataset, such as ImageNet-1k. In this case, it is unclear how to choose the optimal weight for a large dataset.\", \"questions\": \"1. 
According to W2 and W3, the authors should provide more concrete motivation on why this study is contributive and in what real-world scenarios this analysis can be applied.\\n\\n2. According to W4 and W5, the authors should provide more experiments to demonstrate the generalizability and practicality of the proposed self-distillation approach.\\n\\nI'm willing to raise my rating to positive if the authors can provide convincing motivations and necessary experimental results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper investigates the use of knowledge distillation (KD) to improve the training of neural networks on pruned datasets. It demonstrates that incorporating KD from a teacher model trained on the full dataset can enhance model performance across various datasets, pruning methods, and pruning fractions. The study also establishes a relationship between the pruning factor and the optimal KD weight, offering practical insights, such as the potential benefits of using smaller teacher models in lower pruning regimes. Experiments on datasets like CIFAR, SVHN, and ImageNet support these findings.\\n\\nThe strengths of this paper include its clear and well-organized writing, recognized by all reviewers, as well as its comprehensive experimental evaluation and practical insights. The main weakness of this paper is its limited novelty, as it primarily combines existing methods without sufficient theoretical validation. \\n\\nThis paper received borderline reviews leaning towards negative (one 6 with a confidence of 3, and two 5s with a confidence of 4). During the AC-reviewer discussion after the rebuttal, Reviewer wDnx adopted a neutral stance, Reviewer jiaU maintained a negative position, and Reviewer 5t8M did not respond to the discussion and kept the original score of 5. 
Therefore, this paper is decided to be rejected.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer wDnx asked for more motivation and additional experimental results. The authors provided explanations and references, leading to wDnx increasing their score from negative to positive. However, the other two reviewers did not change their scores during the rebuttal, citing the lack of novelty as the main issue.\"}", "{\"comment\": \"Thank you for your response to our rebuttal and for raising your score.\\n\\nFollowing your suggestion, we have further added experiments in the Appendix (F) comparing easy, moderate, and hard pruning, along with the corresponding insights. Additionally, we have addressed the reference formatting issue\\u2014thank you for bringing it to our attention.\\n\\nPlease note that we have uploaded a revised version of the paper, with the added text highlighted in blue.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Regarding paper's novelty\", \"comment\": \"Thank you for the valuable and constructive comments.\\n\\nAs the reviewer noted, knowledge distillation (KD) is a well-known technique for enhancing a model\\u2019s performance. 
However, to the best of knowledge, this work is the first to investigate the application of KD in the context of dataset pruning where the teacher is trained on the entire dataset, and the student is trained on a smaller pruned subset (we highlight the paper's scenario in the figure attached in the link below).\\n\\n* As demonstrated in the paper, this setup reveals several novel and valuable findings, including:\\n (1) Simple random pruning outperforms more sophisticated pruning algorithms when KD is applied.\\n (2) Distilling knowledge to a student trained on pruned data exacerbates the capacity gap problem.\\n (3) The selection of the KD weight, which balances the cross-entropy loss and the KD loss, significantly impacts performance in the specific scenario of dataset pruning.\\n\\n\\n\\n* Please note that reference [a] addresses *dataset distillation*, which focuses on generating *synthetic data* to represent the original dataset (specifically, the work in [a] aims to enhance the realism and diversity of the synthetic samples generated). In contrast, dataset pruning seeks to select the most informative samples from the original dataset. This distinction is also highlighted in our related works section (lines 152\\u2013161).\\n\\n\\n**References**\\n\\n[a] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm, Sun et al., CVPR 2024.\\n\\n[Figure_scheme](https://i.imgur.com/WcUv4Xw.png)\"}", "{\"title\": \"Regarding paper\\u2019s novelty\", \"comment\": \"We thank the reviewer for the valuable comments and suggestions.\\n\\nWhile knowledge distillation (KD) is a well-established technique commonly used to enhance model performance, to the best of our knowledge, this work is the first to explore the application of KD in the context of dataset pruning, where the teacher is trained on the entire dataset while the student is trained on a smaller pruned subset. 
We highlight the paper's scenario in the Figure attached in the link below.\\n\\nAs demonstrated in the paper, this scenario reveals several novel and valuable insights. For instance, we observe that simple random pruning outperforms more sophisticated pruning algorithms when KD is applied. Additionally, as noted by the reviewer, we highlight how distilling knowledge to a student trained on pruned data exacerbates the capacity gap problem. These findings provide new perspectives on the interplay between KD and data pruning.\\n\\n[Figure_scheme](https://i.imgur.com/WcUv4Xw.png)\"}
However, based on our experience with large datasets, a practical approach could involve performing a coarse search for hyper-parameters on a small data subset (dramatically reducing the search space) followed by a fine-grained search on the full dataset to identify the final optimal hyper-parameters (this process can also be iterative: starting with a very coarse hyper-parameter search using a small data subset, refining the search space based on these results, and then repeating the process with progressively larger subsets until the full dataset is used).\\n\\nAnother related application of dataset pruning is **neural architecture search (NAS)** (see for example in [1, 3, 4, 5]). These works aim at reducing the search time by performing training on a small subset of the data in each step through the bi-level optimization (as used for example in DARTS). \\n\\nIf we may add, **active learning** is another highly relevant application where data pruning can make a significant contribution (e.g., [6, 7, 8]).\\n\\n\\n\\n**References**:\\n\\n[1] Shuo Yang et al. Dataset Pruning: Reducing Training Data by Examining Generalization Influence, ICLR 2023\\n\\n[2] Yihua Zhang et al. Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning, NeurIPS 2023\\n\\n[3] Xiyang Dai et al. DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search, 2020\\n\\n[4] Chongjun Tu et al. Efficient Architecture Search via Bi-level Data Pruning, 2023\\n\\n[5] Vishak Prasad C et al. Speeding up NAS with Adaptive Subset Selection, 2022\\n\\n[6] Ravi S Raju, Accelerating Deep Learning with Dynamic Data Pruning, 2021\\n\\n[7] Abdul Hameed Azeemi, Language Model-Driven Data Pruning Enables Efficient Active Learning, 2024\\n\\n[8] Yichen Xie et al. 
Active Finetuning: Exploiting Annotation Budget in the Pretraining-Finetuning Paradigm, 2023\"}", "{\"comment\": \"I thank the authors for the prompt response, which partially addresses my doubts concerning the significance of this paper. I'm willing to raise my score to positive.\"}", "{\"comment\": \"Thanks for the responses and for conducting new experiments with respect to $f_t$. Through the new results and the original experiments in the manuscript, the authors conclude that a teacher model weak in architecture but strong in training data is the most suitable one. The authors have validated the latter but fail to provide insights for the former, which makes this conclusion quite confusing to me.\\n\\nAlso, it seems that similar conclusions have been explored in recent works like [a].\\n\\n[a] A Label is Worth A Thousand Images in Dataset Distillation, Qin et al., NeurIPS 2024.\"}", "{\"comment\": \"Thank the authors for the response.\\n\\nHowever, I still have some doubts regarding the scenario described. For instance, if a large portion of data is removed from the dataset while some new data is added, then:\\n\\n1. The new dataset is no longer a subset of the original dataset, so the conditions outlined in the paper may not apply.\\n\\n2. While the pre-trained model can still be used, why can't we directly fine-tune the pre-trained model on the new dataset? Training a new model with the same architecture from scratch on the new dataset seems inefficient and time-consuming.\\n\\nThe other example of hyperparameter tuning is interesting and seems useful. However, I am curious about how the authors could prove, or potentially explain, that a model's performance using a specific set of hyperparameters on a small portion of the dataset aligns with its performance on the full dataset. 
In other words, how can we ensure that a model achieving better performance with hyperparameter set A on a subset will also achieve better performance with the same hyperparameters on the entire dataset?\\n\\nIn fact, based on my past empirical results of training ViT models on ImageNet-1K, the searched hyperparameters that yield the best performance on a subset of the dataset are often not the optimal ones for training the same model on the full dataset.\\n\\nI'm hoping to hear from the authors.\"}", "{\"comment\": \"We sincerely appreciate your time and effort in reviewing our work. In our previous rebuttal, we have made every effort to appropriately address your concerns. Specifically, based on your valuable feedback, we conducted the suggested experiments to analyze the impact of $f_t$ on the student\\u2019s accuracy. These results have been incorporated into the main paper (see Section 4.4, [Add_experiments_impact_of_ft](https://i.imgur.com/QhBf9lh.png))\\n\\nAs the deadline for submitting a revised manuscript approaches, we kindly request your feedback on our rebuttal. We are happy to address any further questions or concerns during the remaining discussion period.\"}", "{\"summary\": \"This paper proposes to apply knowledge distillation techniques when a model is trained on a pruned dataset. The authors provide theoretical analysis stating that error distilling from a teacher model trained with full data will be smaller than that from a teacher trained with pruned data. Experiments on CIFAR, SVHN, and ImageNet demonstrate that applying distillation can largely enhance the performance.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The proposed solution is simple but effective. Merely applying dataset distillation can result in a lot of benefit on performance.\\n2. The writing is logical and clear. I am able to follow the method descriptions, experiments, and theoretical validation.\", \"weaknesses\": \"1. 
Although the solution is indeed simple yet effective, I do not find it surprising. After all, it is a well-known trick to apply knowledge distillation to enhance the performance, especially in the industrial community when there are insufficient data. In the academic community, there are indeed works putting forward similar insights, like [a], which also focuses on data pruning. The difference is only on sample-wise pruning in this paper and patch-wise pruning in [a].\\n2. It seems that the experiments are not closely aligned with the theoretical analysis. Theorem 1 would like to convey that, error distilling from a teacher model trained with full data will be smaller than that from a teacher trained with pruned data. Therefore, I expect the experiments would try to validate this point by changing $f_t$. However, almost all the experiments currently are conducted with respect to $f$.\\n3. Following 2, according to the proof of Theorem in the appendix, the error would monotonically decrease with the increasing of $f_t$, the data amount used for training teacher models. It seems that it cannot support the experimental finding that using a teacher model with limited capacity is better. I understand that $f_t$ indicates the data portion, which may be different from the perspective of model architecture used in experiments for various capacity. Anyway, more experimental validation with respect to $f_t$ is necessary here to support Theorem 1.\\n\\n[a] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm, Sun et al., CVPR 2024.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Accompanied by a theoretical motivation, the major goal of the paper is to incorporate knowledge distillation to boost the model trained on the pruned dataset. 
With experiments conducted in image classification, the authors make some observations regarding, e.g., the connection between the pruning factor and the KD weight.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well-organized and easy to follow.\\n2.\\tSome empirical observations are intriguing, e.g., distilling with pruned data will exacerbate the gap problem [1], which may provide some insights for further research regarding data pruning and knowledge distillation.\\n3.\\t[1] Improved Knowledge Distillation via Teacher Assistant, 2019, AAAI.\", \"weaknesses\": \"1.It seems the paper is a simple combination of two well-established techniques, and KD has been successfully utilized to boost the model performance. In this way, the overall novelty and contribution are limited. Do the authors have deeper insights regarding the interplay between KD and data pruning, e.g., pruning certain data leads to a better distillation performance (such as [1]), or if can KD be leveraged to identify important samples in data pruning.\\n[1] Teach Less, Learn More: On the Undistillable Classes in Knowledge Distillation, 2022, NeurIPS.\\n2.The evaluated KD and data pruning methods are rather outdated. It's necessary to include the latest methods, e.g., [2][3][4].\\n[2] Decoupled Knowledge Distillation, 2022, CVPR.\\n[3] Knowledge Distillation from A Stronger Teacher, 2022, NeurIPS. 
\\n[4] Data Pruning via Moving-one-Sample-Out, 2023, NeurIPS.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Regarding the question on Theorem 1\", \"comment\": \"Using the same setting from [1] in Theorem 1, we assume that knowledge distillation (specifically self-distillation) improves the performance compared to vanilla training (where no teacher is used to guide the student model), as shown by [1]. Consequently, Theorem 1 states that training a student on a pruned dataset with a teacher trained on the entire dataset (i.e., ft\\u200b=1) results in a lower error bias compared to training directly on the pruned dataset without knowledge distillation (the regular data-pruning strategy). This conclusion follows from the inequality:\\n\\n$$E[e(\\\\alpha, f, f_t)]^2 \\\\leq E[e(\\\\alpha, f, f)]^2 \\\\leq E[e(0, f)]^2$$\\n\\nwhere the right-most term indicates vanilla training (without knowledge distillation)\\nIn summary, we already know that self-distillation improves results compared to vanilla training [1]. Our theoretical analysis further demonstrates that employing a teacher trained on a larger dataset (e.g., the entire dataset) leads to additional performance gains.\\nWe will definitely add this clarification to the paper to better explain the motivation.\\n\\n\\nFollowing the reviewer\\u2019s suggestion, we are conducting experiments to evaluate the effect of $f_t$\\n (the data fraction used to train the teacher) on the student's accuracy. We plan to share the results of this analysis soon.\\n\\n\\n**References**\\n\\n[1] Rudrajit Das and Sujay Sanghavi, Understanding Self-Distillation in the Presence of Label Noise, ICML 2023\"}", "{\"title\": \"Response regarding the motivation\", \"comment\": \"We thank the reviewer for the constructive feedback. 
We will try to address the reviewer's concerns.\\n\\n1. Regarding your question about the motivation of using KD with a teacher that was already trained using the entire data: \\\"why it is necessary to train the same model from scratch on a pruned dataset?\\\". First, please note that this question may relate to the topic of data pruning in general since typical algorithms for data pruning first train a model (and sometimes even multiple models) on the entire data for obtaining scores for the samples and prune the lowest scores accordingly. However, the question is certainly valid and we will make sure to clarify it in the paper. Specifically, there are some cases where one wishes to re-train a model but cannot use the entire data for some reason. One practical example (that we have mentioned in the paper in section 3.1, line 252-255) is that the entire dataset is not available anymore: for example, due to privacy issues, a large percentage of the data may be completely removed from the dataset, while a few more samples are added. Thus, we would like to re-train on the few samples while preserving the knowledge captured by the teacher that was trained on the entire (previous) dataset. This example is related to continual learning. Note that in the paper we show for the first time that using simple random pruning, we can achieve superior performance for all the pruning factors when incorporating KD in the loss. This suggests a highly practical application for cases where portions of the data are gradually removed over time. Another example is HPO - one would like to run a large number of experiments with different hyper-parameters. Instead of running on the entire data, which may require a large computational effort, one can run with only a small subset of the data (e.g. 10%) while using not only the original labels but also the pseudo-labels obtained by the teacher. We will be happy to hear your opinion on these two examples. \\n\\n2. 
Regarding experiments with adaptive KD weights, please note that our experiments were not limited to the CIFAR-100 dataset. In the appendix, we have provided additional experiments on other datasets. Specifically, we observed and presented a similar trend on both CIFAR-100 and SVHN, demonstrating how setting the KD weight affects accuracy.\"}", "{\"comment\": \"Additionally, we wish to emphasize that while the recent work in [a] is somewhat relevant to ours, their work focuses on dataset distillation, which is a fundamentally distinct field of study from ours (dataset pruning). In their work they show that one simple way to achieve dataset distillation is by randomly selecting a subset of real (i.e., not synthetic) samples, and then soft-labeling them. While this baseline is somewhat similar to our random pruning + KD configuration, they discuss and compare this baseline strictly in the context of dataset distillation. On the other hand, our work focuses on the dynamics of data pruning with KD, and provides a variety of interesting observations in this context. As part of our exploration, one of the things we highlight is the effectiveness of utilizing KD with random pruning by comparing it to a variety of sophisticated data pruning methods. Additionally, we go even further and provide a theoretical motivation for our intriguing observation that employing self-distillation can improve training on pruned data. However, we thank the reviewer for their keen observation, and promise to better address these distinctions in the final version of the paper.\\n\\n\\n**Reference**\\n\\n[a] A Label is Worth A Thousand Images in Dataset Distillation, Qin et al., NeurIPS 2024.\"}", "{\"comment\": \"Dear Reviewer, thank you once again for your time and valuable feedback.\\n\\nWe hope our responses have addressed your concerns. 
Based on your suggestions, we have revised the paper and uploaded the updated version, with the new additions highlighted in blue.\\n\\n\\nWe would greatly appreciate it if you could revisit your evaluation and consider raising your score.\"}", "{\"title\": \"Additional experiments\", \"comment\": \"Following the reviewer\\u2019s comments, we have run the suggested experiments exploring the impact of f_t on the student\\u2019s accuracy. The results highlight two key findings: (1) increasing f_t consistently enhances accuracy beyond SD; (2) in every scenario, SD surpasses standard training without KD. These observations align with the theoretical insights discussed in the theoretical section.\\n\\nWe have incorporated these experiments into the main paper (see section 4.4). \\nPlease note that we have uploaded a revised version of the paper, with the added text highlighted in blue.\\n\\nYou can also find the results of the additional experiments in the attached photo link.\\n\\n[Add_experiments_impact_of_ft](https://i.imgur.com/QhBf9lh.png)\\n\\n&nbsp; \\n&nbsp; \\n\\nPlease kindly let us know if you have any follow-up questions or areas needing further clarification.\"}", "{\"comment\": \"Thanks for the response. After going through the other reviewers\\u2019 comments and corresponding responses, I decide to keep my original recommendation of 5. I give credit to some intriguing observations in this paper. However, the absence of deeper insights regarding the interplay between KD and data pruning weakens the overall novelty and contributions. Besides, the suggested experiments are not provided.\"}" ] }
9ca9eHNrdH
Sparse Autoencoders Do Not Find Canonical Units of Analysis
[ "Patrick Leask", "Bart Bussmann", "Michael T Pearce", "Joseph Isaac Bloom", "Curt Tigges", "Noura Al Moubayed", "Lee Sharkey", "Neel Nanda" ]
A common goal of mechanistic interpretability is to decompose the activations of neural networks into features: interpretable properties of the input computed by the model. Sparse autoencoders (SAEs) are a popular method for finding these features in LLMs, and it has been postulated that they can be used to find a canonical set of units: a unique and complete list of atomic features. We cast doubt on this belief using two novel techniques: SAE stitching to show they are incomplete, and meta-SAEs to show they are not atomic. SAE stitching involves inserting or swapping latents from a larger SAE into a smaller one. Latents from the larger SAE can be divided into two categories: novel latents, which improve performance when added to the smaller SAE, indicating they capture novel information, and reconstruction latents, which can replace corresponding latents in the smaller SAE that have similar behavior. The existence of novel features indicates incompleteness of smaller SAEs. Using meta-SAEs - SAEs trained on the decoder matrix of another SAE - we find that latents in SAEs often decompose into combinations of latents from a smaller SAE, showing that larger SAE latents are not atomic. The resulting decompositions are often interpretable; e.g. a latent representing "Einstein" decomposes into "scientist", "Germany", and "famous person". To train meta-SAEs we introduce BatchTopK SAEs, an improved variant of the popular TopK SAE method, that only enforces a fixed average sparsity. Even if SAEs do not find canonical units of analysis, they may still be useful tools. We suggest that future research should either pursue different approaches for identifying such units, or pragmatically choose the SAE size suited to their task. We provide an interactive dashboard to explore meta-SAEs: https://metasaes.streamlit.app/
[ "sparse autoencoders", "mechanistic interpretability" ]
Accept (Poster)
https://openreview.net/pdf?id=9ca9eHNrdH
https://openreview.net/forum?id=9ca9eHNrdH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z8MpI8OaUE", "wRbrFtxcOH", "w4InaxPumN", "tVbi5J7Qvf", "tNl6CEA9or", "oaOAPow323", "knDQtxT6nq", "kf5tifXzFU", "jLrHqGut9d", "fkNmzVURvt", "cxaO4d7Ut6", "cCOu6s1CUu", "a7KS3in6H7", "a5UGXyj6U8", "XmqvYhE5Ze", "XDRj6rLWGj", "OtXVqMTSMj", "Ngu8EwnlYX", "MYzT7PFvQe", "M4wGcVdrYR", "LiBKLM5IQS", "LSQyX1ke5s", "JgQh9dNYuc", "FZYO4qZukt", "EWOEFyVpLz", "BHJtlCwuvo", "AdrgCkqyra", "9ADWGqMG7z", "5N34xmFYrg", "4e8ppGmfcZ", "3UExQd5dBL", "0kCSvbWmd5" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732116971027, 1732635791756, 1732799425537, 1732117634448, 1732530392550, 1732278514875, 1732555763349, 1732790429851, 1732118216872, 1737523861661, 1732582381294, 1732790176268, 1732555372899, 1732117705122, 1732118032995, 1732769883172, 1732544713618, 1730389550656, 1732635818724, 1732710917510, 1732117497681, 1730594196724, 1734852179974, 1732794678003, 1732530447808, 1732555825106, 1732682533613, 1732118015221, 1732569257364, 1732540359428, 1730381252853, 1730566545679 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_9gei" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_HcSs" ], [ 
"ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_9gei" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_9gei" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_9gei" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_HcSs" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_pzvM" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_jKs2" ], [ "ICLR.cc/2025/Conference/Submission7770/Area_Chair_CsmQ" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_pzvM" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_jKs2" ], [ "ICLR.cc/2025/Conference/Submission7770/Authors" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_pzvM" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_pzvM" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_9gei" ], [ "ICLR.cc/2025/Conference/Submission7770/Reviewer_pzvM" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer jKs2\", \"comment\": \"Thank you for your positive and insightful feedback, in particular for recognizing that our work \\u201caddresses an important question about SAE features\\u201d and that \\u201cthe presentation in the paper is clear and the experiments are original and well-executed\\u201d. 
We address your main points and questions below:\\n\\n> I could not find reported values of the reconstruction loss that meta-SAEs obtain in reconstructing the base 49k-latent SAE latents. If they are only a very weak approximation, what would that say about the hypothesis that large-SAE latents are linear combinations of more atomic latents?\\n\\nThanks for bringing up this point. The meta-SAE with 2304 meta-latents (and an average L0 of 4) explains 55.47% of the variance of the 49k latents in the SAE. We have included this information in Section 5 of the updated manuscript. Although this only indicates partial linear decomposability, it still provides concrete evidence against the hypothesis that SAEs converge to atomic units. With meta-SAEs, as with SAEs in general, we think the most important result is that any kind of decomposition is possible, not that it is perfect. The fraction of activation variance unexplained for our SAEs ranges from 50% to 10%, in comparison with 45% to 10% for our meta-SAEs, depending on choice of L0 and number of (meta-)latents.\\n\\n> Some minor grammatical and presentation issues: \\\"vertexes\\\" -> vertices, the left quotation marks in the meta-SAE section should be fixed, etc.\\n\\nThank you for pointing these out, they have been fixed in the updated version.\\n\\n> It's interesting that the reconstruction error falls as much as it does when the reconstruction latents are stitched in.\\n\\nThis is a good observation. The reason that the reconstruction tends to fall more steeply during reconstruction latent stitching vs. novel latent stitching has to do with the order in which we do these. If we swap reconstruction latents before adding in the novel latents, the reconstruction MSE tends to go up a little bit when swapping the reconstruction latents (see Appendix A.7 Figure 17 for the individual effect of swapping reconstruction latents). 
The steeper fall in MSE during reconstruction latent stitching in Figure 5 occurs because we've already added in the novel latents, which are optimized to work best in combination with the other latents of the larger SAE.\\n\\n> I wonder if a feature-manifold explanation might also be worth considering here (Engels et al. 2024), where reconstruction features are more densely covering a feature manifold that corresponding latents in the small-SAE are more coarsely covering.\\n\\nYour hypothesis about the feature manifold explanation from Engels et al. 2024 is interesting. It aligns with our finding that sometimes multiple reconstruction latents have high cosine similarity with multiple small-SAE latents. This could mean that the small SAE makes a lower-dimensional approximation of a higher-dimensional feature manifold that the large SAE represents. We would be excited to see future work in this direction.\"}", "{\"comment\": \"Thanks! We\\u2019re glad you find the effect of dictionary size on MSE in our stitching experiments is clear now.\\n\\n> However, figure 5 says \\\"every insertion or switch results in a strict improvement in reconstruction (MSE)\\\". This doesn't make sense to me. A switch (which by definition therefore maintains the dictionary size) cannot cause a decrease in an optimal MSE.\\n\\nThank you for raising the point about \\u201cevery switch\\u201d resulting in a \\u201cstrict improvement\\u201d. This was a mistake, and we should have said that on average swaps result in an improvement of reconstruction as is shown in Figure 5. We have updated the caption and text to be more precise. However, in stitching we switch groups of latents between the SAEs where the group of latents from the larger SAE is generally larger than the group of latents from the smaller SAE, see Section 4.2 and examples provided in Appendix A.5. 
We have modified the caption of Figure 5 to clarify the many-to-many nature of the switches.\\n\\n> Also, the argument that larger dictionary sizes tend to have lower MSEs is specious. This is only (ultimately) guaranteed when the bias b1 of the larger SAE1 is used as discussed above. However, since the bias of the smaller SAE0 is used in the \\\"stitching\\\" method, there is no guarantee of improvement or even expectation of improvement. I feel \\\"stitching\\\" is comparing two unrelated entities and has no obvious significance. This is why I was so curious why you kept the bias b0. The asymmetry of the approach renders comparison of features problematic and is at the heart of my misgivings.\\n\\nThe bias terms of the SAEs of different sizes are very similar, and we have previously updated the text of Section 4.1 to clarify this. However, we understand that the cosine similarity of 0.997 indicates some degree of misalignment, and it\\u2019s unclear how this might affect stitching experiments. We demonstrate this in Section A.7 Figure 20 by evaluating SAEs of different sizes with their bias terms switched, and observing that this has negligible impact on the performance of the SAEs. As such, the specific SAE from which the bias term in our stitching experiments is sourced will not affect the results of our stitching experiments.\\n\\nAs to the concern that stitching is comparing unrelated entities, our two SAEs are two neural networks of the same architecture, trained by the same training procedure, on the same dataset, and differentiated only by their number of latents. We would argue that these are related entities. Comparing even more dissimilar representations with model stitching is common in the literature, e.g. Bansal et al., in which model stitching between neural network model layers is used to compare the representations of different models. \\n\\nBansal, Yamini, Preetum Nakkiran, and Boaz Barak. 
\\\"Revisiting model stitching to compare neural representations.\\\" Advances in neural information processing systems 34 (2021): 225-236.\"}", "{\"title\": \"Thank you very much for the resolving the concern\", \"comment\": \"Thank you very much for trying out the best to resolve the concern. It was very important to me that the experiment is being done on the benchmark that are available for all reviewers (including myself) to see. I am raising the score. Please notify to all reviewers that this modification on your manuscript has been done (including the github link).\"}", "{\"title\": \"Response 2 to Reviewer pzvM\", \"comment\": \"> Isn't it clear that (local optima aside) a larger SAE will always find features that are missed by smaller SAE? I'm not sure I follow the argument from line 355 onwards.\\n\\nWhile it might seem intuitive that larger SAEs would always find novel features, it wasn't clear whether additional capacity would be used to find new features versus just representing existing features more sparsely. In Figure 2, we provide the \\u201cblue square\\u201d example of a small SAE that has learned the 6 ground truth features in a dataset, and a large SAE with 9 latents that has learned compositions of those features to further reduce sparsity. Our results empirically demonstrate both effects occur.\\n\\n> Please add more clarity around treating the latents W_i^{dec} (why is W in boldface)? W_i^{dec} is a scalar quantity, the ith component of the d-dimensional vector W^{dec}. What does it mean to take scalars \\\"training data for our meta-SAE\\\"? The directions W don't convey information about which directions are simultaneously activated. I would have thought it would make more sense to treat the feature vectors f(x) as the entities for learning a meta SAE.\\n\\nEach decoder direction W_i^{dec} is actually a vector, not a scalar. W^{dec} is a matrix, and we are taking the i'th column, the decoder vector for the i'th SAE latent. 
The meta-SAE learns to reconstruct these vectors. We have added further clarification of the shapes of these entities in Section 2. \\n\\n> The BatchTopK function is not fully defined. In line 465 it states that the function selects the top K activations, suggesting that this is a mapping from activations to indices. I suspect that the authors mean that the function should return zero for all activations that are not in the top K highest positive values, and is the identity function otherwise.\\n\\nYes, your interpretation is correct - BatchTopK zeroes out all activations except the top K highest positive values, for which it acts as the identity function. We've added this explicit definition to Section 6.\\n\\n> The introduced batch method is used only during training, with a different non-linearity used during \\\"inference\\\". This seems quite strange and it's not clear to me how to justify this. A potential issue not mentioned is that it's quite possible that the batch approach means that some input sequences will have entirely zero activation.\\n\\nDuring training, we estimate the threshold on single random batches for efficiency purposes. However, this means that the threshold varies depending on the samples in the batch, resulting in dependencies between samples in the batch. During inference, in order to break this dependency, we use a single threshold estimated over many training batches. This also lets us do inference on small batches (e.g. a single prompt), where an estimate for the threshold would be very noisy. Our experiments show that this works well. Using different functions during training/inference is common, for example in dropout and batch normalization. If all activations are zero, the reconstruction is equal to the decoder bias, but we have not seen this in practice (see Figure 9). 
This is the same for ReLU and JumpReLU SAEs, but not for TopK SAEs.\\n\\n> It's hardly surprising that the BatchTopK SAE has lower MSE than the TopK SAE since the BatchTopK SAE imposes fewer constraints on the objective. I'm not sure why this would be seen as \\\"outperformance\\\".\\n\\nThe goal of papers introducing new SAE training methods (Gao et al. 2024, Rajamanoharan et al. 2024, etc.) is to find a method that is better at finding sparse reconstructions, to act as a useful tool for researchers. The standard evaluation is to show better reconstruction performance at similar sparsity levels and number of latents. Our contribution with BatchTopK is to provide a novel method with better performance. We agree that, once thought of, it's unsurprising that BatchTopK is better. Though, ReLU SAEs also impose fewer constraints on the objective than top-k SAEs, but attain worse reconstruction performance (Gao et al., 2024).\\n\\n> I'm also unclear as to why BatchTopK SAE is being discussed. Is this method used in all the previous experiments in the paper, or is this a separate piece of work orthogonal to the other contributions of the paper?\\n\\nBatchTopK was developed specifically to enable training meta-SAEs with very low sparsity (4 active latents per input on average), which existing methods struggled with. It is only used for the meta-SAE experiments and to compare it to existing methods, not in the earlier stitching experiments. \\n\\n> Figure 11 isn't well explained. Please explain what is being shown here.\\n\\nFigure 11 (now Figure 12) shows paired examples of latents from GPT2-768 and GPT2-1536 that have high cosine similarity (0.99). For each latent, we show their top activating inputs and the logits they influence most strongly. This demonstrates that similar latents in different sized SAEs capture similar semantic features. 
We've expanded the text in Appendix A.4 to clarify the figure.\"}", "{\"comment\": \"Before this phase of the discussion period ends, we wanted to check in with the reviewer on whether we have addressed your concerns with our work?\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for your response! I believe that this paper makes a positive contribution to our understanding of SAEs, which could ultimately inspire new research directions to address the main limitations identified (i.e. SAEs not identifying canonical units of analysis and the challenge of determining the appropriate dictionary size for a given task). I also appreciate the improvements made in the revised version - it is now easier to read, more self-contained thanks to the glossary, and the experimental section is more comprehensive. I am maintaining my rating, as I believe this is a good paper worth sharing with the community.\"}", "{\"comment\": \"Thank you for your response!\\n\\n> I still don't understand figure 5 and I feel this needs better explanation. The x-axis suggests that the number of features is always increasing, yet the text suggests that features are also swapped (which means a non increase)? For an optimally trained SAE, by definition, using any other features must increase the reconstruction error. I therefore don't understand why say swapping a feature from a small SAE with one from a larger SAE would bring any useful information -- by definition it has to increase the reconstruction error (unless the training got trapped in a suboptimal local optimum). I feel like I'm missing something important.\\n\\nAs you say, it is a priori unclear why introducing a latent from a larger SAE into a smaller SAE would improve the reconstruction of that SAE. In particular, prior to our research, it was not known that larger SAEs have entirely novel latents, with respect to a smaller SAE, that do not interfere with the existing latents. 
\\n\\nThe reason that reconstruction can improve when modifying a close-to-optimal solution is as follows. While the smaller SAE achieves good performance within its fixed dictionary size, the performance is constrained by its size. Larger SAEs tend to have better reconstruction, all other things being the same. Adding latents from the larger SAE effectively relaxes this dictionary size constraint:\\n\\n* Novel latents add previously uncaptured information that the smaller SAE simply didn't have capacity to represent\\n* During swapping, each swap typically replaces N SAE latents from the smaller SAE with M \\u2265 N latents from the larger SAE. This larger SAE achieves better reconstruction by having more features available to represent similar information\\n\\nSo while modifying a solution may worsen performance if we maintained the same dictionary size, here we're actually relaxing the dictionary size constraint. This explains why reconstruction continues improving as we incorporate more latents from the larger SAE's more expressive dictionary.\\n\\nThe above argument does not imply that any intervention that increases dictionary size can improve reconstruction, just that it is possible. We find that inserting novel latents improves reconstruction while inserting reconstruction latents worsens it. This difference in effects gives us evidence for there being a difference between the two categories of latent. 
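To make the relaxed-dictionary argument above concrete, here is a deliberately simplified toy sketch. The assumptions are illustrative only and not our experimental setup: the ground-truth features are standard basis directions, and the encoder is an oracle projection rather than a trained SAE encoder. A "small SAE" whose dictionary contains four of six ground-truth directions has irreducible error on the two missing directions; stitching in those two novel latents removes it:

```python
# Toy sketch (hypothetical setup): 6 ground-truth features = the
# standard basis of R^6. The "small SAE" dictionary holds only
# features 0-3; features 4-5 play the role of novel latents
# contributed by a larger SAE.

data = [
    [0.3, 1.0, 0.0, 0.5, 0.7, 0.2],
    [0.9, 0.1, 0.4, 0.0, 0.3, 0.8],
    [0.2, 0.6, 0.5, 0.7, 0.0, 0.4],
]

def mse(points, kept_dims):
    # Oracle reconstruction keeps the components along the available
    # decoder directions; the error is the mass on the missing ones.
    err = sum(x[j] ** 2 for x in points for j in range(6) if j not in kept_dims)
    return err / (len(points) * 6)

mse_small = mse(data, {0, 1, 2, 3})            # small SAE alone
mse_stitched = mse(data, {0, 1, 2, 3, 4, 5})   # after inserting the 2 novel latents
# Inserting the novel latents lowers the reconstruction error, whereas
# adding a direction already in the span (a reconstruction latent)
# could not help under this oracle encoder.
```

Our actual experiments of course use trained SAEs on model activations, where swapping reconstruction latents also changes L0; the sketch only isolates why relaxing the dictionary-size constraint can lower MSE.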
\\n\\n> For example there is still no explanation early on as to how to associate features with linguistic concepts (which is potentially problematic in itself).\\n\\nThanks for noting this, we have added the following sentence to the related work section:\\n\\n\\u201cAfter training, researchers often interpret the meaning of SAE latents by examining the dataset examples on which they are active, either through manual inspection using features dashboards (Bricken et al., 2023) or automated interpretability techniques (Gao et al., 2024).\\u201d\\n\\n> Overall, I like the message of the paper, namely that SAEs don't find atomic linguistic units. However, this isn't surprising to me (why should they? -- an LLM is not trained to respect the SAE structure and represent atomic linguistic units).\\n\\nWe appreciate your enthusiasm for the message of the paper. While there's no a priori reason to expect SAEs to find atomic linguistic units, we believe this assumption has become prevalent in the interpretability community and is popular enough for rigorous evaluation to be of value. SAEs have generated tremendous recent interest, with numerous high-profile papers from top industry and academic labs [1,2,3,4,5,6], media coverage [7, 8], and new startups [9,10]. The influential \\\"Towards Monosemanticity\\\" [1] put forward a vision of SAEs uncovering canonical linguistic features that has since become widespread. Although not all researchers may have believed this initial framing, we still think that explicitly refuting such ideas is valuable and important work. We also believe that the concrete empirical findings we provide through refuting it will advance the community's collective understanding of this popular tool.\"}", "{\"comment\": \"We understand your concern, and really appreciate your commitment to conducting a fair review of our work. We haven\\u2019t managed to find other examples of this, as it seems quite hard to search for. 
In order to help you validate the reference, we have been granted permission to unofficially publish the paper and code on Github. We hope that this is an acceptable temporary solution for you in conducting your review, and we apologise for the additional complexity this has created for you. We have updated the citation to reflect this, and the paper and code are available here: https://github.com/anonymous664422/sae_bench\"}", "{\"title\": \"Response to Reviewer 9gei\", \"comment\": \"Thank you for your thoughtful review and helpful suggestions. In particular, we were happy with your assessment that our work \\u201cprovides thorough and solid experiments\\u201d and \\u201canswers an important question\\u201d. We appreciate your feedback on the positioning of the paper within the literature, and have updated the manuscript to be more precise about the modality of the experiments, and added experiments to clarify the relevance of our results to interpretability. We address your main points and questions below. If you have further comments or questions we are keen to respond to them, and otherwise appreciate an increase in your support for our paper.\\n\\n> In the introduction and in the abstract, the paper is presented as if the research topic pertains to an analysis of a decomposition of the activations of a \\\"general\\\" neural network into features, while in fact the SAE introduced by [Bricken et al] and [Cunningham et al] is a probing tool for a specific 'organism' that is LLM.\\n\\nSAEs have been used on a range of modalities, including image and audio models. However, you are correct that our work specifically advances the understanding of SAEs as interpretability tools for LLMs. 
We have updated the abstract and introduction to specify this.\\n\\n> The research does not relate the claimed novelty to the SAE's probing \\\"capability\\\"\\n\\nPlease see the top-level comment.\\n\\n> Figure 4, it is said that \\\"Features with cosine similarity less than 0.7 tend to improve MSE\\\". Do you mean 0.6 or less? In the similar regard, this tendency is much less clear in Figure 18.\\n\\nWe found that different model architectures and training regimes lead to different optimal thresholds (0.7 for GPT-2, 0.4 for Gemma). By \\\"work well\\\" we mean that using these thresholds allows qualitatively similar results (such as the interpolation between SAE sizes), though the exact patterns differ between models. We have updated the text to clarify this. Note that by changing the threshold, we can adapt the trade-off between false positives and false negatives, as shown in the ROC plot in Figure 16.\\n\\n> What are \\\"active latents\\\"?\\n\\nWe have added a definition to the Glossary of Terms (Appendix A.1) - these are latents with non-zero activation values after applying the sparsity-inducing activation function.\\n\\n> Would you please clarify \\\"on what SAEs\\\" the MetaSAE were applied in these experiments? For example, from SAE of what size \\\"the Meta SAE with 2304 latents\\\" was obtained?\\n\\nThe meta-SAE with 2304 latents was trained on the decoder directions of the GPT-2 SAE with 49152 latents. These experimental details can be found in line 403 in the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you very much for the followup!\", \"comment\": \"Thank you for addressing my concern, I am more comfortable with the claim now.\\nWhile I am positive about the score however, there seems to be one last bit; I am worried about the citation of [Upcoming Work] that are being referred either as an evaluation method in the said appendix or as a benchmark in the main manuscript. 
Would you please clarify what is [Upcoming Work] ?\"}", "{\"comment\": \"Thank you for your continued engagement in helping us improve this paper!\\n\\n\\n> I also disagree with the sentence \\\"This allows us to smoothly interpolate between SAEs of different sizes in terms of dictionary size, sparsity, and reconstruction performance\\\". It's not correct that you're \\\"interpolating\\\" between SAEs. Again, the asymmetry in bias choice means that you cannot interpolate between SAE0 and SAE1.\\n\\nThanks for pointing this out. We have updated our methodology and Figure 5 so that when we interpolate between two SAEs, we now also interpolate between their decoder biases, taking a weighted mean in which each bias is weighted proportionally to the number of latents from that SAE included in the stitched SAE. Given that we start with SAE0, end with SAE1, and the points in between are a mix of the two (in terms of latents, biases, reconstruction performance, and L0), we believe the term interpolation is now warranted. Because the biases are so similar, this makes no discernible difference to our results, but it makes our methodology robust to settings where the biases differ more.\\n\\n> I still don't see any substantial improvement of the presentation of figure 5. There are 4 phases in the figure and I cannot find a full description of all of these phases nor why they are chosen and in what order. Why does L0 go up and then down in each of the 4 phases?\\n\\nWe have clarified this information in the text introducing Figure 5. There are 4 phases, as we interpolate between five SAEs (size 768 -> 1536, size 1536 -> 3072, etc.). Every colored \\u2018phase\\u2019 corresponds to interpolating between two SAEs (eg. 
768 -> 1536), and consists of two parts: first we insert the novel latents (which increases L0), then we swap groups of reconstruction latents (which decreases L0 on average). Novel latents cause the mean L0 to increase, as they only add more latents to the stitched SAE, however reconstruction latents cause the L0 to decrease on average again, as we swap latents in the smaller SAE (eg. blue, circle) with more sparse latents in the larger SAE (eg. blue circle).\\n\\n> There is no mathematical reason (to my mind at least) to expect the MSE to increase or decrease as latents from the larger SAE are added. \\n\\nOur study, as with much interpretability research, is empirical rather than mathematical. When we add a latent from a larger SAE to a smaller SAE, the change in MSE tells us something important. If MSE decreases, it means this latent captures information that was completely missing from the smaller SAE. Conversely, if MSE increases, it suggests the smaller SAE was already representing this information in some form, and we're now reconstructing it redundantly. We use this insight to show that smaller SAEs are not complete and do not capture all relevant features from the model activations.\\n\\n> Why is there no recognition that there is a separate challenge of associating which words or concepts will `activate' an SAE feature (I know there is now a reference to how people do this, but it's not recognised as a challenge in itself)? I feel the paper addresses a community that somehow already believes in this approach but leaves an outsider like myself scratching my head. In my view the methodology of using SAEs to find units of meaning in transformers is at best insufficiently well motivated and at worst somewhat meaningless. \\n\\nWe agree that associating SAE features with concepts is an area of active research and is not yet a solved problem, and have added this caveat to our conclusion. 
We chose to use standard interpretation techniques rather than tackle this challenge directly. The main message of our work is that even if you accept the standard SAE paradigm for assigning meaning to units, these methods don't learn canonical units. We believe focusing on too many critiques at once would dilute our core message. For researchers who were already skeptical of attributing meaning to SAE features, we believe our work still provides value by deepening their empirical understanding of how SAEs organize information.\\n\\n\\n> From the final concluding sentence, what does \\\"leveraging SAEs as flexible tools tailored to the requirements of each analysis\\\" mean?\\n\\nWhat we meant to say here is that \\u201cGiven that SAEs do not find canonical units of analysis, if one still wants to use SAEs for certain analyses (probing, unlearning, steering) one should adapt the hyperparameters such as dictionary size to find the level of granularity and composition of units that is needed for that specific analysis. We have updated the sentence to make this clearer.\"}", "{\"title\": \"Clarification on new interpretability experiments\", \"comment\": \"Thank you for clarifying your concern about the additional interpretability experiments. The two interpretability experiments we are referring to can be found in Appendix A.9 of the updated version. The first experiment (in Appendix A.9.1) shows how the latents of SAEs of different sizes can be used as linear probes for multiple classification tasks, such as classifying the occupation of a person based on online biographies, predicting the sentiment of amazon reviews, and the category of news articles (e.g. world, sports, business).\\n\\nThe second experiment (in Appendix A.9.2) scores how well ablating latents from SAEs of different sizes removes information from the model activations. Here we use the same tasks (i.e. 
the sentiment of amazon reviews), but score how well SAE latents can be used to remove the class information without impacting other information present in the activations. \\n\\nWe are happy to see that we have addressed most of your concerns. Do these experiments resolve your remaining concern regarding the probing capability in downstream tasks? We would greatly appreciate it if you would consider raising your score for the paper.\"}", "{\"title\": \"References mentioned in response to Reviewer pzvM\", \"comment\": \"Samuel Marks, Can Rager, Eric J Michaud, Yonatan Belinkov, David Bau, and Aaron Mueller. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. arXiv preprint arXiv:2403.19647, 2024.\\n\\nLieberum, T., Rajamanoharan, S., Conmy, A., Smith, L., Sonnerat, N., Varma, V., ... & Nanda, N. (2024). Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2. arXiv preprint arXiv:2408.05147.\\n\\nLeo Gao, Tom Dupr\\u00e9 la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. Scaling and evaluating sparse autoencoders. arXiv preprint arXiv:2406.04093, 2024.\\n\\nSenthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, J\\u00e1nos Kram\\u00e1r, Rohin Shah, and Neel Nanda. Improving dictionary learning with gated sparse autoencoders. arXiv preprint arXiv:2404.16014, 2024.\\n\\nTrenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, et al. Towards monosemanticity: Decomposing language models with dictionary learning. 
Transformer Circuits Thread, 2, 2023\"}", "{\"title\": \"Overall Response\", \"comment\": \"We thank all reviewers for their thorough and constructive feedback.\\n\\nWe are encouraged that the reviewers found our work addresses \\\"an important question\\\" (jKs2 and 9gei) about \\u201can interesting topic\\u201d (pzvM) and that our experiments were assessed as \\u201coriginal and well-executed\\u201d (jKs2), \\u201cthorough and solid\\u201d (9gei), with \\u201cdetailed experimental information\\u201d (HcSs).\\n\\nMost reviewers found our presentation \\u201cclear\\u201d (jKs2) and that our \\u201cmotivation and methods are explained clearly and intuitively, with helpful examples\\u201d (HcSs), but we also appreciate the feedback that it was \\u201cquite presumptive about readers knowledge of the topic\\u201d (pzvM) and 9gei\\u2019s dissatisfaction with \\u201chow the research is positioned\\u201d. We have made the paper more appealing and readable to a broader audience by expanding the explanation of SAEs in Section 2, adding a glossary of terms to the appendix to aid readers with less background knowledge of the field, improving the framing in our introduction, and clarifying the motivation of the BatchTopK section.\\n\\nReviewer HcSs and 9gei were both interested in how our work can be used to choose the proper dictionary size for interpretability tasks, such as probing. Prior to our research, it was thought that just training larger SAEs would result in better units of analysis for interpretability tasks. However, our work demonstrates why this is not the case. 
To reinforce this argument, we have added two sets of interpretability experiments to Appendix A.9, that show the complex relationship between the usefulness of latents for down-stream interpretability tasks and size of the dictionary.\\n\\nWe hope these improvements make our contributions more accessible while preserving the technical rigor appreciated by the reviewers.\"}", "{\"title\": \"Thank you for the clarification\", \"comment\": \"Thank you for clarifying the [Upcoming Work], and I am very sympathetic to the situation.\\nWhile so, I have never seen this case in my moderately long research career so far, particularly the case in which the \\nreference is being made to a work that has not been even unofficially published (e.g. ArXiv) so that no one can verify its validity.\\nWhile I do feel the sympathy, please understand that I must also play my part in conducting a fair review and that the problem addressed in this part of the additional experiment is very important in my opinion.\\nBy any chance, would it be possible to point out any preceding example in which something like this has been done before, so that I feel more confident in raising the score? Meanwhile, I will try to look for a such example myself within my capability.\"}", "{\"title\": \"Thank you very much for the feedback!!!\", \"comment\": \"I am sorry in the delayed response, and thank you very much for addressing most of my concerns.\\nThank you also for updating the abstract as well. My remaining concern is just the relation between the claimed novelty to the SAE's probing \\\"capability\\\" in actual downstream tasks. 
\\nIf my understanding of your response comment is correct, I believe that the said \\\"top-level comment\\\" in response to my concern is \\n\\n> To reinforce this argument, we have added two sets of interpretability experiments to Appendix A.9, that show the complex relationship between the usefulness of latents for down-stream interpretability tasks and size of the dictionary.\\n\\nIn this regard, I was sweeping through the Appendix section, and I am postulating that the two sets of \\\"interpretability experiments\\\" in mention are the numerical report of A.4 regarding the similarities between Large and small SAE features. Would you please be more specific as to which \\\"additional experiments\\\" you are referring to, and which downstream task (such as IoI) is being discussed there?\"}", "{\"summary\": \"In this work, the authors introduce two new methods for analyzing latents in SAEs. The first, SAE stitching, enables comparison of latents across SAEs of different sizes by categorizing latents in larger SAEs as either novel or reconstruction latents found in smaller models. The second, Meta-SAE, decomposes decoder directions into interpretable meta-latents They also propose the BatchTopK SAE, a variant of the TopK SAE that forces a fixed average sparsity. 
By applying these methods, they obtained empirical results that suggest that SAEs are unlikely to learn canonical sets of units.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The motivation and methods are explained clearly and intuitively, with helpful examples.\", \"The authors contextualize their approach by discussing relevant state-of-the-art methods.\", \"Several experiments are conducted to assess the methods' performance, with comparisons to state-of-the-art baselines and detailed experimental information.\", \"An interactive dashboard is included to explore the latents learned with meta-SAEs.\"], \"weaknesses\": [\"Including a discussion on the potential limitations of the proposed approaches would be valuable.\", \"It would also be helpful to have an expanded discussion on assessing the quality of the representations and adapting the dimensionality of the SAE to suit the requirements of different analyses.\"], \"questions\": [\"Is there any particular reason to omit the bias term of the decoder 1 in equation 5?\", \"How does the stitching SAE handle shared reconstruction latents during the swap process? Since some reconstruction latents in larger SAEs are composites, they may contain information from multiple smaller SAE latents. For example, in the case of colored shapes, how would the model swap the \\\"square\\\" or \\\"blue\\\" latent if these are entangled in composite latents in the larger SAE? Would they need to be swapped simultaneously?\", \"Would it be possible to introduce some form of supervision in meta-SAEs? Do you have any intuition as to whether this might be beneficial, considering that the ultimate goal is to interpret model activations? 
Following a concept bottleneck approach, one could directly associate human-interpretable meta-latents with the representations learned by larger SAEs (although I assume the main limitation is defining labels/concepts for the large dictionary sizes considered).\", \"For BatchTopK, have the authors examined how varying values of k impact the semantics of the learned latents? Do the \\\"concepts\\\" learned under a stronger sparsity constraint become more abstract (as they need to explain a given activation with fewer latents) for a fixed dictionary size?\", \"Have the authors extracted any intuition on how should the dictionary size be adjusted to tailor the SAE to the requirements of an specific analysis?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback, we\\u2019re glad we were able to improve your confidence in the research!\\n\\n> I am worried about the citation of [Upcoming Work] that are being referred either as an evaluation method in the said appendix or as a benchmark in the main manuscript. Would you please clarify what is [Upcoming Work] ?\\n\\nThanks for raising this concern. [Upcoming Work] is a SAE benchmark being developed by researchers we know personally, who were kind enough to let us build on their work before the official publication date, intended for early December. This work will be published well before the decision deadline, so the current awkward state of the manuscript will be fixed in the camera ready with a clear citation to their work. In order to avoid confusion, or to claim any undue credit for their methodology, we tried to make it clear that this was upcoming work. We have added a footnote to Appendix A.9 clarifying this.\\n\\nWe hope this has addressed your remaining concern, but please feel welcome to ask for any further clarifications during the next stage of the review process. 
If you are now satisfied with the manuscript, we would politely ask that you increase your support for the paper.\"}", "{\"comment\": \"Thanks. I still don't see any substantial improvement of the presentation of figure 5. There are 4 phases in the figure and I cannot find a full description of all of these phases nor why they are chosen and in what order. Why does L0 go up and then down in each of the 4 phases?\\n\\nI also disagree with the sentence \\\"This allows us to smoothly interpolate between SAEs of different sizes in terms of dictionary size, sparsity, and reconstruction performance\\\". It's not correct that you're \\\"interpolating\\\" between SAEs. Again, the asymmetry in bias choice means that you cannot interpolate between SAE0 and SAE1. \\n\\nSorry for being a grump here, but the submission still has issues such as: Why bother with a bias term at all? Why is there no recognition that there is a separate challenge of associating which words or concepts will `activate' an SAE feature (I know there is now a reference to how people do this, but it's not recognised as a challenge in itself)? There is no mathematical reason (to my mind at least) to expect the MSE to increase or decrease as latents from the larger SAE are added. The authors support their results simply because b0 and b1 turn out to be similar. The argument to me it feels like: here is an idea about adding latents from a larger SAE (for which a priori there is no clear reason to expect the MSE to go up or down); however, it turns out that b0 and b1 are similar (empirically) and we'll therefore claim that these results a posteriori have meaning. Just because others in that community are comparing things that seem even less related doesn't increase my confidence in the rigour of that community.\\n\\nI feel the paper addresses a community that somehow already believes in this approach but leaves an outsider like myself scratching my head. 
In my view the methodology of using SAEs to find units of meaning in transformers is at best insufficiently well motivated and at worst somewhat meaningless. The paper to some extent supports that view, but I do have a concern about how meaningful the methodology of \\\"stitching\\\" is.\\n\\nFrom the final concluding sentence, what does \\\"leveraging SAEs as flexible tools tailored to the requirements of each analysis\\\" mean? \\n\\nI would be happy to raise my score provided that a fuller description of figure 5 is made and there is some attempt to recognise the above concerns.\"}", "{\"title\": \"Response to Reviewer pzvM\", \"comment\": \"Thank you for your extensive review! We appreciate your feedback in helping us make this paper more appealing and clear to readers from the broader community. We have run some additional experiments, and address your questions and critiques below. If you have any further questions or feedback we are keen to hear them, and otherwise we would appreciate you to increase your support for our paper.\\n\\n> I don't really agree with the phrase \\\"circuit\\\". To me this suggests some end to end explanation, whereas the reality is that a small part of a transformer is being examined (the activations in a single layer).\\n\\nWe use the term \\\"circuit\\\" only in the related work section where we reference papers that examine activations across multiple layers (e.g., Marks et al.'s \\\"Sparse Feature Circuits\\\"). While our work focuses on single-layer analysis, we maintain this terminology when discussing prior work to be consistent with the established literature.\\n\\n> I can't find any information on how the SAE is trained. What is the optimiser and how are the hyperparameters (eg \\\\lambda) set?\\n\\nThanks for pointing this out. We have added the training details for the GPT-2 SAEs to Appendix A.6 The Gemma-2 SAEs are described in Lieberum et al.\\n\\n> Why choose layer 8 of GPT2?\\n\\nWe followed the lead of Gao et al. 
(2024) in choosing the layer 8 residual stream. This clarification has been added to Appendix A.6 along with the training details.\\n\\n> Equation 5 has an asymmetry. Why is the bias from decoder 0 used, rather than decoder 1, or a combination of the two?\\n\\nFor a given base language model, the decoder biases of all the SAEs are very similar. For GPT-2, the minimum cosine similarity between any two pairs of SAE decoder biases is 0.9970, and the magnitude of the vectors differs by less than 0.01%. We've updated the text around the equation to clarify this.\\n\\n> The motivation for stitching isn't clear to me. The features and decoders learned are optimised for each SAE separately. Why would combining them in a suboptimal way mean anything? Why not fix the features f0, f1 from each encoder but then learn optimal decoder weights W_{01} for the combination of these feature (this is a simple quadratic optimisation problem). Similarly, in the subsequent discussion, I don't follow why an increase or decrease in MSE is of any significance.\\n\\nWhile learning optimal decoder weights is possible, our goal is to understand relationships between features learned by SAEs of different sizes, not to optimize their combination. By keeping original decoder weights, we can directly measure whether larger SAE features provide novel information or reconstruct information already captured by smaller SAEs. 
The MSE changes directly indicate whether larger SAEs learn novel features or just different representations of the same information.\\n\\n> If one wishes to understand whether latents in larger SAEs are finer grained versions of latents in smaller SAEs, would it not be feasible to look at a sentence that coactivates two features, one feature from the smaller SAE and one from the larger SAE?\\n\\nFrom a single sentence it would be hard to distinguish such a relationship between features in SAEs of different sizes as many features from each SAE are likely to be active on that sample. Bricken et al. (2023) use coactivation as a measure of latent similarity, however we empirically found a high correlation between this metric and the decoder similarity metric (see Appendix A.4 Figure 11), and decoder similarity has lower computational cost. Furthermore, decoder directions define the effect of an active latent on the reconstruction, rather than just the magnitude of the activation.\\n\\n> In figure 5 it's not clear to me what the average L0 means -- what is being averaged over here?\\n\\nThe L0 is averaged over a number of input sequences in our dataset. We've now clarified this in the figure caption.\\n\\n> As far as I understand the SAE objective is non-convex, meaning that there is no guarantee of finding the global optimum. It's therefore also quite possible that different SAEs are simply finding different latents simply because of finding different local minima.\\n\\nThank you for raising a valid point about local optima. To investigate this, we have added an extra experiment to Appendix A7 (Figure 19) where we compare the number of reconstruction/novel latents between SAEs of the same size. 
We find that an average of 94% of latents are reconstruction latents with regard to SAEs of the same size, compared to a maximum of 68% for SAEs of different sizes.\"}", "{\"summary\": \"This paper wrestles with the question of whether sparse autoencoders (SAEs) of different sizes learn the same set of \\\"canonical\\\" or \\\"atomic\\\" units. The paper approaches this question from a couple of directions:\\n\\n1. First, the authors investigate how SAEs of different sizes can be \\\"stitched\\\" together. They find that some features from a larger SAE, when added to a smaller SAE, improve loss (novel latents), and others make loss worse (reconstruction latents). It turns out that there is a rough relationship between, for a given large-SAE latent, the maximum cosine similarity between that latent and the small-SAE latents, and whether adding that latent improves or hurts performance. Latents which are dissimilar from all small-SAE latents improve performance of the small-SAE when added, and latents which are quite similar to a small-SAE latent hurt performance of the small-SAE when added. This makes sense -- \\\"novel latents\\\" are features which the small SAE has not learned, and \\\"reconstruction latents\\\" are features which the small SAE has already represented in some manner. The authors describe a procedure by which latents from a large SAE can be added to a smaller SAE -- novel latents can be added, but reconstruction latents must replace latents in the small SAE. This procedure allows one to interpolate between SAEs of different sizes while continuously improving reconstruction error. Nice! Lastly, the authors note that the \\\"reconstruction latents\\\" are not identical to the latents they replace in the smaller SAE, and also sometimes have high cosine similarity with multiple latents in the smaller SAE. One explanation of this is that some large-SAE latents are in fact linear combinations of multiple latents in the smaller SAE. 
This calls into question whether latents learned by SAEs are atomic.\\n\\n2. Second, the authors attempt to decompose SAE latents into more atomic features with \\\"meta-SAEs\\\". Meta-SAEs attempt to represent SAE latents (decoder vectors) as a sparse linear combination of some smaller set of features. Interestingly, the authors report that many meta-SAE features seem interpretable, and many interpretable latents in the original SAE decompose into interpretable combinations of meta-SAE features! For instance, a dedicated \\\"Einstein\\\" latent from the original SAE can be approximated as a linear combination of meta-SAE latents for \\\"Germany\\\", \\\"prominent figures\\\", \\\"scientist\\\", \\\"space and galaxies\\\", and \\\"starts with E\\\". The authors demonstrate that meta-SAE latents are similar to the latents learned by a similarly-sized base SAE.\\n\\nFor their meta-SAEs, the authors trained a new SAE variant called BatchTopK SAEs. While BatchTopK SAEs are not the main contribution of the paper, the authors test them against standard TopK and JumpReLU SAEs, and find that BatchTopK beat TopK SAEs across settings they evaluated, but don't always beat JumpReLU SAEs. BatchTopK SAEs are not the primary contribution of the paper, but are a nice bonus.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Overall, the paper addresses an important question about SAE features with reasonable experiments and presentation. This issue, of whether SAE features are \\\"canonical\\\" or \\\"atomic\\\", could have implications for how we scale up SAEs and also for how we use their latents for model interventions, circuit analysis, etc. Overall, the presentation in the paper is clear and the experiments are original and well-executed. Some particular strong points:\", \"I think Figure 5 is quite compelling. 
It makes a lot of sense that the L0 would rise as novel latents are added, and it's very cool to see continuous curves like this as one \\\"interpolates\\\" between two SAEs.\", \"I think that Figure 6 is also quite compelling in showing that meta-SAE latents are pretty similar to similarly-sized base SAE latents.\"], \"weaknesses\": [\"I could not find reported values of the reconstruction loss that meta-SAEs obtain in reconstructing the base 49k-latent SAE latents. How precisely do meta-SAEs actually reconstruct the latents? If they are only a very weak approximation, what would that say about the hypothesis that large-SAE latents are linear combinations of more atomic latents?\", \"Some minor grammatical and presentation issues: \\\"vertexes\\\" -> vertices, the left quotation marks in the meta-SAE section should be fixed, etc.\"], \"questions\": [\"It's interesting that the reconstruction error falls as much as it does when the reconstruction latents are stitched in. If reconstruction latents were exactly linear combinations of more atomic latents learned by the smaller SAE, I'd expect that replacing those atomic latents with the reconstruction latents would yield the same reconstruction error, but at a lower L0. Instead reconstruction tends to fall *more* steeply during reconstruction latent stitching vs. novel latent stitching. Do you have a guess as to how the reconstruction latents relate to the latents they replace? Perhaps they are a combination of the small-SAE latents but with other additional features included too? I wonder if a feature-manifold explanation might also be worth considering here (Engels et al. 2024), where reconstruction features are more densely covering a feature manifold that corresponding latents in the small-SAE are more coarsely covering. In this sort of model, latents aren't just combinations of atomic latents, and maybe there is no good definition of \\\"atomic latent\\\" when features are multi-dimensional. 
What do you think?\"}", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper examines whether sparse autoencoders (SAEs) discover canonical units of analysis in language models through two novel techniques: SAE stitching, which analyzes relationships between SAEs of different sizes, and meta-SAEs, which attempt to decompose SAE latents into more atomic features. The work also introduces BatchTopK SAEs, a variant that enforces fixed average sparsity.\\nReviewers appreciated the paper's thorough empirical analysis of how SAEs organize information, with clear methodology and detailed experimental results. They found the work addresses an important question about the capabilities and limitations of SAEs as interpretability tools.\", \"the_main_concerns_centered_on\": \"1) The positioning of the work specifically for language model interpretability rather than general neural networks, and 2) The initial lack of downstream interpretability experiments to validate the practical implications of the findings. The authors addressed these by clarifying the LLM focus and adding new experiments.\\n\\nAll reviewers recommended acceptance, and I agree.\", \"additional_comments_on_reviewer_discussion\": \"See above\"}", "{\"comment\": \"Thanks. Whilst I still have reservations about the approach, I think that the community of mechanistic interpretability will find this an interesting paper. I'll raise my score.\"}", "{\"comment\": \"Before this phase of the rebuttal period ends, we wanted to ask the reviewer whether we have addressed your concerns with our work?\"}", "{\"comment\": \"[1] Trenton Bricken, Adly Templeton, Joshua Batson, Brian Chen, Adam Jermyn, Tom Conerly, Nick Turner, Cem Anil, Carson Denison, Amanda Askell, et al. Towards monosemanticity: Decomposing language models with dictionary learning. 
Transformer Circuits Thread, 2, 2023.\\n\\n[2] Samuel Marks, Can Rager, Eric J Michaud, Yonatan Belinkov, David Bau, and Aaron Mueller. Sparse feature circuits: Discovering and editing interpretable causal graphs in language models. arXiv preprint arXiv:2403.19647, 2024.\\n\\n[3] Lieberum, T., Rajamanoharan, S., Conmy, A., Smith, L., Sonnerat, N., Varma, V., ... & Nanda, N. (2024). Gemma scope: Open sparse autoencoders everywhere all at once on gemma 2. arXiv preprint arXiv:2408.05147.\\n\\n[4] Leo Gao, Tom Dupr\\u00e9 la Tour, Henk Tillman, Gabriel Goh, Rajan Troll, Alec Radford, Ilya Sutskever, Jan Leike, and Jeffrey Wu. Scaling and evaluating sparse autoencoders. arXiv preprint arXiv:2406.04093, 2024.\\n\\n[5] Senthooran Rajamanoharan, Arthur Conmy, Lewis Smith, Tom Lieberum, Vikrant Varma, J\\u00e1nos Kram\\u00e1r, Rohin Shah, Neel Nanda. Improving Dictionary Learning with Gated Sparse Autoencoders. arXiv preprint arXiv:2404.16014, 2024.\\n\\n[6] Engels, J., Michaud, E. J., Liao, I., Gurnee, W., & Tegmark, M. (2024). Not all language model features are linear. arXiv preprint arXiv:2405.14860.\\n\\n[7] https://www.nytimes.com/2024/05/21/technology/ai-language-models-anthropic.html\\n\\n[8] https://www.technologyreview.com/2024/11/14/1106871/google-deepmind-has-a-new-way-to-look-inside-an-ais-mind\\n\\n[9] https://goodfire.ai/\\n\\n[10] https://www.tilderesearch.com\"}", "{\"title\": \"Thanks for answering my questions!\", \"comment\": \"> The meta-SAE with 2304 meta-latents (and an average L0 of 4) explains 55.47% of the variance of the 49k latents in the SAE. We have included this information in Section 5 of the updated manuscript.\\n\\nThanks for providing this!\\n\\n> The steeper fall in MSE during reconstruction latent stitching in Figure 5 occurs because we've already added in the novel latents, which are optimized to work best in combination with the other latents of the larger SAE.\\n\\nAh okay, makes sense. 
For the final version, you'll probably want to increase the resolution of Figure 17.\\n\\nThanks for answering my questions! I'll keep my score at an 8/10 with a confidence of 4/5. I think this paper makes a solid approach at a basic question in contemporary interpretability work and therefore would be great to include at the conference.\"}", "{\"title\": \"Response to Reviewer HcSs\", \"comment\": \"Thank you for your positive review and thoughtful questions. We appreciate your assessment that our \\u201cmotivation and methods are explained clearly and intuitively, with helpful examples\\u201d and that we provide \\u201cdetailed experimental information\\u201d. We address your main points and questions below:\\n\\n> Including a discussion on the potential limitations of the proposed approaches would be valuable.\\n\\nWe have expanded the conclusion section, highlighting the limitations of our work. \\n\\n> Is there any particular reason to omit the bias term of the decoder 1 in equation 5?\\n\\nFor a given base language model, the decoder biases of all the SAEs are very similar. For GPT-2, the minimum cosine similarity between any two pairs of SAE decoder biases is 0.9970, and the magnitude of the vectors differs by less than 0.01%, so we arbitrarily use b^{dec}_0. We've updated the text around the equation to clarify this.\\n\\n> How does the stitching SAE handle shared reconstruction latents during the swap process? For example, in the case of colored shapes, how would the model swap the \\\"square\\\" or \\\"blue\\\" latent if these are entangled in composite latents in the larger SAE?\\n\\nIn case of one-to-many, many-to-one, or many-to-many relationships between reconstruction features in two SAEs, they are indeed swapped simultaneously. Please see Appendix A.5 for some examples of the sub-graphs of swaps that were performed on the two smallest GPT-2 SAEs. 
In the case of the \u201cblue square\u201d example, this would result in a sub-graph containing all the latents in both SAEs. \\n\\n> Would it be possible to introduce some form of supervision in meta-SAEs?\\n\\nWe believe the unsupervised nature of both SAEs and meta-SAEs is actually a key advantage, as it allows us to discover what compositional structure the model naturally represents rather than imposing our assumptions about what concepts should exist. We agree that future work with supervised datasets could be valuable for validating (meta-)SAE performance and understanding how discovered decompositions align with human concepts.\\n\\n> For BatchTopK, have the authors examined how varying values of k impact the semantics of the learned latents? Do the \\\"concepts\\\" learned under a stronger sparsity constraint become more abstract (as they need to explain a given activation with fewer latents) for a fixed dictionary size?\\n\\nOur paper primarily focuses on the effect of dictionary size on the abstractness of features, rather than the role of sparsity (k). However, we agree that analyzing the relationship between sparsity and semantic granularity would be valuable future work, both in BatchTopK SAEs and other SAE architectures.\\n\\n> Have the authors extracted any intuition on how the dictionary size should be adjusted to tailor the SAE to the requirements of a specific analysis?\\n\\nWe have addressed this in the top-level comment.\"}", "{\"comment\": \"Thanks. It's clear that by increasing the dictionary size, a reduction in MSE can occur (indeed if b1 were used instead of b0, then as the full SAE1 features are introduced, the MSE must reduce to that of SAE1, which must by definition be lower than that of SAE0).\\n\\nHowever, figure 5 says \\\"every insertion or *switch* results in a strict improvement in reconstruction (MSE)\\\". This doesn't make sense to me. 
A switch (which by definition therefore maintains the dictionary size) cannot cause a decrease in an optimal MSE. \\n\\nAlso, the argument that larger dictionary sizes tend to have lower MSEs is specious. This is only (ultimately) guaranteed when the bias b1 of the larger SAE1 is used as discussed above. However, since the bias of the smaller SAE0 is used in the \\\"stitching\\\" method, there is no guarantee of improvement or even expectation of improvement. I feel \\\"stitching\\\" is comparing two unrelated entities and has no obvious significance. This is why I was so curious why you kept the bias b0. The asymmetry of the approach renders comparison of features problematic and is at the heart of my misgivings.\"}", "{\"comment\": \"Thanks. Definitely helpful, but I still have some basic concerns.\\n\\nI still don't understand figure 5 and I feel this needs better explanation. The x-axis suggests that the number of features is always increasing, yet the text suggests that features are also swapped (which means a non increase)?\\n\\nFor an optimally trained SAE, by definition, using any other features must increase the reconstruction error. I therefore don't understand why say swapping a feature from a small SAE with one from a larger SAE would bring any useful information -- by definition it has to increase the reconstruction error (unless the training got trapped in a suboptimal local optimum). I feel like I'm missing something important.\\n\\nI also still feel the paper is really for existing believers in this overall approach and doesn't do enough to convince others. For example there is still no explanation early on as to how to associate features with linguistic concepts (which is potentially problematic in itself).\\n\\nOverall, I like the message of the paper, namely that SAEs don't find atomic linguistic units. However, this isn't surprising to me (why should they? 
-- an LLM is not trained to respect the SAE structure and represent atomic linguistic units).\"}", "{\"summary\": \"This research concerns SAE, an assessment tool specialized for GPT-type models, especially those used as LLMs.\\nSAE is applied on the activations of an LLM to infer the captured information, and at least in this paper, its quality is evaluated based on the MSE of the reconstruction.\\nThe novelties of this research are as follows.\\n1. It proposes SAE stitching, a tool to compare a large SAE against a small SAE. With SAE stitching, one can use the change in MSE to assess how the larger SAE's weight directions relate to the smaller SAE's weight directions (intersection of notions), as well as the uniqueness of the notions captured by the larger SAE.\\n2. It proposes the Meta SAE, which applies another SAE on top of an already established SAE. It is used to obtain monosemantic latents.\\n3. It proposes BatchTopK, a variant of the TopK SAE which achieves SOTA in terms of architecture.\\nBased on the observations made by this claimed set of novel techniques, the paper advocates the need to carefully choose the size of the SAE as well as the need to compare SAEs of different sizes for more semantically meaningful analysis.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This research provides thorough and solid experiments that are in alignment with those of the original SAE paper.\", \"This research furthers the understanding of SAE in application to LLMs; considering that SAE is considered an important probing unit in the understanding of how an LLM processes information, this research answers an important question regarding (1) how the probing unit decomposes the LLM activations and (2) how its size and design affect the analysis.\"], \"weaknesses\": \"The logical flow of the paper is weak in several areas. 
In the introduction and in the abstract, the paper is presented as if the research topic pertains to an analysis of a decomposition of the activations of a \\\"general\\\" neural network into features, while in fact the SAE introduced by [Bricken et al] and [Cunningham et al] is a probing tool for a specific 'organism' that is the LLM. The term 'language model' is mentioned only in part 3 of the contribution, leaving an impression that the paper investigates a very general feature analysis that is 'also' applicable to LLMs. The research also experiments exclusively with LLMs.\", \"While the keyword SAE may automatically link to the behavioral study of LLMs in the minds of the readers that are actively involved in LLM behavioral research, the reviewer feels that it shall be clearly stated/emphasized in both the introduction and the abstract that this research is about LLMs (which are merely one genre of ML research). Otherwise, the reviewer believes applications other than LLMs shall be presented in the paper.\", \"Another concern is that, while the research clearly furthers the understanding of the SAE, an important probing tool for LLMs, the research does not relate the claimed novelty to the SAE's probing \\\"capability\\\". For example, [Cunningham et al] quantifies the SAE's ability to localize a specific model behavior in the Indirect Object Identification (IOI) task, thereby evaluating the goodness of the probing conducted by the SAE. Meanwhile, the paper evaluates the goodness of the SAE with the reconstruction error. While the reviewer values the thoroughness of independent experiments, the reviewer also feels that the authors did not intend their study to be interpreted as an investigation of a probing tool for the sake of the probing tool itself. 
The reviewer feels that the research can be justified if the authors can add an analysis of how their new toolset can be used to better the probing capability of the SAE in terms of IOI, for example, or to actually better uncover features that are causally responsible for counterfactual behavior in an LLM.\"], \"questions\": \"1, Figure 4, it is said that \\\"Features with cosine similarity less than 0.7 tend to improve MSE\\\". Do you mean 0.6 or less? In a similar regard, this tendency is much less clear in Figure 18. Would you please make comments regarding GemmaScope32k? It is said that \\\"Using this threshold, we find that our feature stitching methods work on these SAEs as well.\\\" Can you elaborate on what is meant by \\\"well\\\" in this context?\\n\\n2, What are \\\"active latents\\\"? The reviewer presumes that these are the latents that are not \\\"zero-ed out\\\" by the training with the sparsity constraint, but it would help to clarify their precise mathematical meaning.\\n\\n3, In Figure 6 and experiments regarding this figure, MetaSAEs with N latents are evaluated. Now, the MetaSAE is presented as the application of an SAE on the latents of an SAE. Would you please clarify \\\"on what SAEs\\\" the MetaSAEs were applied in these experiments? For example, from an SAE of what size was \\\"the Meta SAE with 2304 latents\\\" obtained?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper discusses the use of sparse autoencoders (SAEs) to examine how activations in a transformer relate to concepts in natural language. The authors examine the relationship between small SAEs and larger SAEs (a larger number of latent states) to assess whether larger SAEs contain more fine-grained information. 
Additionally, the authors introduce a sparsity objective that enforces sparsity across a batch of feature vectors, rather than for individual feature vectors.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper attempts to understand how activations deep within a transformer might correspond to higher-level natural language concepts. This is an interesting topic. The conclusion is also interesting, namely that sparse autoencoders may not form a complete set of explanations, with larger SAEs potentially containing more fine-grained information.\", \"weaknesses\": \"The paper is quite presumptive about readers' knowledge of the topic, lacking in clear explanations in places. Whilst the paper does a reasonable job of explaining sparse autoencoders, it doesn't explain how these are used to find actual \\\"inputs\\\" that activate particular features. This isn't explained in the paper and I had to read the cited papers to understand this. There is some description in section 5.1 (page 8) but this is too late for the reader unfamiliar with this area to understand the paper.\\n\\nI also found it quite hard to follow the authors' reasoning in places and it's not clear to me why stitching might provide insight. Overall, the methods used in the paper are rather straightforward and I would therefore have hoped for a very clear presentation to make up for a lack of technical contributions. The Meta SAE isn't very well explained, and the BatchTopK idea is simple but not well motivated, and it's unclear to me how this connects to the rest of the paper. \\n\\nTry to be consistent with spelling rather than mix e.g. color, colour. Also opening parentheses are incorrectly used in places (see lines 369 onwards).\\n\\nI do think there is something potentially insightful in this work, but the paper needs to be more clearly presented.\", \"questions\": \"*** page 4\\n\\nI don't really agree with the phrase \\\"circuit\\\". 
To me this suggests some end-to-end explanation, whereas the reality is that a small part of a transformer is being examined (the activations in a single layer).\\n\\nI can't find any information on how the SAE is trained. What is the optimiser and how are the hyperparameters (e.g. \\\\lambda) set?\\n\\n*** page 5\\n\\nWhy choose layer 8 of GPT2? \\n\\nEquation 5 has an asymmetry. Why is the bias from decoder 0 used, rather than decoder 1, or a combination of the two?\\n\\nThe motivation for stitching isn't clear to me. The features and decoders learned are optimised for each SAE separately. Why would combining them in a suboptimal way mean anything? Why not fix the features f0, f1 from each encoder but then learn optimal decoder weights W_{01} for the combination of these features (this is a simple quadratic optimisation problem)? Similarly, in the subsequent discussion, I don't follow why an increase or decrease in MSE is of any significance.\\n\\nIf one wishes to understand whether latents in larger SAEs are finer-grained versions of latents in smaller SAEs, would it not be feasible to look at a sentence that coactivates two features, one feature from the smaller SAE and one from the larger SAE?\\n\\nIn figure 5 it's not clear to me what the average L0 means -- what is being averaged over here?\\n\\nAs far as I understand, the SAE objective is non-convex, meaning that there is no guarantee of finding the global optimum. It's therefore also quite possible that different SAEs are finding different latents simply because of finding different local minima.\\n\\nIsn't it clear that (local optima aside) a larger SAE will always find features that are missed by a smaller SAE? I'm not sure I follow the argument from line 355 onwards.\\n\\n*** page 8\\n\\nPlease add more clarity around treating the latents W_i^{dec} (why is W in boldface)? W_i^{dec} is a scalar quantity, the ith component of the d-dimensional vector W^{dec}. 
What does it mean to take scalars as \\\"training data for our meta-SAE\\\"? The directions W don't convey information about which directions are simultaneously activated. I would have thought it would make more sense to treat the feature vectors f(x) as the entities for learning a meta SAE.\\n\\n\\n*** page 9\\n\\nThe BatchTopK function is not fully defined. In line 465 it states that the function selects the top K activations, suggesting that this is a mapping from activations to indices. I suspect that the authors mean that the function should return zero for all activations that are not in the top K highest positive values, and is the identity function otherwise.\\n\\nThe introduced batch method is used only during training, with a different non-linearity used during \\\"inference\\\". This seems quite strange and it's not clear to me how to justify this. A potential issue not mentioned is that it's quite possible that the batch approach means that some input sequences will have entirely zero activation.\\n\\nIt's hardly surprising that the BatchTopK SAE has lower MSE than the TopK SAE since the BatchTopK SAE imposes fewer constraints on the objective. I'm not sure why this would be seen as \\\"outperformance\\\".\\n\\nI'm also unclear as to why the BatchTopK SAE is being discussed. Is this method used in all the previous experiments in the paper, or is this a separate piece of work orthogonal to the other contributions of the paper?\\n\\n*** page 10\\n\\nThe conclusion \\\"These findings suggest that there is no single SAE width at which it learns a unique and complete dictionary of atomic features that can be used to explain the behaviour of the model.\\\" is interesting and (perhaps) not surprising.\\n\\n\\n*** supplementary material\\n\\nFigure 11 isn't well explained. Please explain what is being shown here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
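The two mechanisms debated at length in the reviews above — classifying large-SAE latents as "novel" vs. "reconstruction" by their maximum cosine similarity to small-SAE decoder directions, and the BatchTopK activation rule that enforces sparsity at the batch level rather than per sample — can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the discussion, not code released with the paper; the 0.7 threshold is the value the reviewers mention, and the function names are made up here.

```python
import numpy as np

def classify_latents(dec_small, dec_large, threshold=0.7):
    """Label each large-SAE decoder direction as 'novel' or 'reconstruction'
    by its maximum cosine similarity to any small-SAE decoder direction
    (0.7 is the threshold discussed in the reviews)."""
    # Normalize decoder rows to unit length so dot products are cosines.
    small = dec_small / np.linalg.norm(dec_small, axis=1, keepdims=True)
    large = dec_large / np.linalg.norm(dec_large, axis=1, keepdims=True)
    max_sim = (large @ small.T).max(axis=1)  # one value per large-SAE latent
    return ["novel" if s < threshold else "reconstruction" for s in max_sim]

def batch_topk(acts, k):
    """Zero all but the k * batch_size largest activations across the whole
    batch, so the L0 of k holds on average over the batch rather than being
    enforced for every individual sample."""
    flat = np.sort(acts.ravel())          # ascending
    thresh = flat[-k * acts.shape[0]]     # smallest value that survives
    return np.where(acts >= thresh, acts, 0.0)
```

As the reviewer notes, under this batch-level rule a given sample can end up with more or fewer than k active latents (possibly zero), which is exactly the train/inference mismatch raised on page 9.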
9cQB1Hwrtw
Transformers Struggle to Learn to Search
[ "Abulhair Saparov", "Srushti Ajay Pawar", "Shreyas Pimpalgaonkar", "Nitish Joshi", "Richard Yuanzhe Pang", "Vishakh Padmakumar", "Mehran Kazemi", "Najoung Kim", "He He" ]
Search is an ability foundational in many important tasks, and recent studies have shown that large language models (LLMs) struggle to perform search robustly. It is unknown whether this inability is due to a lack of data, insufficient model parameters, or fundamental limitations of the transformer architecture. In this work, we use the foundational graph connectivity problem as a testbed to generate effectively limitless high-coverage data to train small transformers and test whether they can learn to perform search. We find that, when given the right training distribution, the transformer is able to learn to search. We analyze the algorithm that the transformer has learned through a novel mechanistic interpretability technique that enables us to extract the computation graph from the trained model. We find that for each vertex in the input graph, transformers compute the set of vertices reachable from that vertex. Each layer then progressively expands these sets, allowing the model to search over a number of vertices exponential in the number of layers. However, we find that as the input graph size increases, the transformer has greater difficulty in learning the task. This difficulty is not resolved even as the number of parameters is increased, suggesting that increasing model scale will not lead to robust search abilities. We also find that performing search in-context (i.e., chain-of-thought) does not resolve this inability to learn to search on larger graphs.
[ "search", "reasoning", "transformers", "scaling laws", "mechanistic interpretability", "circuit analysis" ]
Accept (Poster)
https://openreview.net/pdf?id=9cQB1Hwrtw
https://openreview.net/forum?id=9cQB1Hwrtw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zzZNZpnrIa", "zbGfChBygS", "xTociuEEZ7", "th0Obi17oc", "sJkxCkWfpR", "qWUrMWDd9f", "qVVdFyq3Kb", "ngGUrQRSxk", "nCjrAt59RX", "hVZxdbaayw", "cNJKwPguKc", "Yc98JsueY6", "WsN4fgrzz1", "VUQQARoaxz", "PZQycPsxbA", "MCDLZ9yUQH", "LaMcktJ2Xv", "L7aEK9gOrP", "KZ0ua1cu7Z", "JuxYn4ALJS", "GnQhz11PqT", "D4WotJufmK", "0MnP3EHaKT" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730505668616, 1732595607138, 1731328942649, 1732650677532, 1732553022125, 1733155539536, 1732382098204, 1730696803679, 1733164942624, 1732382365292, 1730939891968, 1734552655379, 1732382808746, 1732382893757, 1732381700028, 1732787331063, 1733199580052, 1733146676445, 1737524137518, 1732382697632, 1733167896777, 1733165078764, 1732381547484 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_wdgg" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_NMwf" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_fM1a" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_wdgg" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_zLV2" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_wdgg" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_fM1a" ], [ "ICLR.cc/2025/Conference/Submission11660/Area_Chair_hSuN" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_zLV2" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_fM1a" ], [ "ICLR.cc/2025/Conference/Submission11660/Reviewer_wdgg" ], [ "ICLR.cc/2025/Conference/Submission11660/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper explores the ingredients that constitute the search ability in pre-trained language models. The authors introduce a synthetic setting--searching over DAGs represented in the natural language space--and pre-train autoregressive transformers of varying scale. The finding is mixed: transformers can implement search under restrictive data distribution, but face significant challenges with scaled problem sizes. The authors have explored training strategies that encourage the transformer to generalize better.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Combining mechanistic interpretability with the search problem is, to my knowledge, novel, as most prior works have focused on \\\"classification\\\"-style tasks that feature a very restricted output space in terms of vocabulary and length. The problem is more challenging in term of complexity, and would presumably require chain-of-thought capabilities to solve effectively. 
This new setting also prompts the authors to introduce a new algorithm for mechanistic analysis, which may be of interest to the interpretability community.\", \"I enjoyed the exposition-style presentation of the paper, with each section introducing a problem setting of growing complexity, as well as experimental findings that sufficiently support these claims.\", \"The authors study nuanced challenges for transformers to generalize on algorithmic tasks in the presence of distribution shift.\"], \"weaknesses\": \"- My primary concern with this paper stems from the broader implications of the authors' findings. Several prior works have found that large language models can implement certain graph algorithms [1][2], including graph connectivity, and that this type of algorithmic reasoning can be improved with appropriate adaptations of chain-of-thought [3]. It is unclear whether the authors' findings contradict, confirm, or offer more nuanced insights relative to prior works.\\n- While the paper does a fine job surveying relevant works in mechanistic interpretability, it is somewhat lacking when situating itself in the LLM planning/search and theoretical expressivity literature. Aside from the aforementioned works, several works have directly studied whether LLMs can internalize search (in the form of MCTS) [4] and explore in-context [5]. The lack of a theoretical analysis, or of a proper discussion of these works, makes understanding the authors' contribution challenging.\\n- While the strategy that strengthens the LLM's search ability by means of data augmentation is nice, it may not directly translate to practical guidance due to the synthetic nature of the task setup.\\n\\nOverall, this is an interesting paper in terms of its mechanistic study; but I would encourage the authors to situate their theses more broadly within the recent growing body of work in LLM exploration, search, and algorithmic reasoning. 
\\n\\n[1] NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes, https://arxiv.org/abs/2312.14890\\n\\n[2] IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations, https://arxiv.org/abs/2404.01266\\n\\n[3] The Expressive Power of Transformers with Chain of Thought, https://arxiv.org/abs/2310.07923\\n\\n[4] Amortized Planning with Large-Scale Transformers: A Case Study on Chess, https://arxiv.org/abs/2402.04494\\n\\n[5] Can large language models explore in-context?, https://arxiv.org/abs/2403.15371\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their quick response. We welcome any other suggestions or comments to help further improve the paper during the rebuttal period.\"}", "{\"summary\": \"This study investigates whether transformers can learn to perform search by training small models on graph connectivity data. Results show that transformers can learn to search under certain conditions, but struggle with larger graphs, indicating that simply scaling LLMs may not enable robust search. The study introduces a new interpretability method to analyze the model's learned algorithm.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The study tackles an intriguing and practical research question: understanding the mechanisms behind search capabilities in LLMs. This is not only scientifically interesting but also has meaningful implications for real-world applications.\", \"Training a small GPT model on synthetic graph data is a reasonable and well-justified approach to investigate this research question.\"], \"weaknesses\": \"The logical flow of the paper is weak in several areas. 
The authors should clarify the connection between their empirical results and the statements made, as well as provide more intuition behind their hypotheses.\\nFor example, in line 51, the authors state, \\\"We demonstrate experimentally that transformers can indeed be taught to search, but only under fairly restrictive conditions on the training distribution.\\\" However, Figure 3 does not fully support this claim. While it may indicate that the model does not generalize well to a larger number of lookaheads than seen in the training data, it does not substantiate any firm conclusions about the training distribution itself. \\nIn line 359, the authors mention, \\\"We noticed a pattern and formed a hypothesis about the algorithm the model has acquired to solve the search problem.\\\" However, the pattern observed and its connection to the proposed hypothesis remain unclear and should be elaborated upon.\\n\\nAdditionally, the proposed method and analysis require clarification. For instance, in line 337, the phrase \\\"path of explainable attention operations\\\" is used\\u2014was this path inspected manually? And in line 358, the authors mention \\\"a number of input examples\\\" without specifying the exact number. Providing this detail would help improve the robustness of their claim.\", \"questions\": [\"Interpretation of Figure 3: The paper claims that transformers can search under restrictive training distributions, but Figure 3 only seems to show limited generalization to larger lookaheads. Can the authors explain how this supports claims about the training distribution?\", \"Pattern and Hypothesis Formation (Line 359): What specific pattern did the authors observe, and how did it lead to the hypothesis about the algorithm the model uses? Could they provide a clear link between the observed pattern and their hypothesis?\", \"Explainable Attention Path (Line 337): What exactly is meant by a \\\"path of explainable attention operations\\\"? 
Was this path derived through manual inspection, or was there a specific method used?\", \"Quantifying Examples (Line 358): The authors mention using \\\"a number of input examples\\\" but do not specify the exact number. Could they provide this detail to strengthen the robustness of their conclusions?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the rebuttal\", \"comment\": \"I appreciate the authors' rebuttal and their investigation into LLMs' capabilities for graph search problems. While the work presents an interesting analytical approach, I still have some major concerns about the experimental setup that I believe greatly weaken its impact:\\n\\n1. The choice of an encoder-only architecture, while valid for analysis, may limit the generalizability of the findings to contemporary LLMs, which predominantly use decoder-only architectures. I appreciate that the authors have revised their description from \\\"decoder-only transformers\\\" to simply \\\"transformers\\\" - a notable correction given that their original manuscript described using decoder-only models with bidirectional attention masks, as this represents a basic architectural contradiction in transformer design (decoder-only models, by definition, use causal masking, not bidirectional attention).\\n\\n2. The simplification of using one-hot embeddings with concatenated positional embeddings might not capture the complexity of modern embedding approaches. \\n\\n3. The assumption about token modifications affecting only subsequent layers may oversimplify the intricate dynamics of attention mechanisms, as modifications can propagate through the entire network.\\n\\n4. 
Given the authors' acknowledgment that the proposed analysis may not be directly applicable to current LLMs due to computational constraints, it would be valuable to elaborate on the broader implications and potential future applications of this research direction.\"}", "{\"comment\": \"Thank you for your detailed response. I'm happy with the updated paragraph. I would give the paper an updated score that's close to 7. But since we do not have this option, I'm keeping the score as is, but would nevertheless be happy to see this paper at ICLR.\\n\\nI encourage the authors to address other lingering concerns and questions from other reviewers.\"}", "{\"title\": \"Reminder to Reviewers\", \"comment\": \"Thank you to the reviewers for their responses thus far. We believe we have addressed _**all**_ of your concerns in our last reply. We would really appreciate it if you could let us know whether there are any additional concerns before the end of the rebuttal period (11:59pm today AoE).\"}", "{\"title\": \"Response to Reviewer (1/2)\", \"comment\": \"We thank the reviewer for their thoughtful and helpful feedback. We would greatly appreciate it if the reviewers could inform us whether the revisions are sufficient or if there are any additional concerns, so that we may further iterate during the revision period. We clarify that the focus of our analysis is on small transformer models, rather than current large language models. We provide the transformer with effectively limitless and idealized examples, with the aim being to estimate an \\u201cupper bound\\u201d on the model\\u2019s ability to learn to search. We have added sentences and modified the language in the introduction to better reflect our high-level aims.\\n\\n1. \\u201cThere are potential data leakage issues in the training and testing datasets constructed by the authors: The authors use a generation method to generate training data online and save the first few generated results as test data. 
While the authors claim they will remove overlapping samples between training and test data, they don't explain how they compare whether two graphs are identical.\\u201d\\n\\nWe use exact string matching to filter examples from the training set that appear in the test set. While it is true that this would not identify graphs that are isomorphic (which would be computationally intractable to compute), we do not _need_ to fully control for data leakage to demonstrate our claims. In Sections 5 and 6, we perform scaling experiments whose goal is to effectively compute an _upper bound_ on the transformer\\u2019s ability to learn to search. We provide the model with effectively limitless and idealized training data, which has been carefully curated to preclude the learning of heuristics or shortcuts. However, if the transformer does end up memorizing some of the data, or learning a heuristic, the measured performance would only increase, and our measured upper bound is still valid.\\n\\nWe are not aware of any work that suggests that transformers can generalize to isomorphic graphs. Our mechanistic interpretability analysis also shows that the transformer is indeed learning a robust and generalizable algorithm that explains its behavior on almost all inputs. 
Furthermore, in the last row of Figure 3, we observe that the model trained on graphs with lookaheads up to 12 is able to generalize to lookaheads 13 and 14, which would not be possible if the model has memorized the graph topologies of a large portion of training examples, since there are no graphs with lookaheads larger than 12 in the training distribution.\\n\\nIn order to more clearly convey our high-level aim, we add the following sentence to the second paragraph in the Introduction:\\n > By automatically generating such examples, we provide the transformer with effectively limitless and idealized training data, with which we can estimate an ``upper bound'' on the transformer's ability to learn to search.\\n\\nWe additionally rephrase the sentence in the second paragraph of 3.1 to clarify the filtering procedure:\\n > The first few batches are reserved for testing, and subsequent batches are filtered via exact matching to remove any examples that appear in the test set, to ensure that the examples in the test set are unseen.\\n\\n\\n2. \\u201cGiven the number of vertices and max number of in-out edges, DAG generation is finite. Thus, the authors' claim about infinite graph generation may be incorrect.\\u201d\\n\\nWe say the amount of training data is \\u201ceffectively limitless\\u201d only to draw comparison with the non-synthetic datasets which have a fixed size. In contrast, with sufficiently many vertices, the number of possible DAGs is quite high (see https://oeis.org/A003024), and we are limited entirely by compute rather than by data availability.\\n\\nIn addition, since transformers have a fixed input size (and limited precision), it is impossible to provide more than a finite number of graphs as input to the transformer. So the more precise question that we seek to answer is not whether transformers can learn to search on _all_ graphs, but whether they can learn to search on any graph that can be provided as input.\\n\\n\\n3. 
\\u201cThe authors only used one-hot embedding for each token and position embedding when training this encoder-only model.\\u201d\\n\\nThe purpose of this was to facilitate the mechanistic interpretability analysis in Section 4.\"}", "{\"summary\": \"The paper explores the behavior of transformers models when trained on search questions on directed acyclic graphs (DAGs). The authors show the importance of training data distribution for better generalization. Then, they conduct mechanistic understanding of the trained models to discover a progressive message passing algorithm utilized by the model to explore search paths. However, the models struggle to learn from larger graphs. Finally, the authors propose proxy in-context examples that help the model for robust exploration of the graph before solving the search problem. Overall, the paper represents a significant step towards our understanding of the inner mechanisms of transformer models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The major strength of the paper lies in its motivation to understand transformer mechanisms in search based tasks. The authors take carefully designed experimental exploration to train transformers on directed acyclic graphs, with careful design discussion on data distribution, and propose a new mechanistic approach to analyze the learned algorithm. The authors discover a message passing algorithm, where the neighborhood information are shared progressively among the vertices, which leads to an exponential path-merging algorithm.\\n\\nThe authors also touch upon the difficulty of training transformer models on larger graph structures, and propose in-context tasks to help the model explore the graph better. Overall, the paper conducts an in-depth analysis of transformer model training on search problems and will be an important contribution to the academic community.\", \"weaknesses\": \"As such, the paper doesn't have many weaknesses. 
I have a couple of questions regarding the experimental setup.\\n\\na) **Sequence length in In-context exploration:** As the experiments require training on higher sequence length, how are the samples in the training data distribution decided? How many steps in DFS traces are necessary for the model to learn? If the authors had provided the same 'K' padding tokens to the experiments in section 4, would the models generalize better?\\n\\nb) **Distribution of path-merge operations:** Are there patterns in the distribution of path-merge operations and copy operations across the layers in the trained model? \\n\\nc) **Evaluation with density of graphs:** Do the trained models generalize to extremely sparse graphs? \\n- Furthermore, in cases where the graph contains $2$ disconnected components, what will the model output be for start and goal vertices not in the same component? \\n\\nd) **Values of $\\\\alpha$, $\\\\kappa_1$ and $\\\\kappa_2$ in section 4**: How are these values decided in experiments?\\n\\ne) **Clarification questions:**\\n\\n- \\\"the log attention weight of each important operation in the last layer.\\\" (line 303) - \\nWhat does log attention weight mean? How do you define important operation?\\n- \\\"it requires many forward passes (linear in the number of attention operations and in the number of input examples).\\\" (line 354) - Can the authors give details on the number of passes necessary? 
Furthermore, does the number of necessary passes depend (logarithmically) on the length of the search process for a given input example?\", \"questions\": \"Please check my questions in the above section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Additional Comments on the Author Response\", \"comment\": \"Dear Reviewer fM1a,\\n\\nAs a fellow reviewer, I hope to address some of your follow-up questions to contextualize some of the authors' design choices.\\n\\n### **Encoder vs. Decoder**\\n\\nMost of the existing studies on mechanistic interpretability (in particular those pertaining to training dynamics) use some simplified transformer variants. For example, [1] proposes an architecture that circumvents the additive structure of the residual stream, and [2][3] use linear attention. While it's desirable to study an architecture that mimics LLMs, the current progress should allow for mediated design choices.\\n\\n### **Concatenated Embedding**\\n\\nAgain, this seems to be a common assumption that theoretical transformer papers tend to make (see e.g. [4][5]), and it can often be shown WLOG that this concatenation can be converted to the typical additive structure with 1 additional layer.\\n\\nSince we still understand very little about transformer interpretability, I think that it's reasonable to operate with mediated expectations and study stylized problems. 
\\n\\n[1] How Transformers Learn Causal Structure with Gradient Descent, https://arxiv.org/abs/2402.14735\\n\\n[2] Birth of a Transformer: A Memory Viewpoint, https://arxiv.org/abs/2306.00802\\n\\n[3] Progress measures for grokking via mechanistic interpretability, https://arxiv.org/abs/2301.05217\\n\\n[4] Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression, https://arxiv.org/abs/2310.17086\\n\\n[5] Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection, https://arxiv.org/abs/2306.04637\"}", "{\"title\": \"Response to Reviewer (2/2)\", \"comment\": \"4. \\u201cFor the graph connectivity problem, the authors' trained model only outputs one token, while current LLMs typically have reasoning steps.\\u201d\\n\\nWe tested the effect of intermediate steps in Section 6. Here, rather than teaching the model to perform the full search in a single forward pass, we instead train the model to predict the next step in a depth-first search (DFS). Thus, in this task, the transformer only needs to predict the next step (which may or may not lead to the goal). We added new results on the scaling behavior of transformers on this DFS task and we found that while they are able to learn the task more easily, they still struggle on larger graphs (see revised Section 6 and Figure 12).\\n\\nIn addition, we also test the scaling behavior of transformers on the selection-inference task [1], where each step in the search is broken into two subtasks: (1) select a previously-visited vertex with unvisited child vertices, and (2) from the current vertex, select an unvisited child vertex. 
We find that transformers are able to learn the first subtask (i.e., \\u201cselection\\u201d) relatively easily, but they struggle with the second subtask (i.e., \\u201cinference\\u201d) when given larger graphs (see Section 6.2, and Figures 14 and 15).\\n\\n[1] Antonia Creswell, Murray Shanahan, Irina Higgins: Selection-Inference: Exploiting Large Language Models for Interpretable Logical Reasoning. ICLR 2023.\\n\\n\\n5. \\u201cThe proposed method requires performing perturbation and forward pass once for each element in every attention map of the LLM, and then forward pass to see the effects on the output logits, which is time consuming.\\u201d\\n\\nWe acknowledge that our proposed analysis is computationally expensive. However, we do not aim to apply our method to current LLMs. Rather, we only apply our method to small transformer models which we train in our experiments, and we demonstrate the utility of our method on such models in Section 4.2 by performing our analysis on 2,000 input examples _each_ on 4 different models.\\n\\n\\n6. \\u201cIn Line 318, determining the influence of modified tokens on attention through frozen previous layers is not reasonable, as modified tokens also influence previous attention calculation and thus influence the activation for each layer.\\u201d\\n\\nThe goal of this step is specifically to _isolate_ the effect of the perturbed tokens on the attention operation being studied. We do this after performing the very same step on the attention operations in the previous layers, and so by this point in the analysis, we have already characterized the influence of perturbations on the attention of all previous layers. 
To hopefully clarify this in the text, we add the following sentence to Footnote 3:\\n > The aim of freezing the previous layers is to measure the effect of the perturbation on the current layer _in isolation_ of changes in behavior in preceding layers.\"}", "{\"summary\": \"This paper aims to explore LLMs' internal mechanisms for graph connectivity problems tasks: given a graph (nodes and connections between nodes), a starting vertex, and a goal vertex, the LLM outputs the next vertex from the starting vertex.\\nSpecifically, the paper constructs a training set to train a small decoder-only transformer. It improves the Mechanistic Interpretation visualization method to explore which tokens influence the LLM's output. Based on experimental results, the authors conclude that LLMs must be trained with in-context samples to fully understand the graph search problem.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The topic is interesting and may have influence in the community.\", \"weaknesses\": \"1. There are potential data leakage issues in the training and testing datasets constructed by the authors:\\nThe authors use a generation method to generate training data online and save the first few generated results as test data. While the authors claim they will remove overlapping samples between training and test data, they don't explain how they compare whether two graphs are identical. If only using string matching, it cannot determine whether two graphs are completely equal. For example, these two graphs: node 1 -> node 2 and node 3 -> node 4 are completely equivalent but cannot be detected through string matching. Therefore, the test set is likely included in the training set. Additionally, given the number of vertices and max number of in-out edges, DAG generation is finite. 
Thus, the authors' claim about infinite graph generation may be incorrect.\\nThe authors trained a simplified model that doesn't correspond to currently widely-used LLMs:\\nFirst, since the authors used full attention rather than causal attention, they actually trained an encoder-only model rather than a decoder-only model.\\n\\n2. Second, the authors only used one-hot embedding for each token and position embedding when training this encoder-only model.\\nFinally, for the graph connectivity problem, the authors' trained model only outputs one token, while current LLMs typically have reasoning steps.\\n\\n3. The authors' improved Mechanistic Interpretation has the following issues:\\nThe proposed method requires performing perturbation and forward pass once for each element in every attention map of the LLM, and then forward pass to see the effects on the output logits, which is time consuming.\\nIn Line 318, determining the influence of modified tokens on attention through frozen previous layers is not reasonable, as modified tokens also influence previous attention calculation and thus influence the activation for each layer.\", \"questions\": \"Please refer to the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper investigates the ability of large language models (LLMs), specifically transformers, to perform search tasks using graph connectivity problems as a testbed. The paper includes experiments on directed acyclic graphs (DAGs) and introduces novel mechanistic interpretability tools for this problem, revealing that transformers compute reachable vertex sets layer by layer. All the reviewers appreciated the paper's contribution in terms of methodology and empirical findings, and agree that the paper is likely to be of significant interest to the broader ICLR community. 
The main weakness noted by the reviewers pertained to presentation and writing. Many of those were resolved by a thorough rebuttal and during the discussion phase. Additionally, reviewer fM1a raised concerns about potential data leakage, which seem to have been resolved by the authors' response.\\n\\nBased on the reviews, rebuttal, and discussion, I recommend accepting the paper. The main reasons for this decision are:\\n* **Significant Contribution**: The paper addresses a fundamental problem in the field, and adds to our growing understanding of the capabilities and limitations of LLMs. The core findings of the paper (e.g. the mechanism by which the models perform search over graphs) are, to the best of my and the reviewers' knowledge, novel. \\n* **Thorough Experimentation**: The experiments are well-designed and provide strong evidence for the paper\\u2019s claims.\\n* **Clear Presentation**: The paper is well-written and clearly presents its findings, making it accessible to a broad audience.\\n\\nOverall, I think this paper would make a good addition to ICLR. I anticipate it will garner significant interest and will be heavily discussed.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided a detailed rebuttal addressing the reviewers\\u2019 concerns. They clarified the mechanistic interpretability section and expanded the discussion on potential solutions to the challenges identified. The authors\\u2019 responses were well-received by the reviewers, who acknowledged the improvements and maintained their positive evaluations.\\n\\nThe discussion among reviewers was constructive, focusing on the paper\\u2019s contributions and the importance of the problem it addresses. There was a consensus that the paper provides valuable insights into the limitations of transformers and offers a solid foundation for future research in this area. 
The reviewers agreed that the paper\\u2019s strengths outweigh the minor issues identified.\\n\\nAfter the discussion phase, only one reviewer (NMwf) had a below-acceptance score. However, that reviewer stopped engaging after posting their review and did not react to the authors' response to their review (nor to my requests for discussion), which as far as I can tell addressed their main concerns, so I have downweighed that score in my decision.\"}", "{\"title\": \"Response to Reviewer (2/2)\", \"comment\": \"8. \\\"...It requires many forward passes (linear in the number of attention operations and in the number of input examples).\\\" (line 354) - Can the authors give details on the number of passes necessary? Furthermore, do the number of necessary passes depend (logarithmically) on the length of the search process for a given input example?\\n\\nThe number of forward passes is $L n^2 m F$ where $L$ is the number of layers, $n$ is the input length of the transformer, $m$ is the number of input examples, and $F$ is the number of perturbed features. We have added a footnote to this sentence in the revised submission for clarification. We also note that the method as presented is not practically applicable to very large models, but we do demonstrate its applicability and utility on our trained models.\\n\\nThe number of forward passes does not directly depend on the length of the search process. However, since we show that the minimum number of layers needed to perform search with lookahead $L$ is ~$\\\\text{log}_2(L)$, there could be an indirect relation if the number of layers was specifically selected for a particular target lookahead.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We thank the reviewer for their thoughtful and helpful feedback. 
We would greatly appreciate if the reviewers could inform us whether the revisions are sufficient or if there are any additional concerns, so that we may further iterate during the revision period.\\n\\n1. \\u201cMy primary concern of this paper stems from the broader implication of the authors findings. Several prior works have found that large-language models can implement certain graph algorithms [1][2], including graph connectivity, and that this type of algorithmic reasoning can be improved with appropriate adaptations of chain-of-thought [3]. It is unclear whether the authors\\u2019 findings contradict, confirm, or offer more nuanced insights to prior works. While the paper does a fine job surveying relevant works in mechanistic interpretability, it is somewhat lacking when situating itself in the LLM planning/search and theoretical expressivity literature. Aside from the aforementioned works, several works have directly studied whether LLM can internalize search (in the form of MCTS) [4] and explore in-context [5]. The lack of a theoretical analysis, or a proper discussion of them make understanding the authors' contribution challenging.\\u201d\\n\\nWe thank you for raising this concern. We agree that the paper would benefit from additional related work on the expressivity of transformers as well as prompting-based approaches for search. As such, we have added several sentences to the revised Related Work section and included numerous additional references, including those suggested by the reviewer (thank you!). Notably, we include a discussion of whether transformers can express search algorithms versus whether they can learn those algorithms from data.\\n\\nTo address each of the reviewer\\u2019s specific points: [1] and [2] show that transformers are able to perform some graph reasoning, but their abilities are certainly imperfect, and the graph sizes they consider are much smaller than those that we generate. 
Additionally, the question that we aim to answer is whether better training and/or additional scale can help further improve transformer\\u2019s graph reasoning abilities. While [4] demonstrates that transformers can approximate classical search algorithms, they note that there exists a gap due to approximation, and they do not test whether this gap would be narrowed with further training/scale. [5] found that GPT-4 is only able to engage in exploration when given an \\u201cexternally summarized interaction history\\u201d and that \\u201call other configurations did not result in robust exploratory behavior, including those with chain-of-thought reasoning but unsummarized history.\\u201d [3] does indeed show that transformers, equipped with the ability to generate intermediate steps (such as in chain-of-thought), are sufficiently expressive to simulate any Turing machine. However, the expressiveness of a task is certainly not the same as its learnability by a transformer, and our empirical study aims to better understand the learnability of the graph search task by transformers.\\n\\n[1] NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes, https://arxiv.org/abs/2312.14890\\n\\n[2] IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations, https://arxiv.org/abs/2404.01266\\n\\n[3] The Expressive Power of Transformers with Chain of Thought, https://arxiv.org/abs/2310.07923\\n\\n[4] Amortized Planning with Large-Scale Transformers: A Case Study on Chess, https://arxiv.org/abs/2402.04494\\n\\n[5] Can large language models explore in-context?, https://arxiv.org/abs/2403.15371\\n\\n\\n2. 
\\u201cWhile the strategy that strengthens the LLM's search ability in the means of data augmentation is nice, it may not directly translate to practical guidance due to the synthetic nature of the task setup.\\u201d\\n\\nWe agree that the current results do not necessarily lend themselves to practical guidance, and this was not a goal of our work, though it is an interesting direction for future work. We do experiment with a natural language proof search version of the task in Section 3.1.2, where the length of each sentence can vary. Additional work is needed to further translate the sentences into more naturalistic forms in order to hopefully demonstrate promise in more practical applications.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We thank the reviewer for their thoughtful and helpful feedback. We would greatly appreciate if the reviewers could inform us whether the revisions are sufficient or if there are any additional concerns, so that we may further iterate during the revision period.\\n\\n1. \\u201cThe authors should clarify the connection between their empirical results and the statements made, as well as provide more intuition behind their hypotheses. For example, in line 51, the authors state, `We demonstrate experimentally that transformers can indeed be taught to search, but only under fairly restrictive conditions on the training distribution.\\u2019 However, Figure 3 does not fully support this claim.\\u201d\\n\\nWe agree with the reviewer that the clarity of the paper can be improved. On this specific comment, we rephrased the claim to more accurately convey what is shown in the results. Line 51 has been rephrased to\\n > We demonstrate experimentally that transformers can indeed be taught to search, when given the right training distribution.\\n\\nWe likewise modified all similar claims made elsewhere in the paper, such as in the abstract. 
We make this point to contrast with recent work showing that transformers are not able to learn to search [1,2].\\n\\nTo better support this rephrased claim, we add additional results to Figure 3, where we train and test models on an additional \\u201cstar graph\\u201d distribution. We added a paragraph describing this distribution in Section 3, and updated the text in Figure 3 and Section 3.1.1. accordingly.\\n\\n[1] Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck: On the Paradox of Learning to Reason from Data. IJCAI 2023.\\n\\n[2] Gregor Bachmann, Vaishnavh Nagarajan: The Pitfalls of Next-Token Prediction. ICML 2024.\\n\\n\\n2. \\u201cIn line 359, the authors mention, \\u2018We noticed a pattern and formed a hypothesis about the algorithm the model has acquired to solve the search problem.\\u2019 However, the pattern observed and its connection to the proposed hypothesis remain unclear and should be elaborated upon\\u2026 And in line 358, the authors mention \\u2018a number of input examples\\u2019 without specifying the exact number.\\u201d\\n\\nWe removed this opaque sentence. It was only meant to briefly describe the exploratory analysis that we conducted before forming our hypothesis. The more rigorous experiments that test this hypothesis are described in the next paragraph, where we added additional details, such as the fact that we performed our analysis on a total of 2000 inputs from the naive and balanced distributions (100 for each lookahead).\\n\\n\\n3. \\u201c...In line 337, the phrase \\u2018path of explainable attention operations\\u2019 is used\\u2014was this path inspected manually?\\u201d\\n\\nWe agree the current description is unclear. The explainable attention operations are computed automatically. To improve clarity, we rewrite this paragraph:\\n > Starting from the first layer, let $t_k$ be the token at position $k$ of the input. 
We say each input vector ``_explainably contains_'' information about the token value $t_k$ and position $k$. Next, we consider the attention operations in the first layer. Suppose an attention operation copies source token $i$ into target token $j$, and depends on the source token embedding containing features $f^S_1,\\\\ldots,f^S_u$ and depends on the target token embedding containing features $f^T_1,\\\\ldots,f^T_v$ to perform this operation (as computed in Step 4). We say this attention operation is _explainable_ if the embedding of token $i$ explainably contains all features $f^S_1,\\\\ldots,f^S_u$, and the embedding of token $j$ explainably contains all features $f^T_1,\\\\ldots,f^T_v$. If the attention operation is explainable, we say the output embedding of the target token $j$ explainably contains the union of the features: $f^S_1,\\\\ldots,f^S_u,f^T_1,\\\\ldots,f^T_v$. We repeat this for each successive layer, computing all explainable attention operations throughout the model. Pseudocode for this procedure is shown in Algorithm 1.\\n\\nWe also add an algorithm in the Appendix with pseudocode describing how we compute the set of explainable attention operations.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We thank the reviewer for providing additional feedback, and we welcome further discussion so that we can continue to further improve the submission.\\n\\n1. On the point on the encoder-only architecture and simplified positional embeddings:\\n\\nWe agree with the reviewer that contemporary LLMs are largely implemented with decoder-only transformers and dense positional embeddings. As such, we have re-run our scaling experiments in Section 5 with decoder-only transformers as well as learnable token embedding and rotary positional embeddings (summed rather than concatenated). We add these results to Section A.5 (see Figures 11 and 12) in the new revision. 
We find that the causal mask and rotary positional embeddings do not yield a significant difference in the model's scaling behavior on the search task. \\n\\n2. On elaborating on the broader implications and potential future applications of the proposed mechanistic interpretability analysis:\\n\\nThe conclusion contains some sentences discussing the broader implications, but we have the following to elaborate further:\\n > Though additional work is welcome to improve the scalability of our analysis to larger models, our analysis can provide insights on smaller models that can be tested separately in larger models.\\n\\n3. On the point about the effect of token perturbations in the mechanistic interpretability analysis:\\n\\nWe do not make the assumption that token perturbations only affect subsequent layers. To clarify, we start our analysis with the first layer, performing a forward pass (only up to the first attention block) on both the original and perturbed inputs. Then for each element in the attention matrix, we compute the product of the original key vector with the corresponding _perturbed_ query vector (and similarly, we compute the dot product of the perturbed key vector with the corresponding original query vector). By observing the change in the perturbed dot products relative to the original dot product, we characterize the dependence of each attention operation on the input features. Once this is done for the first layer, we move onto the second layer and repeat the analysis, and so we have already obtained a mechanistic description of the first layer's behavior without any assumptions on the effect of the perturbations on subsequent layers.\"}", "{\"title\": \"Reminder to Reviewer\", \"comment\": \"Dear Reviewer NMwf,\\n\\nWe are sending a reminder that the rebuttal period is ending soon. 
If you have any additional feedback, we would really appreciate it if you would provide it as soon as convenient.\\n\\nSincerely,\\nSubmission11660 Authors\"}", "{\"comment\": \"I thank the authors for their detailed responses. I believe this is a strong paper, hence I am increasing my score to 8.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer (1/2)\", \"comment\": \"We thank the reviewer for their thoughtful and helpful feedback. We would greatly appreciate it if the reviewers could inform us whether the revisions are sufficient or if there are any additional concerns, so that we may further iterate during the revision period.\\n\\n1. As the experiments require training on higher sequence length, how are the samples in training data distribution decided?\\n\\nWhen training on the balanced distribution, we first sample the desired backtrack distance uniformly at random from {1,...,B_max} where B_max is the largest possible backtrack for any graph that can fit in the transformer\\u2019s input. Next, we generate a graph with the selected backtrack, and repeat until we have generated a full batch (this is analogous to how we generate examples with uniform lookahead in Section 3/4). Thus, we generate graphs with a wide variety of sizes and backtrack distances. Please refer to Appendix Section A.7 for further details on the precise generative process.\\n\\n\\n2. How many steps in DFS traces are necessary for the model to learn?\\n\\nIn our experiments, we show examples with DFS traces of many different lengths. This is required if using standard padding since the transformer would not be able to generalize to DFS traces that are longer than those seen during training. However, if you use random padding, the model can be taught to perform DFS using only examples with shorter traces. Though further experimentation would be needed to explore this more fully.\\n\\n\\n3. 
If the authors had provided the same 'K' padding tokens to the experiments in section 4, would the models generalize better?\\n\\nWe don\\u2019t think so, since the experiments in Section 4 were on a task that\\u2019s more akin to \\u201cdirect prompting.\\u201d In this task the model is only given the start vertex and asked to predict the next vertex on the path to the goal vertex. As such, the model is required to perform the full search in a single forward pass. Padding is unnecessary since the graph edges already appear in the same input positions across examples in the Section 4 experiments.\\n\\n4. Do the trained models generalize to extremely sparse graphs? Furthermore, in cases where the graph contains disconnected components, what will the model output be for start and goal vertices not in the same component?\\n\\nWe only generate connected graphs. That said, we do generate connected graphs with minimal edges (such as those with maximal lookahead), and we find that if the model is able to reach 100% training accuracy, it can correctly perform search on these minimal connected graphs. However, we have not experimented with graphs with multiple components, and we agree this would be interesting to explore.\\n\\nBut if a model trained on connected graphs were given a disconnected graph where the goal is unreachable, then we suspect the model would randomly predict one of the child vertices of the start vertex, since this heuristic would work reasonably on the training examples.\\n\\n\\n5. Values of $\\\\alpha$, $\\\\kappa_1$ and $\\\\kappa_2$ in section 4: How are these values decided in experiments?\\n\\nThese three parameters determine the sensitivity of the mechanistic interpretability analysis, where if they are set loosely, the analysis will find that many more attention operations are \\u201cimportant\\u201d. But this increases the computational cost of the analysis. 
If the parameters are set too strictly, the analysis may miss some important attention operations and fail to reconstruct the computation graph. Thus, there is a tradeoff between sensitivity and computational cost, and we selected the values to be loose enough to identify the path-merging algorithm for most inputs, but not much looser. We also demonstrate the analysis is highly specific as we apply it to an untrained transformer model (random weights) in Figure 6.\\n\\n\\n6. \\\"the log attention weight of each important operation in the last layer.\\\" (line 303) - What does log attention weight mean? How do you define important operation?\\n\\nThis is the attention weight _before_ the softmax operation. An attention operation is defined to be _important_ if perturbing the weight causes a sufficiently large change in the output logits. We describe how we compute this in Steps 2 and 3 of Section 4.1.\"}", "{\"title\": \"Response to the rebuttal\", \"comment\": \"I thank the authors and Reviewer wdgg for their comprehensive response. Since the settings align with common practices in transformer theoretical analysis, and given the additional experiments provided by the authors, I have raised my score accordingly. I strongly recommend including more detailed descriptions of the supplementary experiments on decoder-only models to enhance the paper's broader impact.\"}", "{\"comment\": \"I have read the authors' rebuttals to other reviewers, and I believe the additional experiments have further strengthened this paper. I have adjusted my score accordingly.\"}", "{\"title\": \"General Response to Reviewers\", \"comment\": \"We sincerely thank all reviewers for their thoughtful feedback. We have taken your comments into careful consideration and worked to improve on the paper. 
We have improved the clarity throughout the paper and tightened many of the claims.\\n\\nAdditionally, we added more complete experimental results, notably in Section 6, where we experiment with in-context (i.e., step-by-step) methods for solving the task, akin to chain-of-thought. We added results on scaling experiments for the depth-first search task (Section 6.1; Figure 14). We also experimented with selection-inference \\u201cprompting,\\u201d where each step of the search is decomposed into two subtasks: (1) select a previously-visited vertex that has unvisited child vertices, and (2) from the current vertex, predict an unvisited child vertex. We added scaling experiment results for this selection-inference task (Section 6.2; Figures 16 and 17). We found that, while depth-first search and selection-inference is easier to learn than direct prompting, the transformer still struggles on larger input graphs, and we find that increasing model scale does not mitigate this difficulty.\"}" ] }
9c96mGtQVR
Verifying Properties of Binary Neural Networks Using Sparse Polynomial Optimization
[ "Jianting Yang", "Srecko Durasinovic", "Jean B. Lasserre", "Victor Magron", "Jun Zhao" ]
This paper explores methods for verifying the properties of Binary Neural Networks (BNNs), focusing on robustness against adversarial attacks. Despite their lower computational and memory needs, BNNs, like their full-precision counterparts, are also sensitive to input perturbations. Established methods for solving this problem are predominantly based on Satisfiability Modulo Theories and Mixed-Integer Linear Programming techniques, which are characterized by NP complexity and often face scalability issues. We introduce an alternative approach using Semidefinite Programming relaxations derived from sparse Polynomial Optimization. Our approach, compatible with continuous input space, not only mitigates numerical issues associated with floating-point calculations but also enhances verification scalability through the strategic use of tighter first-order semidefinite relaxations. We demonstrate the effectiveness of our method in verifying robustness against both $\|.\|_\infty$ and $\|.\|_2$-based adversarial attacks.
[ "Binary Neural Networks", "Sparse Polynomial Optimization", "Semidefinite Programming", "Robustness Verification" ]
Accept (Poster)
https://openreview.net/pdf?id=9c96mGtQVR
https://openreview.net/forum?id=9c96mGtQVR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxp6QGRevW", "yinRPPe3Dw", "rxjY2j1Qho", "q0L33B5sBT", "mt3s5ckd98", "mRLL4CcBfk", "htYcY5bFGC", "gBVvY5Kax3", "g3sar1VtXt", "fGYdbCg7ne", "aphatZ1arw", "aWpiZ1mTjS", "WnExrVSsRo", "UwRuw0uvlh", "SEy5cRVHey", "Pe3aoZ0jOs", "OKy2EfGiQb", "LTiQu6iJEb", "KIIQAfbkjn", "ENw9rnVvSb", "70gzqaF0Jm", "3pQkY77f6E" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment" ], "note_created": [ 1732340389075, 1732332545592, 1732332588526, 1732615399284, 1732337798277, 1732339854097, 1737523819238, 1733307613480, 1732338867857, 1730675980672, 1732332637258, 1729757962944, 1732622132547, 1733124077822, 1730698290026, 1732333295483, 1732333735636, 1732951250643, 1732336967503, 1730679898777, 1734901288333, 1732952205103 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_o79c" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_7uz5" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_o79c" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_7uz5" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_o79c" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_TkGW" ], [ 
"ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ], [ "ICLR.cc/2025/Conference/Submission7133/Reviewer_Rj7X" ], [ "ICLR.cc/2025/Conference/Submission7133/Area_Chair_aM7h" ], [ "ICLR.cc/2025/Conference/Submission7133/Authors" ] ], "structured_content_str": [ "{\"title\": \"Detailed Response to Reviewer o79c - Continuation 2\", \"comment\": [\"### **Questions:**\", \"We refer the reviewer to our previous responses, where we have provided detailed explanations addressing all the points raised. If there are any specific aspects still unclear, we would be happy to provide additional clarification.\", \"A dual reformulation of the polynomial optimization problems requires verifying a difficult positivity constraint of a general polynomial, which is known to be intractable. However, requiring the polynomial to be a sum of squares of other polynomials is tractable, as it can be done using Semidefinite Programming. Since the set of polynomials that are sum of squares represents a proper subset of nonnegative polynomials, this tractable positivity constraints induce a valid convex relaxation of the initial problem. Moreover, by increasing the relaxation order, and under some technical conditions, we can expect the lower bound from the relaxation to converge to the true optimal value, generally in finite amount of steps. This is the case because the subset of sum of squares polynomials is dense in the set of positive polynomials.\", \"Requiring the weights in BNNs to be in {$-1,0,1$} is the cornerstone of their computational efficiency. 
However, imposing the same constraint on the bias parameter $\\\\textbf{b}_j^{[i]}$ would make the bias almost irrelevant, as $\\\\textbf{b}_j^{[i]}$ would be able to influence the output of a node $\\\\textbf{x}^i_j$ only if $\\\\sum_{j=1}^{n_{i-1}} \\\\textbf{x}^{i-1}_j w^i_j =0$, where $w^i_j=\\\\textbf{W}^{[i]}_{(j,:)}$.\", \"We refer the reviewer to our response to the first question in the **Minor points** section.\"]}", "{\"title\": \"Common Reply - Part 3\", \"comment\": \"## __Conclusion:__\\n For more detailed responses regarding all specific comments, we refer the reviewers to the individual response sections. \\n\\n We greatly appreciate the constructive feedback and will incorporate these insights to enhance the clarity, completeness, and applicability of our work.\"}", "{\"title\": \"Common Reply - Part 2\", \"comment\": \"### - __MILP-related:__\\nReviewers 7uz5 and o79c raised concerns about the performance superiority of MILP methods in certain scenarios. MILP/MINP methods are exact (capable of proving both robustness and non-robustness), thus it is not surprising to observe them solving more instances within a relatively long time. Despite this, as displayed in Table 1 and Table 2, the gap in terms of the number of solved instances significantly reduces for more severe or $||.||_2$ attacks, favoring our method. \\n\\n\\nTo further demonstrate the versatility of our method, we provide additional illustrative experiments involving more complex data sets and larger-scale optimization problems, as suggested by reviewers TkGW and Rj7X. More precisely, we introduce $BNN_3: [3072,5000,800,10], w_s=55.97$%, achieving a test accuracy of $47.66$% on the CIFAR-10 data set.\\n \\n\\n**Table 1: Verifying robustness of BNN_3 on CIFAR-10, for an input region determined by $\\\\delta_{||.||_\\\\infty}=0.2/255$. Data were scaled to $[-1,1]^{3072}$, and a time limit of $3600 s$ was used. 
We present the results of the robustness verification queries on the first $40$ images from the test data set, among which $22$ were correctly classified.**\\n\\n\\n| Image Index |$\\\\tau_\\\\text{tighter, cs}^{1}$ - bound | $\\\\tau_\\\\text{tighter, cs}^{1}$ - $t\\\\phantom{1}(s)$ | $\\\\tau_{\\\\text{Soft-MILP}}$ - bound | $\\\\tau_{\\\\text{Soft-MILP}}$ - $t\\\\phantom{1}(s)$ |\\n| --- | --- | --- | --- | --- |\\n1 | **20.46 (robust)** | 1179.55 |timeout | $>3600$ |\\n2 | **7.83 (robust)** | 571.08 |timeout | $>3600$ | \\n7 | -30.28 (unknown) | 380.10 |timeout |$>3600$ |\\n8 |-68.50 (unknown) | 1721.79 |feasible (not robust) |1.71 |\\n10 |-62.31 (unknown) | 1330.03 |feasible (not robust) |87.46 |\\n11 |-142.50 (unknown) |650.79 |feasible (not robust) |0.11 |\\n12 |-94.92 (unknown) |1115.18 |feasible (not robust) |0.071 |\\n14 |-103.36 (unknown) |493.80 |feasible (not robust) |0.067 |\\n15 |-71.09 (unknown) |2090.44 |feasible (not robust) |57.09 |\\n18 |-97.96 (unknown) |1469.67 |feasible (not robust) |0.08 |\\n19 |31.06 (robust) | 344.70 |infeasible (robust) |9.02 |\\n20 |-25.66 (unknown) |823.54 |timeout |$>3600$ |\\n21 |-90.54 (unknown) |670.94 |feasible (not robust) |0.59 |\\n24 |-29.08 (unknown) |629.50 |timeout |$>3600$ |\\n27 |-83.76 (unknown) |766.95 |feasible (not robust) |0.54 |\\n29 |-110.26 (unknown) |873.51 |feasible (not robust) |0.07 |\\n30 |-60.63 (unknown) |1407.68 |feasible (not robust) |1.39 |\\n31 |-29.51 (unknown) |1190.56 |timeout |$>3600$ |\\n33 |timeout |$>3600$ |timeout |$>3600$ |\\n34 |-73.15 (unknown) |569.46 |feasible (not robust) |0.51 |\\n35 | **36.62 (robust)** |$ 657.73 $ |timeout |$>3600$ |\\n40 |-52.82 (unknown) |650.79 |timeout |$>3600$ |\\n\\n\\n\\n \\nAs presented in the supplementary Table 1, our approach demonstrates comparable performance even on larger datasets, such as CIFAR-10, and for larger networks involving nearly 9000 neurons. 
\\n\\nSpecifically, $\\\\tau_{tighter, cs}^{1}$ proves the robustness of images 1, 2 and 35 at least 3x, 6x, and 5.5x faster, respectively. In contrast, the low quality of LP bounds prevents MILP from providing an answer within a 1-hour time limit. These additional experimental results confirm that our method keeps providing high-quality lower bounds even for large-scale problems, which is consistent with the results presented in Table 3 of our paper.\\n \\nHowever, for instances 8 and 10, our method is unable to provide an answer, while MILP can certify **non-robustness** efficiently. This difference arises because MILP methods do not solve the optimization problems to optimality but instead focus on determining the feasibility of an attack, which is inherently less demanding. \\n\\n\\nWe believe that incorporating our tighter SDP bounds within the MILP framework could enhance the ability of MILP methods to certify **robustness** in more complex cases. This represents a promising direction for future research. \\n\\nWe will include a more comprehensive set of large-scale experiments in the subsequent versions of our work. \\n\\nWe would also like to emphasize that we did not require branch-and-bound based MILP/MINP methods to compute *exact* optimal values of the involved optimization problems, as it would have resulted in many more timeouts, making this approach impractical. Rather, we required the MILP/MINP solver to stop as soon as a nonnegative bound could be obtained, while the SDP solver was required to compute $\\\\tau_{tighter,cs}^1$ exactly.\\n\\nTo conclude, our approach should be understood as complementary to the exact MILP/MINP approach, especially for providing robustness certificates (not for finding valid adversarial attacks) in cases where the increased complexity of exact computations prevents MILP/MINP from providing a valid answer. 
Since the availability of precise bounds is essential for robustness verification, we believe that our proposed tight and reliable bounds would yield promising results if efficiently coupled with MILP/MINP methods.\"}", "{\"comment\": \"Thank you for your detailed answer. I raised my score as some of the points were addressed, but I still think it would improve the paper by a lot if my points and also points raised by others are adequately incorporated into the paper. This includes improvements in (i) notation, (ii) visualization, and (iii) experiments.\\n\\nApart from my comments above, I give examples for each of these points here while re-reading the paper:\\n- (i) Notation: Line 128-129: The k-th row is indexed by $A_{(k\\\\colon,)}$ (did you mean $A_{(k,\\\\colon)}$?) Why is the i,j-th entry then $A_{i,j}$ instead of $A_{(i,j)}$? Also, I find $x^2$ confusing as it usually means \\\"x squared\\\" instead of a second vector. Maybe use $x_2$? The k-th entry would then be $x_{2(k)}$ (also not optimal, but consistent with the notation above).\\n- (ii) Visualization: I would really like to see a running example showcasing the computed bounds and relaxations in the two-dimensional output space and not just the cliques as in Fig. 1. I think this would really strengthen the otherwise rather dry Sec. 3 and 4. You might also want to look at how such visualizations were done in related work with non-trivial relaxation in standard NNV [1,2].\\n- (iii) Regarding question 1 of reviewer Rj7X: You mentioned that your approach might scale better than the others (although the quality of the bounds might suffer). Show it! This would strengthen your claim by a lot.\\n\\nOverall, I find the direction very promising. Best of luck with your submission!\\n\\n[1] Fatnassi, W., et al. 
\\\"BERN-NN-IBF: Enhancing Neural Network Bound Propagation Through Implicit Bernstein Form and Optimized Tensor Operations.\\\" IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 43.11 (2024): 4334-4345.\\n\\n[2] Ladner, T., and Althoff, M. \\\"Automatic abstraction refinement in neural network verification using sensitivity analysis.\\\" Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control. 2023.\"}", "{\"title\": \"Detailed Response to Reviewer 7uz5 - Continuation\", \"comment\": \"- MILP/MINP methods being exact, it does allow them to prove both robustness and non-robustness. Consequently, it is not surprising that they solve more verification tasks within a sufficiently large time period. However, as shown in Table 1 and Table 2, the gap in the number of solved instances significantly reduces under more severe or $||.||_2$ attacks, where the strengths of our method become more apparent.\\\\\\nIt is also important to note that we did not require branch-and-bound-based MILP/MINP methods to compute *exact* optimal values of the optimization problems. Doing so would have led to significantly more timeouts, rendering this approach impractical for larger or more complex networks. Instead, we allowed the MILP/MINP solvers to terminate as soon as a nonnegative bound was obtained, while our SDP solver solved the relaxed optimization problems to optimality.\\\\\\nLastly, the introduction of the Soft-MILP approach was not intended as a claim of novelty. Rather, it was designed to highlight the intrinsic limitations and unreliability of MILP bounds when applied to challenging combinatorial and ill-conditioned problems. \\n\\n- We thank the reviewer for the suggestion. Our primary focus was to compare our method with other optimization-based BNN verification techniques. 
Additionally, the network studied in (Amir et al., 2021) consists of only 300 hidden neurons, and it is unlikely that their approach can scale effectively to the larger networks we considered. Moreover, efficient SAT methods, such as those proposed in (Jia \\\\& Rinard, 2020a), rely heavily on input space quantization, restricting their applicability to the framework we considered.\\n\\n- On Line 397, we described the hardware used for our experiments and would like to clarify that Gurobi was already utilizing its default multi-threading configuration. While we acknowledge that fine-tuning the number of threads could potentially enhance performance, our primary focus was to ensure a fair comparison across different optimization methods, rather than adjusting solver-specific settings. \\n\\n- Instances presented in Table 3 in Appendix 3 were generated randomly. From our experience, randomly generated instances are unlikely to exhibit robustness. Despite this, we can still use these instances to obtain meaningful information regarding both the quality of the bound and the solving time. Naturally, an algorithm that can compute tighter bounds can verify more instances as well. This is further demonstrated in Table 1, where our improved SDP bounds always outperform the LP bounds in terms of the number of certifiably robust cases. \\n\\n### **Questions:**\\n\\n- On Line 421 we stated: \\\"The runtime in parentheses refers to\\nthe average runtime over the instances that our method verified successfully\\\". For instance, when $\\\\delta_{||.||_\\\\infty}=1.50$, our method successfully verifies 6 instances with an average time of $26.99 s$ , while for the MILP solver, the average run time over those 6 instances is $191.95 s$.\\n\\n- We would like to emphasize that the importance of bound tightness should not be understated, as MILP methods are also inherently dependent on bound tightness. 
Specifically, branch-and-bound based MILP approaches relax the discrete constraints $x_i \\\\in \\\\{0,1\\\\}$ into linear constraints $x_i \\\\in [0,1]$. In our paper, we demonstrate, both theoretically and experimentally, the limitations of these linear bounds, such as those used by MILP solvers like Gurobi, in the context of BNN robustness verification. For large-scale problems, *tight bounds are critical*, as overly conservative LP relaxations can result in an excessively large number of branching steps. \\n\\nOne of the key suggestions of our paper, as pointed out by the reviewer Rj7X, is to explore replacing these LP bounds with tighter SDP bounds. Our experimental results indicate that developing SDP-based branch-and-bound solvers could lead to significant progress when it comes to addressing challenging optimization problems, such as the ones studied here.\"}", "{\"title\": \"Detailed Response to Reviewer o79c - Continuation 1\", \"comment\": [\"We would like to point out that our relaxation-based method yields optimization problems with a number of constraints polynomial in the number of (clique) variables, which gives us confidence that for larger-scale problems, it would demonstrate significantly better efficiency compared to existing algorithms such as exact SAT/SMT-based methods, or even MILP, which we found to be up to 50x slower than our method in certifying robustness against some severe cases. In fact, due to the computational complexity of MILP, there is no available polynomial-time algorithm for solving MILP problems (unless P=NP). However, the SDP relaxation problem is essentially a *convex* optimization problem with a polynomial number of variables and constraints.\", \"Figure 1 is already closely tied to our running example, *Example 2.1*, where we present the SDP optimization problem derived from a simple network structure with 2 hidden layers, each containing 2 nodes. 
Figure 1 revisits *Example 2.1* while introducing a key additional concept: correlative sparsity. Finally, in (20), immediately following Figure 1, we integrate sparsity, constraint encoding, and tautologies to provide a concrete example of the optimization problems our method addresses.\", \"Although the redundant constraints do not appear useful from the primal perspective, as they do not influence the feasible set, we have proven (see Corollary A.3.1) that their\", \"presence results in a larger quadratic module from the dual perspective. This larger quadratic\", \"module encapsulates more polynomials and enables better SOS decompositions, which in turn\", \"results in improved relaxation bounds.\", \"We refer the reviewer to Line 151, where we stated: \\\"Let $L \\\\geq$ be the number of hidden layers of a *classifying* BNN...\\\". Moreover, Remark 2.1 focuses on adversarial attacks which specifically concern label-altering in classification tasks.\", \"We aimed to make our related works section as comprehensive as possible, particularly with regard to BNNs. Some methods for standard neural network verification were already included, especially those relying on Semidefinite Programming techniques (see the paragraph on SDP-based verification methods). However, as suggested, we will incorporate additional relevant references to better situate our work within the broader context of neural network verification research.\", \"In Figure 2, the explanation for x-axis is already given on Line 433: \\\"Each subplot x-axis\", \"represents *image indices* sorted in the descending order of $\\\\tau_{tighter, cs}^{1}$ values\\\". That is why the red line appears smoother than the other two lines. 
Also, since those indices would appear in a different order for each sub-plot, we believe that labeling them would make the interpretation even more difficult while occupying more space.\", \"We included a zip file in our submission containing the data used in our experiments, along with detailed instructions on how to execute the relevant code, which can be found in *readme.md*. Additionally, we will provide an explicit link to the GitHub repository in the non-anonymous version of our paper.\", \"----\", \"We thank the reviewer for the suggestion. Indeed, BNNs are widely used in edge devices for tasks such as object detection, image recognition, and decision-making in *resource-constrained environments*, like autonomous delivery drones, for example. Such drones might encounter adversarial environmental conditions, such as lighting variations or weather disturbances, and failing to robustly classify objects under these altered conditions could lead to a collision, endangering people or property. We will include these illustrations in the refined version of our work.\", \"We thank the reviewer for the suggestion. We will try to minimize the use of abbreviations while adhering to the page limit. However, some of the abbreviations that we employ frequently, like SDP, MILP, or BNN, are widely used as standalone terms, and we believe that spelling out their full forms could make the paper more cumbersome.\"]}
\\n\\nSpecifically, Reviewers 7uz5 and o79c highlighted the significance of addressing BNN robustness verification problems, while all Reviewers acknowledged the novelty of our approach, which is the *first SDP-relaxation-based* BNN verification framework that efficiently exploits the sparse structure of the network. Moreover, our proposed method has a strong theoretical foundation and has been supported experimentally, which has been emphasized by Reviewers TkGW, Rj7X and o79c. \\n\\nFurthermore, our method does not rely on input quantization and is capable of handling various types of adversarial attacks, such as $||\\\\cdot||_2$-norm attacks, which have been studied in the context of general DNN verification, but have never been discussed in the BNN verification literature. This distinctive feature in the domain of BNN verification was particularly recognized by Reviewers Rj7X, TkGW, and o79c.\\n\\nWe have also addressed the following important points raised by the Reviewers:\\n\\n- *Additional results:* Reviewers Rj7X, TkGW, and 7uz5 suggested incorporating additional experiments and comparisons to further substantiate our findings. In response, we included Table 8, containing new results on verification queries from the CIFAR-10 dataset. These results confirm our previous conclusions: exact methods, such as MILP, may be efficient for finding adversarial attacks but *cannot provide certified robustness guarantees* within a relatively short time limit due to their reliance on extremely coarse LP bounds. Similarly, other exact methods, such as the current SAT-based approaches, are highly dependent on heuristic simplifications introduced by input discretization and assumptions of attack linearity.\\n- *Continuous input:* In *Detailed Response to Reviewer 7uz5 - Continuation 2*, we have provided stronger arguments supporting the importance of verifying BNNs on continuous input spaces.
Indeed, BNNs are typically trained and deployed on continuous data, underscoring the need for verification methods that are compatible with continuous input as well. Input quantization, while used as an artificial heuristic to enhance scalability, fails to account for all possible input configurations. \\n- *Other remarks:* The revised version of our paper contains many improvements in phrasing, notations, and overall clarity, which was recognized by the Reviewer o79c. \\n\\nWe believe that these revisions and additional explanations address your concerns and significantly enhance the clarity and completeness of our paper. Furthermore, we are confident that our work has the potential to convince the research community of the necessity of developing more efficient optimization solvers, thereby facilitating the resolution of many important industrial problems, including the robustness verification of BNNs.\\n\\nSincerely,\\n\\nThe Authors\", \"title\": \"Post-rebuttal Summary\"}", "{\"title\": \"Detailed Response to Reviewer o79c\", \"comment\": \"We are grateful to the reviewer for the nice summary and for acknowledging the strengths of our work. Below, we address all the raised weaknesses and questions in order in which they appear.\\n\\n### **Weaknesses:**\\n\\n**Main points:**\\n\\n- Our approach is designed for optimization problems where the objective function and constraint set can be encoded semi-algebraically (as intersections of polynomial equalities and inequalities). This is why we specifically focus on encoding the $\\\\text{sign}(\\\\cdot)$ function in this manner, as different encodings yield different levels of bound tightness. For example, the encoding presented in (7) might seem the most standard. However, in (17), we adopt an *alternative* encoding introduced in Chen et al. (2020), since the subgradient of the ReLU function is the $\\\\text{sign}(\\\\cdot)$ function itself. 
To clarify, (7) was not *transformed* into (17); instead, (17) provides an equivalent but more efficient encoding of (7). \\n\\n We would be happy to address any other reference-related confusions in order to enhance the clarity of our work.\\n\\n- Our encoding method allows $\\\\operatorname{sign}(0)$ to be either $-1$ or $+1$, which can be inferred from (6) and (7). Moreover, the operator $\\\\operatorname{nv}(\\\\cdot)$ defined after (12b) is used to handle the general cases (normalizations) where $w \\\\in \\\\{-1, 0, 1\\\\}$. If $w \\\\in \\\\{-1, 1\\\\}$, we always specify it before stating mathematical expressions. The assumptions in Sec. 3 that $w$ cannot be 0 are mainly for the purpose of facilitating statements and proofs; they do not influence the validity of our main results. In fact, in our main theorem (Theorem 4.1), we allow the elements of $W^{[1]}$ to be in $[-1,1]$, and the elements of other weight matrices to be in $\\\\{-1,0,1\\\\}$. We will standardize this notation in the main text.\\n\\n- It is true that *cliques* should be understood as subsets of neurons (decision variables in (19a)-(19d)), and their determination is dictated by the problem structure. More precisely, cliques correspond to subsets of $\\\\{1,\\\\dots,n\\\\}$ such that the two properties stated on Lines 259 and 260 hold. We can replace the term *cliques* with *subsets of variables* in subsequent versions of our work.\\\\\\nThe variable $d$ is also defined on Line 229, saying \\\"...defines a hierarchy of dense SDP relaxations whose size increases with $d$...\\\". We could transform this into \\\"...defines a hierarchy of dense SDP relaxations whose size increases with *relaxation order* $d$...\\\", in order to increase clarity. \\\\\\nWe will add the definition of the operator $\\\\text{nv}(\\\\cdot)$ to the notation section.\\n\\n- We have made efforts to reduce the theoretical complexity of the paper as much as possible.
The two main types of constraints were introduced as follows: the network-structure-induced constraints in (7) and the input region constraints on Line 195. These were first combined in (8). All other constraints used in the paper are variations (rather than transformations) of these two types, resulting in different optimization problems with varying levels of bound quality. If the reviewer has specific suggestions on improving the logical flow, we would be happy to incorporate them. \\n\\nOn Line 408, we stated: \\\"We assess the performance of our method (number of solved cases (cert.) and verification time $t(s)$) in verifying robustness of **the first 100 test instances**\\\". Due to the high computational cost of these experiments, re-executing them to derive statistical measures such as standard deviations would be impractical and of little value, as the robustness answers would remain unchanged. \\\\\\nAs highlighted in our response to Reviewer Rj7X, we compare our method to other optimization-based techniques from the literature, specifically MILP. Other methods are either not optimization-based, incompatible with continuous input spaces and $||.||_2$ attacks, or exhibit lower scalability compared to the two methods evaluated in our work. We aimed to focus on approaches most relevant to the context of our study. \\n\\n**Minor points:**\\n\\n- Since SDP solvers are implemented using floating-point arithmetic, there may be some error polynomial $e$ with extremely small coefficients such that the actual output of the solver is $f-\\\\lambda-\\\\sigma+e \\\\in Q(g)+I(h)$ instead of $f-\\\\lambda-\\\\sigma \\\\in Q(g)+I(h)$. Since $x \\\\in [-1,1]^n$, it is possible to derive a valid lower bound of the form $f \\\\geq \\\\lambda-e^{*}$ for some small $e^{*}$ that can be computed from the coefficients of $e$.\\n\\n For more details regarding the accuracy of SDP solving procedures, we refer the reader to our reference (Magron et al., 2015).
To the best of our knowledge, no such guarantees are available for the MILP/MINP methods.\"}", "{\"summary\": \"This paper introduces a novel approach to verify the robustness of BNN based on sparse polynomial optimization, specifically through Semidefinite programming relaxation. The authors first encode the original verification problem as a polynomial optimization problem (POP) and then apply SDP relaxation on it to obtain lower bounds for certifying the robustness. While the verification method is sound but incomplete, experimental results show that it provides a more precise bound than LP-based methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problem (verification of BNNs) is important.\", \"A novel method based on SDP to verify the robustness of BNN.\", \"Compared to LP-based methods, a more precise lower bound is obtained.\"], \"weaknesses\": [\"The motivation for this approach is not intuitive. The authors claim that existing methods are either incompatible with continuous input data or are limited to $L_\\\\infty$ input perturbations. However, expanding support to other types of input regions may not constitute a compelling contribution: i) $L_\\\\infty$ perturbations are the primary standard in the field, and ii) most BNN verifiers support continuous input spaces. Furthermore, for methods $M$ specifically designed for a discrete input region (i.e., input $x\\\\in \\\\{-1,1\\\\}^{n_0}$), one can also treat the second layer (assuming the original input region is continuous, as in the setting of this work) as the new input layer and then apply these methods $M$ to verify the robustness.\", \"The experimental results are not convincing. The MILP-based method, which is sound and complete, solves significantly more verification tasks than the proposed SDP-based approach.
While the authors introduce a \\\"soft\\\" encoding to avoid numerical errors, it is relatively straightforward, similar to methods in previous work.\", \"For $L_\\\\infty$ experiments, it would be beneficial to include comparisons with other SOTA methods, such as the SMT-based approach (Amir et al.).\", \"As Gurobi supports multi-threaded solving, presenting experimental results on multiple threads would also strengthen the evaluation.\", \"Table 3 gives more results on bound computation; however, it is not clear if these bounds lead to verified results (i.e., proving the robustness).\"], \"questions\": \"See the weakness raised in **Weaknesses**.\", \"other_minor_comments\": [\"I failed to find the explanation for what the data in parentheses in Column $\\\\tau_{tighter,cs}^1$ represent in Tables 1 and 2.\", \"MILP methods aim to verify whether a property holds or fails, offering a definitive answer rather than estimating bounds to certify robustness. Therefore, comparing the two approaches (SDP-based vs. MILP-based) should ideally focus on metrics related to verification success rates, computational efficiency, or scalability across various network sizes, rather than on bound tightness.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Common Reply - Part 1\", \"comment\": \"## __Strengths:__\\n\\nWe greatly appreciate the reviewers\\u2019 recognition of the key strengths of our work.
\\n\\nReviewers generally highlighted the relevance of research works attempting to develop scalable verification algorithms that leverage the unique structure of BNNs, avoiding the complexity of exact SAT/SMT/MILP methods.\\n\\n\\nBoth Reviewers o79c and TkGW noted the strong theoretical foundation and empirical evidence supporting our tight first-order SDP relaxation.\\n\\n\\nMoreover, reviewers also appreciated the novelty of our method, as we are the first to employ SDP relaxations for BNN verification, effectively exploiting the sparse structure of BNNs and providing reliable and tighter lower bounds capable of certifying robustness.\\n\\n\\nThe ability to handle $||\\\\cdot||_2$-attacks and continuous input data was also noted as a significant strength, particularly by Reviewers o79c, Rj7X and TkGW. \\n\\n## __Limitations:__\\n We would also like to express our gratitude to all the reviewers for their meaningful critiques and suggestions. Below, we address some common points and concerns that were raised.\\n### - __Additional experiments:__\\nAs pointed out by reviewers TkGW and Rj7X, we acknowledge the need for results on deeper networks with more layers and more complex data sets to better understand scalability. At the same time, we are not aware of any scalable methods specifically tailored for BNN verification that can successfully operate on continuous input spaces and handle both $||\\\\cdot||_\\\\infty$ and $||\\\\cdot||_2$ attacks. Our relaxation-based method yields optimization problems with a number of constraints polynomial in the number of (clique) variables, which gives us confidence that for larger-scale problems, it would demonstrate significantly better efficiency compared to existing algorithms such as exact SAT/SMT-based methods.
Moreover, we anticipate that while the quality of our bound may degrade with increased network depth, it would still outperform the LP bound used in the MILP approach, as supported by Theorem 4.1 and insights from Table 3 in Appendix A.3. \\n\\nFinally, we have included additional experiments (see **Table 1** in **Common Reply - Part 2**) from the CIFAR-10 data set, with another network of more than 9000 nodes, which is quite large compared to the networks handled by other methods relevant to our framework. We observe a similar type of behavior for our method: it can provide robustness guarantees for instances where MILP fails due to the sharply increased complexity. \\n\\nWe hope that our work can inspire further research into developing easily parallelizable and scalable SDP solvers, or BNN-verification-tailored branch-and-bound algorithms, as this would address the raised limitations of our POP-based approach.\\n\\n### - **Continuous input and $||\\\\cdot||_2$-robustness**\\n The relevance of $||\\\\cdot||_2$ attacks and continuous input spaces was questioned by Reviewer 7uz5. However, verifying $||\\\\cdot||_2$ robustness of neural networks has been widely studied in the standard neural network literature, since $||\\\\cdot||_2$ perturbations allow modeling attacks with coordinate inter-dependence, which is typical in many real-life cases (blurring, noise, etc.). The additional complexity introduced by these non-linearities prevents most currently available BNN verification tools from being applicable without significant modifications. \\n\\nVerifying BNNs on continuous input spaces is uniquely challenging due to the joint impact of binary weights, discontinuous activation functions, and increased sensitivity to floating-point errors induced by non-discretization. All these factors make the verification problem highly combinatorial and ill-conditioned, as noted on Line 222.
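To make the discretization pitfall concrete, here is a toy one-dimensional sketch (the threshold, radius, and quantization step are made-up numbers, not taken from the paper or from any cited tool): a quantized input grid can collapse the whole perturbation ball onto a single representative and miss an adversarial point that exists in the continuous ball.

```python
def sign(z):
    return 1 if z >= 0 else -1

def f(x):
    # Toy one-neuron "network": classifies by the sign of x - 0.3.
    return sign(x - 0.3)

x0, eps, s = 0.25, 0.06, 0.25  # center, perturbation radius, quantization step

# Continuous view: scan the ball [x0 - eps, x0 + eps] finely (an
# illustration of what a continuous-input verifier must account for).
samples = [x0 + (i / 1000) * eps for i in range(-1000, 1001)]
cont_flip = any(f(x) != f(x0) for x in samples)

# Quantized view: every point in the ball is first mapped to
# round(x / s) * s, in the spirit of input-quantization-based encodings.
grid = {round(x / s) * s for x in samples}
quant_flip = any(f(x) != f(x0) for x in grid)

print(cont_flip, quant_flip)  # True False: quantization hides the flip
```

Here the continuous ball contains a misclassified point at x = 0.31, yet every point in the ball quantizes to 0.25, so a grid-based check would wrongly report robustness at this discretization level.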
Input quantization, while improving scalability, sacrifices precision, often leading to false negatives or inconsistent results depending on the discretization level. In contrast, our method avoids artificial discretization and provides reliable bounds accounting for all possible input states.\"}", "{\"summary\": \"The paper considers the formal verification of binary neural networks in the classification setting.\\nFormal verification is necessary as (any) neural network is inherently prone to adversarial attacks.\\nIn the paper, the robustness of binary neural networks is verified using semidefinite programming derived from polynomial optimizations.\\nIn particular, polynomials up to order 2 are used as constraints and existing solvers (Mosek, Gurobi) are used to prove the robustness property by providing a lower bound.\\nThe experiments consider l_2 and l_inf robustness properties and show that, while achieving better bounds than simple linear programming, the approach also requires less verification time than related methods based on MILP/MINP.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"the considered problem is highly relevant\", \"deep theoretical analysis that goes beyond simple linear relaxations and rather designs the verification algorithm to exploit certain properties of the considered network architecture\", \"The theoretical results show strictly better bounds (Thm. 3.1 & 4.1), which proves the point of using the more complex verification algorithm.\", \"the approach allows verifying more general input perturbations. In particular, l_2, which was not considered before in binary neural network verification although it has been considered in standard neural networks.\"], \"weaknesses\": [\"Main points:\", \"It is hard to follow the theoretical analysis in the paper.
I tried to identify some of the key issues and give examples for each case:\", \"Crucial results from references are used without re-stating them or at least providing the exact equation number in the reference paper: E.g., Sec. 4: \\\"Firstly, notice that the semi-algebraic representation of the subgradient of the ReLU function derived in Chen et al. (2020) provides an alternative encoding of the sign(\\u00b7) function.\\\" This sentence is then used without further explanation to transform (7) into (17).\", \"Formal inconsistencies: E.g., it is unclear what the output of sign(0) is; (4) says that W can be in {-1,0,1} but later in Sec. 3 says it is in {-1,1}, which might influence how certain normalizations have to be considered, e.g., the bounds in (11).\", \"try to introduce all concepts before they are used. E.g., the term \\\"cliques\\\" is not properly introduced. It appears to be sets of neurons but I am unable to assess how they are determined. Also, the variable \\\"d\\\" is used throughout Sec. 2.3 but is only later introduced as \\\"relaxation order\\\". Similarly, the operation \\\"nv(A)\\\" is introduced after its first usage in (12), making (12) unable to be understood by the reader up to this point. It is explained in the paragraph after, but I think moving such things to the notation section would be more beneficial as they are used throughout the paper and a reader can then go back to the notation section to (re-)refer to those details.\", \"Steps to derive the constraints are rather quick and more explanations could help to follow along\", \"Additionally, the experiments show only single-dimensional results without saying over how many instances the results are averaged and do not provide a standard deviation where applicable. Also, it is unclear if the compared approaches are taken from the literature or arbitrarily constructed.
If the latter is true, it misses a comparison to related work altogether.\"], \"minor_points\": [\"It is unclear why the approach does not suffer from floating point inaccuracies. In fact, the term \\\"floating-point\\\" only appears twice in the paper (only in the abstract and the contribution section (Sec. 1.1)).\", \"the paper claims that the approach does not suffer from an exponential running time (Sec. 6) but it misses a thorough analysis of its runtime. The average speed up of 4.5 / 11.4 stated in the contribution section (Sec. 1.1) also does not support this claim.\", \"Make the example in Fig. 1 a running example by moving it further up in the paper and continuously refer to it when explaining all terms.\", \"Similarly, give an intuitive explanation of why adding those tautologies is necessary\", \"Only the classification setting is considered. This could be stated more clearly.\", \"the related works section could also include more research on the verification of standard neural networks (e.g., VNN-COMP) to better place this line of research in the broader context.\", \"Fig 2: misses a label and ticks of the x axis.\", \"no repeatability package is provided\"], \"further_points_that_could_help_improve_the_paper_but_did_not_directly_influence_the_score\": [\"give a real-life example where binary neural networks are used and verifying the robustness is necessary\", \"Avoid the usage of abbreviations. These usually do not save a lot of space but make reading more difficult as one is challenged to memorize all abbreviations in addition to the complexity of the paper.\"], \"questions\": [\"Can you provide the missing details about the evaluation I mentioned above?\", \"You mentioned in Sec. 2.3 that you use the (tractable) inner approximations of the set of polynomials that are nonnegative on S.
Why is it enough to only consider that subset?\", \"Why can the weights take values {-1,0,1} but the bias take any value in ℝ?\", \"Why does your approach not suffer from floating-point inaccuracies?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel approach for verifying properties of Binary Neural Networks (BNNs), particularly in the context of robustness against adversarial attacks. Traditional verification methods for BNNs, such as Satisfiability Modulo Theories (SMT) and Mixed-Integer Linear Programming (MILP), face scalability issues when applied to larger networks. To address these challenges, the authors propose using Semidefinite Programming (SDP) relaxations derived from sparse polynomial optimization.
This approach is designed to verify BNN robustness efficiently and accurately, overcoming numerical challenges inherent in MILP solvers. Experimental results indicate that the SDP-based method provides significant improvements in both robustness verification against adversarial attacks and computational efficiency, with an average speedup of 4.5 to 11.4 times compared to conventional methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Introduces a new SDP-based approach that enhances the scalability and precision of BNN verification.\", \"Efficiently handles continuous input spaces without requiring input quantization.\", \"Theoretical contributions, including tighter SDP relaxations, improve the accuracy of robustness bounds.\", \"Experimental validation across benchmarks highlights the method\\u2019s advantages in speed and robustness certification.\"], \"weaknesses\": \"The paper presents an interesting approach; however, it lacks sufficient model variety in its experiments. Only two models were used to demonstrate the proposed method, which limits the generalizability and persuasiveness of the results. For a more robust evaluation, it would be beneficial to include additional models, particularly from diverse architectures, to strengthen the findings and validate the method across a broader range of scenarios.\", \"questions\": \"Could the authors provide additional experimental results on a wider variety of models?
Including more model architectures would enhance the robustness of the conclusions and provide a stronger case for the method\\u2019s effectiveness across different settings.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"/\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Detailed Response to Reviewer TkGW\", \"comment\": \"We are grateful to the reviewer for the nice summary and for acknowledging the strengths of our work. Below, we address all the raised weaknesses and questions in the order in which they appear.\\n\\n### **Weaknesses:**\\n\\n- One of the primary objectives of our work was to highlight the potential of combining sparse SDP relaxations with redundant constraints (tautologies) to enhance the robustness verification process for binary neural networks (BNNs). \\\\\\nOur focus was primarily on feed-forward neural networks, as they often serve as the foundational building blocks for more complex architectures. \\nHowever, our approach is adaptable and can be extended to more advanced architectures, such as convolutional neural networks (CNNs), since operations like max-pooling and average pooling can be formulated in a semi-algebraic manner. Exploring these directions represents a promising avenue for future research. \\n\\n### **Questions:**\\n\\n- We refer the reviewer to the **Common Reply** section and our response to the previous point.\"}", "{\"title\": \"Detailed Response to Reviewer Rj7X\", \"comment\": \"We are grateful to the reviewer for the nice summary and for acknowledging the strengths of our work. Below, we address all the raised weaknesses and questions in the order in which they appear.\\n\\n### **Weaknesses:**\\n\\n- We refer the reviewer to the **Common Reply** section above, where we presented further results showcasing the performance of our method on the CIFAR-10 dataset.
As indicated by those results, our method continues to demonstrate notable efficiency compared to MILP in providing *certifiable robustness guarantees*.\\n\\n- There are several factors that influenced the design of our experimental setup:\\n\\n - *First*, our approach is *relaxation-based* and grounded in optimization, as opposed to the more commonly used SAT/SMT-based methods, which focus on *exact* verification. \\n - *Second*, to the best of our knowledge, there are currently no other scalable methods specifically tailored for BNN verification that effectively handle continuous input spaces while supporting both $||.||_\\\\infty$ and $||.||_2$ adversarial attacks.\\n\\n While the SAT/SMT approach proposed by (Amir *et al.*, 2021) supports continuous input spaces, its performance has only been demonstrated on very small networks with a total of 300 hidden neurons.\\n Similarly, MILP-based methods (Lazarus \\\\& Kochenderfer, 2022) depend on a specific encoding of the sign function, which, as we show in Remark 3.1, is less efficient than the MILP formulation used in our benchmarks.\\n\\n### **Questions:**\\n\\n- As we have argued in the **Common Reply** section and other answers above, our relaxation-based method yields optimization problems with a polynomial number of constraints in the number of (clique) variables, suggesting better efficiency for larger-scale problems compared to SAT/SMT-based methods. While bound quality may degrade with network depth, Theorem 4.1 and Table 3 (Appendix A.3) indicate that our bounds would still outperform the LP bounds used in MILP approaches.
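To illustrate the bound-tightness logic at a toy scale (a pure-Python sketch with made-up weights; the actual paper works with SDP relaxations of real BNNs, not this 3-input example): the exact discrete minimum of the output margin may certify robustness while a deliberately loose relaxation-style bound on the very same network cannot.

```python
import itertools

# Hypothetical 1-hidden-layer BNN: y(x) = sum_j v[j] * sign(W[j].x + b[j]).
W = [[1, -1, 1], [-1, -1, 1]]  # two hidden neurons, three binary inputs
b = [10.0, -10.0]              # large biases pin both activations
v = [1, -1]

def sign(z):
    return 1 if z >= 0 else -1

def forward(x):
    h = [sign(sum(w * xi for w, xi in zip(row, x)) + bj)
         for row, bj in zip(W, b)]
    return sum(vj * hj for vj, hj in zip(v, h))

# Exact minimum over the discrete cube {-1,1}^3, by brute force here
# (the quantity a complete MILP/SAT method would eventually certify).
exact_min = min(forward(x) for x in itertools.product([-1, 1], repeat=3))

# A deliberately naive relaxation: let each hidden activation range over
# [-1, 1] independently of the inputs, mimicking how loose relaxations
# decouple neurons and drag the certified lower bound down.
naive_bound = sum(min(vj, -vj) for vj in v)

print(exact_min, naive_bound)  # 2 -2
```

With these biases the margin is always 2, so exact enumeration certifies robustness (margin > 0), while the naive bound of -2 cannot; a tighter relaxation, the role the SDP bound plays in the discussion above, closes exactly this kind of gap.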
We hope this work inspires further research into scalable SDP solvers or BNN-specific branch-and-bound algorithms to address the limitations of our POP-based approach.\"}", "{\"title\": \"Detailed Response to Reviewer 7uz5 - Continuation 2\", \"comment\": \"We thank the reviewer for additional suggestions.\\n\\n\\n\\n1) Regarding the continuous input argument, let us consider the reference (Jia \\\\& Rinard, 2020a) from our paper. This work introduced a novel SAT solver that is currently among the most efficient and scalable BNN verification tools. These authors themselves argue at page 3:\\n\\n------\\n\\\"*The first layer of a BNN is usually applied on floating point inputs or fixed-point numbers. However, **encoding floating-point or integer arithmetic in SAT typically incurs high complexity**. To **simplify** the verification process, we quantize the inputs:*\\n$$ x^q = \\\\left\\\\lfloor \\\\frac{x}{s} \\\\right\\\\rceil \\\\cdot s, $$\\n*where $x \\\\in \\\\mathbb{R}^n_{[0,1]}$ is the real-valued input, $x^q$ is the quantized input to be fed into the BNN, and $s$ is the quantization step size which can be set to $s = \\\\frac{1}{255}$ for emulating 8-bit fixed-point values, or $2\\\\epsilon$ for adversarial training with a $\\\\ell_\\\\infty$ perturbation bound of $\\\\epsilon$. Since a robust network should be invariant to perturbations within $[x - \\\\epsilon, x + \\\\epsilon]$, we **expect** the quantization with $s = 2\\\\epsilon$ not to discard information useful for robust classification, which is confirmed by checking that a few choices of the quantization step do not noticeably affect test accuracy.*\\\"\\n_______________________________________\\n - As previously discussed, since BNNs are both trained and typically deployed on continuous data, it is natural to expect them \\n to be verified on continuous data as well. 
However, robustness verification algorithms that rely on input discretization fail to \\n account for all possible input configurations. In essence, input quantization serves only as a heuristic, as there is no \\n guarantee that the outcomes of robustness queries will remain consistent before and after quantization, as suggested by the \\n last sentence of the cited paragraph.\\n\\nMoreover, on page 4, the authors develop:\\n\\n__________________\\n\\\"*Recall that inputs are quantized by $x^q = \\\\left\\\\lfloor \\\\frac{x}{s} \\\\right\\\\rceil \\\\cdot s$, which enables us to only encode the integer interval of allowed $\\\\left\\\\lfloor \\\\frac{x}{s} \\\\right\\\\rceil$ values by merging the multiplier $s$ into $k^{\\\\text{BN}}$ of the first layer. Encoding the constraint $v = \\\\left\\\\lfloor \\\\frac{x}{s} \\\\right\\\\rceil \\\\in \\\\mathbb{Z} \\\\cap [a, b]$ is achieved by introducing $b - a$ auxiliary Boolean variables $\\\\{t_1, \\\\cdots, t_{b-a}\\\\}$ and assigning $v = a + \\\\sum_{i=1}^{b-a} t_i$.*\\\"\\n___________________\\n - This further illustrates that input quantization is an artificial step used to significantly enhance the verification procedure, which \\n would otherwise become rather impractical. Indeed, in non-discretized scenarios, the number of Boolean variables $\\\\{t_i\\\\}$ \\n would very quickly become *intractable*.\\n\\n2) Regarding the *robustness verification performance*, we argue that our approach, supported by the additional results in Table 8, demonstrates clear advantages in delivering reliable bounds that can guarantee robustness, particularly under challenging scenarios such as severe attacks or $||\\\\cdot||_2$-norm perturbations. While MILP-based methods can be effective in finding counterexamples to **prove non-robustness**, they struggle significantly if required to **certify robustness**. 
Similarly, exact SAT-based methods, though effective in discrete settings, become impractical when addressing continuous input spaces or nonlinear perturbations like \\n$||\\\\cdot||_2$-norm attacks.\\n\\n We believe that our approach should not be viewed as directly competing with existing methods but rather as complementary, \\n as it aims to address scenarios that are particularly challenging for traditional techniques, such as proving robustness in \\n continuous domains with complex perturbation models. Finally, we hope that our work has the potential to encourage the \\n research community to focus on developing more efficient and scalable SDP solvers, which would have many significant \\n repercussions in different domains.\"}", "{\"title\": \"Detailed Response to Reviewer 7uz5\", \"comment\": \"We are grateful to the reviewer for the nice summary and for acknowledging the strengths of our work. Below, we address all the raised weaknesses and questions in the order in which they appear.\\n\\n### **Weaknesses:**\\n\\n- We acknowledge that most works in the domain of BNN verification primarily focus on $||.||_\\\\infty$ perturbations. However, this should not overshadow the importance of addressing $||.||_2$ attacks. In fact, a significant body of impactful research has been conducted on $||.||_2$ robustness verification for full-precision (non-quantized) neural networks, including the works in [1, 2, 3] mentioned below. \\n\\n While $||.||_\\\\infty$ robustness emphasizes localized, worst-case scenarios, $||.||_2$ robustness remains essential for understanding a \\n model\\u2019s behaviour under more natural, distributed perturbations. These two metrics are complementary, and studying both provides a \\n more comprehensive evaluation of a model's reliability. 
That said, current BNN verification methods appear incapable of addressing \\n the additional complexity introduced by the non-linearities inherent in $||.||_2$ attacks.\\n\\n Moreover, we argue that there is a fundamental difference between continuous and discrete input spaces when it comes to robustness \\n verification. As highlighted in (Ivashchenko et al., 2023), input quantization is an artificial step designed to improve the efficiency of \\n the verification process. Since networks are generally trained to operate on continuous input spaces, quantization can reduce \\n the network's post-training accuracy. Verifying BNNs on continuous input spaces allows for the consideration of all possible input states, \\n thereby enhancing the certainty of verification. However, when applied to continuous input spaces, current BNN verification methods face two major challenges: scalability and the inability to provide reliable bounds. Continuous input spaces induce numerous\\n numerical errors and lead to ill-conditioned optimization problems, as noted in our paper on Line 438. Thus, adaptations proposed by \\n the reviewer, although possible, are not always straightforward and may lead to inefficiencies or loss of precision in verification.\\n \\n For instance, one of the difficulties is providing an exact description of the input region for method *M*. This requires an oracle capable \\n of precisely characterizing the image set of a single-layer BNN with continuous input space. Specifically, it must describe all elements \\n of the set {$y \\\\in $ {$-1,1$ }$^{n_1} | \\\\exists x \\\\in \\\\mathcal{B} \\\\subseteq [-1,1]^{n_0}, y = \\\\operatorname{sign}(Wx+b)$}, given a region \\n $\\\\mathcal{B}$ and some parameters $(W,b)$. 
This problem can be reformulated as a maximum feasible subsystem problem, which is a \\n well-known *NP-hard* problem, by optimizing a linear objective function over this set.\\n\\n Finally, let us provide an illustrative example showcasing the difficulty of handling continuous input spaces and the necessity of considering floating-point errors: Consider a BNN with input $\\\\mathbf{x}:=(x_1,x_2)\\\\in[0,255]^2$ and one hidden layer $\\\\mathbf{y} :=(y_1,y_2,y_3)$, where:\\n\\n $y_1=\\\\text{sign}(x_1+127.5)$, \\n\\n $y_2=\\\\text{sign}(x_2+127.5)$, \\n\\n $y_3=\\\\text{sign}(255+\\\\varepsilon-x_1-x_2)$, \\n\\n with $\\\\varepsilon>0$. Let \\n the objective function be $f:(\\\\mathbf{x},\\\\mathbf{y}) \\\\mapsto y_1+y_2+y_3+1.5$. \\n\\n If the input space of this BNN is discrete, meaning \\n $x_1$ and $x_2$ are integer-valued, then using the encoding method from (Narodytska et al., 2018) would lead to the following integer linear programming formulation: $\\\\min_{\\\\mathbf{x},\\\\mathbf{y}} f(y_1,y_2,y_3)=y_1+y_2+y_3+1.5$ \\n s.t. \\n $x_1 < C_1 \\\\implies y_1=-1, x_1 \\\\geq C_1 \\\\implies y_1=1$, \\n\\n $x_2< C_2 \\\\implies y_2=-1, x_2 \\\\geq C_2 \\\\implies y_2=1$, \\n\\n $-x_1-x_2< C_3 \\\\implies y_3=-1, -x_1-x_2\\\\geq C_3 \\\\implies y_3=1,$\\n\\n where $C_i$ refer to the values obtained by rounding some constants up to the nearest integer. \\n\\n Specifically, $C_1=C_2=128$ and $C_3=\\\\lceil -255-\\\\varepsilon \\\\rceil$. Thus, as long as $0<\\\\varepsilon<1$, the problem will remain \\n unchanged and will always give the true solution $f^{*}=0.5$. \\n\\n However, for a continuous input space, when $\\\\varepsilon$ is very \\n close to 0 (in practice when $\\\\varepsilon <10^{-6}$), the MILP solver may output a spurious result, say $f^{*}=-1.5$.\\n\\n - [1] Jeremy Cohen, Elan Rosenfeld, Zico Kolter. \\\"Certified Adversarial Robustness via Randomized Smoothing\\\". 
Proceedings of the \\n 36th International Conference on Machine Learning, PMLR 97:1310-1320, 2019.\\n - [2] Sahil Singla, Surbhi Singla, Soheil Feizi. \\\"Improved deterministic $||.||_2$ robustness on CIFAR-10 and CIFAR-100\\\". \\n International Conference on Learning Representations, 2022.\\n - [3] Xiaojun Xu, Linyi Li, Bo Li. \\\"LOT: Layer-wise Orthogonal Training on Improving $||.||_2$ Certified Robustness\\\". Conference on \\n Neural Information Processing Systems, 2022.\"}", "{\"summary\": \"The paper introduces a novel method for verifying the robustness of Binary Neural Networks (BNNs) against adversarial attacks using Semidefinite Programming (SDP) relaxations derived from Sparse Polynomial Optimization. This approach outperforms existing LP relaxation used in MILP-based verification methods without introducing much computation overhead. Specifically, the authors suggest that the SDP-based relaxations could be embedded within branch-and-bound algorithms in MILP solvers to improve bound estimation, accelerating the verification process without altering the core MILP framework. Their method achieves bounds that are up to 55% tighter than those obtained with traditional linear relaxations and demonstrates significant computational efficiency, especially under large input perturbations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. **Novel Use of SDP for BNN Verification**: Employing sparse Polynomial Optimization and SDP relaxations for BNN verification represents an innovative approach, potentially enhancing scalability and precision over existing MILP-based methods.\\n\\n2. **Significantly Improved Bounds**: By using SDP relaxations, the authors achieve up to 55% tighter bounds compared to traditional linear relaxations in MILP, a notable improvement in robustness certification accuracy.\\n\\n3. 
**Efficient Computation**: Experimental results demonstrate considerable speedups (up to 50x in severe attack scenarios), showing that the method is computationally efficient and less conservative in bounding compared to LP-based techniques, especially in high-dimensional BNNs.\\n\\n4. **Broad Norm Compatibility**: The method accommodates both \\u2225.\\u2225\\u221e and \\u2225.\\u22252 norms for adversarial attacks, expanding its applicability across different attack types, which is less common in the BNN verification field.\", \"weaknesses\": \"1. **Limited Dataset and Network Complexity**: The experiments are primarily on MNIST-based networks, which may not fully demonstrate the method\\u2019s performance on more complex datasets or larger architectures. Expanding the experimental validation could strengthen the generalizability of the approach.\\n\\n2. **Comparative Analysis with State-of-the-Art**: While comparisons with LP and MILP are made, a broader comparison with recent state-of-the-art BNN verification techniques could further validate the advantages and potential limitations of the SDP-based approach.\", \"questions\": \"1. Have you tested this method on more complex datasets or architectures beyond MNIST? If so, what were the results, and if not, do you anticipate any challenges?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes a new approach for verifying binary neural networks. It is based on a novel formulation of semidefinite programming (SDP), derived from sparse polynomial optimization. The angle to develop this new formulation is novel, and the paper also includes good insights on how it compares to linear programming and how to strengthen this formulation. The numerical results are also promising, although conducted on limited settings only. 
The AC believes the theoretical contribution is novel and sufficient for publication.\\n\\nThe AC shares the same opinion as Reviewer o79c that **more related work from the broad neural network verification community should be discussed in this paper** in Section 1.2 (VNN-COMP reports should be good starting points to find relevant references). The current scope of related work is too narrow.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers discussed the paper with the authors, and during the rebuttal period, most technical concerns were addressed. During the discussion between reviewers and the AC, we reached a consensus that this paper could be accepted.\"}", "{\"title\": \"Detailed Response to Reviewer o79c - Continuation 3\", \"comment\": \"We are very grateful to the reviewer for acknowledging the full potential of our work and for providing the additional feedback.\\n\\n(i) We thank the reviewer for the suggestions regarding the presentation aspect of our paper. We have tried to modify some notations to avoid ambiguity, notably concerning matrices and vectors. We have also included more precise references regarding the theoretical results upon which our approach is dependent. Some examples regarding the relevance of the considered problem were also provided in the introduction. We invite the reviewer to consult these, and other modifications, in the revised version of our paper (changes are highlighted in blue).\\n\\n(ii) We thank the reviewer for providing these additional references. However, creating similar illustrations, even for our toy BNN example, would be very challenging. The relaxations proposed in [1] and [2] do not involve any *lifting*, meaning that the feasible set remains within the same decision space as the original problem. 
In contrast, SDP relaxations involve introducing new variables, representing the moments of some unknown measure, and requiring these new variables to satisfy specific semidefinite positivity constraints. Consequently, the new feasible set is a spectrahedron in this higher-dimensional space. If one were to project this set into the original space, one would end up with the convex hull of the graph of the $\\\\operatorname{sign}(\\\\cdot)$ function, and would thus be unable to perceive any difference with respect to LP relaxations. \\nThat is why we opted for analytical proofs of the superiority of our relaxations, as stated in Theorem 4.1, and demonstrated experimentally in Table 3, for example. \\n\\nFinally, our running example is meant to illustrate the nature of the problems we solve. Example 2.1, Figure 1, and equation (20) allow the reader to understand how the network structure determines the sparsity pattern of the problem, and how this sparsity pattern in turn translates into matrices (decision variables) of smaller size, highlighting the efficiency of the proposed approach.\\n\\n(iii) Regarding performance evaluations of our method, this is something that has been repeatedly highlighted in many ways throughout the paper. Specifically, *Theorem 4.1*, *Table 3*, and the additional *Table 8* demonstrate, **both theoretically and experimentally**, that our bounds consistently outperform LP bounds, even in deeper networks (up to 6 layers) with a large number of hidden nodes (reaching as many as 9000). \\n\\nWe emphasize once again that no other method in the literature, capable of simultaneously addressing continuous input spaces and $||.||_2$ attacks, has been able to achieve this level of scalability.\\n\\nNotably, additional experiments conducted at the request of reviewers Rj7X and TkGW reveal that the MILP method, which relies on LP bounds, is only capable of proving the robustness of a single image within the 1h time limit. 
\\n\\nThus, MILP methods can be very useful for finding attacks (since the attack feasibility problem is much easier), but they cannot be expected to provide robustness certificates more often than our method. Similar reasoning applies to other SAT/SMT-based, exact methods, if one were to try to make them compatible with continuous input spaces and $||.||_2$ attacks (more details on this can be found in our latest reply to the reviewer 7uz5).\"}" ] }
9bwPESShgf
TranSpa: Towards Efficient Structured Sparse Training for Transformers
[ "Jinqi Xiao", "Miao Yin", "Cheng Yang", "Yang Sui", "Huy Phan", "Xiao Zang", "Wenqi Jia", "Hang Liu", "Zhao Zhang", "Jian Ren", "Bo Yuan" ]
Transformers have emerged as the backbone neural network architecture in today's AI applications. Due to their high complexity, sparsifying transformers, at both pre-training and fine-tuning stages, is very attractive for lowering the training and inference costs. In this paper, we propose TranSpa, an efficient structured sparse training approach for language and vision transformers. Unlike prior works focusing on individual building blocks, TranSpa fully considers the correlation between the weight matrices and their component rows/columns, and performs the coupled estimation and coupled sparsification. To achieve that, TranSpa introduces the use of new granularity when calibrating the importance of structural components in the transformer and removing the insignificant parts. Evaluations across different models, in both pre-training and fine-tuning scenarios, demonstrate the effectiveness of the proposed approach. TranSpa can bring $1.6\times$ size reduction with $0.6$ lower perplexity when training GPT-2 model from scratch. It also enables $1.6\times$ training speedup over the existing sparse pre-training method. For training sparse LLaMA-1B from scratch, our approach reduces GPU memory usage by 50\%, decreases training time by 21\%, and achieves a $1.6\times$ speedup in inference throughput while maintaining model performance. Experiments of applying TranSpa for fine-tuning tasks also show significant performance improvement with respect to model accuracy and pruning cost reduction.
[ "Sparse Training", "Transformer", "Efficient Inference", "Efficient Training" ]
https://openreview.net/pdf?id=9bwPESShgf
https://openreview.net/forum?id=9bwPESShgf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wcguQrGypy", "uygROvHLxN", "mr0uzGNsKp", "JndtSDXtac", "IS8HqNEfAH" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730210224630, 1731010595822, 1729866139710, 1730904273689, 1731686175295 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4743/Reviewer_EfcW" ], [ "ICLR.cc/2025/Conference/Submission4743/Reviewer_3b7F" ], [ "ICLR.cc/2025/Conference/Submission4743/Reviewer_ZjV6" ], [ "ICLR.cc/2025/Conference/Submission4743/Reviewer_mtkH" ], [ "ICLR.cc/2025/Conference/Submission4743/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose TranSpa, an efficient structure sparsification method accelerating both training and inference of transformers. TranSpa couples weight matrices, estimates their importance accordingly, and removes coupled row/col pairs in each coupled weight matrix. The proposed method is evaluated across different scenarios, including pre-training, fine-tuning and inference. Experimental results demonstrate that TranSpa effectively speeds up transformer training and inference while maintaining the performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This work proposes to consider the importance of weight matrices by couples and removes row/col pairs in coupled weight matrices, improving the performance.\", \"It is carefully written.\", \"Experimental results demonstrate a significant reduction in training time and memory usage.\"], \"weaknesses\": [\"The experimental section lacks some details. For example, Table 1 presents training times but omits FLOPs savings, while Table 2 provides FLOPs savings without showing time savings.\", \"The training times for several baselines, such as Monarch in Table 1, are missing, complicating the comparison of baseline performance and TranSpa.\", \"The estimation of weight importance is based on loss, as outlined in Eq. (6). 
It's unclear how importance evaluation and sparsification are conducted when there is no loss during inference.\\n\"], \"questions\": [\"Can you clarify how to evaluate the importance of weights and conduct sparsification during inference?\", \"In Table 1, why are some pre-training experiments conducted on 8 cards while others use 4 cards? This inconsistency makes it inconvenient to derive time savings.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes TranSpa, an efficient structured sparse training approach for transformers. Experiments on various models in pre-training and fine-tuning show significant improvements in training speed, accuracy, and cost reduction compared to existing methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"TranSpa proposed Coupled Estimation and coupled sparsification in Sec 3. With inspiration from tSVD, TranSpa implements structured pruning which can be easily translated into real acceleration.\\n\\nTranSpa brings weights reduction and speedup in training and inference, e.g., 1.6\\u00d7 size reduction and 0.6 lower perplexity for GPT-2, and 50% GPU memory usage reduction and 1.6\\u00d7 inference throughput speedup for LLaMA-1B. TranSpa also demonstrates better performance when scaling to llama-2-7b fine-tuning tasks.\", \"weaknesses\": [\"While the authors applied TranSpa on various architectures (GPT-2, Llama-1b, DeiT), the results are not entirely convincing:\", \"Table 1 compares TranSpa with the GPT-2 baseline. However, GPT-2's training epochs are twice as many, and the authors don't provide evidence that the TranSpa model has fully trained and converged with half the epochs. This weakens the credibility of the claimed training speedup.\", \"Table 2 lacks crucial comparisons with PEFT methods for LLM training. 
The LoRA series is missing data on memory and training time. Additionally, it's unclear why LoRA has the same number of parameters as the baseline.\", \"Table 3 shows that TranSpa significantly underperforms on small tasks like Winograd (69 \\u2192 60), while many studies have shown LoRA can achieve similar performance to Full-FT. This raises concerns about the pre-training results, which are typically considered more challenging than quick 1\\u20133 epoch SFT.\", \"Table 5 switches to the much smaller CIFAR-10 dataset, despite Table 3 showing comparisons on ImageNet. This inconsistency raises questions about the quality of the loss curve on ImageNet.\", \"The method of TranSpa is not complex; however, why and how it can preserve accuracy compared to full FT is not fully discussed in the paper.\"], \"questions\": \"see comments above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces TranSpa, an efficient structured sparsification method tailored for transformers used in both language and vision AI models. Unlike previous methods that focus on individual transformer components, TranSpa considers the correlations between weight matrices and their rows and columns, applying a coupled sparsification approach. By introducing a new granularity level, TranSpa selectively removes less significant parts of the transformer, optimizing its structure and reducing computational costs. 
Empirical results demonstrate that TranSpa achieves a 1.6x reduction in model size with minimal accuracy loss (0.6 perplexity increase) in GPT-2 training from scratch, and offers a 50% memory reduction and a 1.6x training speedup for sparse LLaMA-1B models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.Innovative Sparsification: By considering correlations within weight matrices, TranSpa\\u2019s coupled sparsification maintains essential model structure.\\n\\n2.Efficient Granularity: TranSpa\\u2019s new granularity method enables precise pruning of less critical parts, optimizing model efficiency.\\n\\n3.Resource Savings: TranSpa reduces GPU memory usage by 50% and speeds up training by 21% in large models like LLaMA-1B, addressing scalability.\\n\\n4.Versatile Application: Effective in both pre-training and fine-tuning stages, TranSpa adapts well across various training scenarios.\", \"weaknesses\": \"1.One limitation of this paper is that, while it emphasizes the correlation between weight matrices, it does not verify whether this correlation exists only between adjacent weight matrices. Further analysis is needed to confirm if non-adjacent matrices also exhibit correlations, as this could impact the effectiveness and generality of TranSpa\\u2019s sparsification approach.\\n\\n2.A limitation of this paper is the lack of time complexity analysis for key computations, such as compute v and compute $\\\\hat{I}$. 
Including these analyses would clarify the computational overhead introduced by TranSpa and provide a more comprehensive understanding of its efficiency.\", \"questions\": \"Please see the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a novel granularity, called \\\"coupled weight matrix\\\", to evaluate the importance of structural components within a model and apply structured sparsity based on component importance. The \\\"coupled weight matrix\\\" refers to a pair of weight matrices, such as $W_AW_B$. By removing the less important rows of $W_A$ and columns of $W_B$, the authors introduce sparsity into the model.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. A novel granularity is proposed to evaluate the importance of structural components, offering broader perspective on model optimization.\\n2. Experimental results appear promising, demonstrating that this approach can effectively accelerate training and inference by utilizing structured sparsity and achieve high performance.\", \"weaknesses\": \"1. In Eq. 5, the latter part $\\\\left (0.5 \\\\operatorname{erf} \\\\left ( Y_1 / \\\\sqrt{2} \\\\right ) \\\\odot X W_{in} \\\\right ) W_{out} \\\\neq 0.5 \\\\operatorname{erf} \\\\left ( Y_1 / \\\\sqrt{2} \\\\right ) \\\\odot X \\\\left (W_{in} W_{out} \\\\right )$. Consequently, $W_{in}$ and $W_{out}$ do not fit the definition of \\\"coupled weight matrix.\\\"\\n2. Most models incorporate position embedding in Multi-Head Attention. For example, the attention mechanism in LLaMA can be formulated as $\\\\operatorname{softmax}\\\\left(\\\\frac{X_Q \\\\boldsymbol{W}_i^Q ROPE \\\\left(X_K W_i^K\\\\right)^{\\\\top}}{\\\\sqrt{e}}\\\\right) X_V W_i^V$, where $ROPE$ represents a rotation matrix with an an angle determined by the relative positions of two tokens. 
Thus, $W^Q$ and $W^K$ do not constitute a \\\"coupled weight matrix.\\\" \\n3. In current models, such as LLaMA or Mistral, the definition of FFN is $FFN(X)=(XW_{up}\\\\odot silu(XW_{gate}))W_{down}$. Authors didn't discuss this common structure in the article and I do not observe a \\\"coupled weight matrix\\\" within this FFN.\", \"questions\": [\"I note that your experiments include LLaMA. How did you identify the \\\"coupled weight matrix\\\" within LLaMA's FFN?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
9bMZ29SPVx
A CLIP-Powered Framework for Robust and Generalizable Data Selection
[ "Suorong Yang", "Peng Ye", "Wanli Ouyang", "Dongzhan Zhou", "Furao Shen" ]
Large-scale datasets have been pivotal to the advancements of deep learning models in recent years, but training on such large datasets inevitably incurs substantial storage and computational overhead. Meanwhile, real-world datasets often contain redundant and noisy data, imposing a negative impact on training efficiency and model performance. Data selection has shown promise in identifying the most representative samples from the entire dataset, which aims to minimize the performance gap with reduced training costs. Existing works typically rely on single-modality information to assign importance scores for individual samples, which may lead to inaccurate assessments, especially when dealing with noisy or corrupted samples. To address this limitation, we propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection. Specifically, our framework consists of three key modules—dataset adaptation, sample scoring, and selection optimization—that together harness extensive pre-trained multimodal knowledge to comprehensively assess sample influence and optimize the selection results through multi-objective optimization. Extensive experiments demonstrate that our approach consistently outperforms existing state-of-the-art baselines on various benchmark datasets. Notably, our method effectively removes noisy or damaged samples from the dataset, enabling it to achieve even higher performance with less data. This indicates that it is not only a way to accelerate training but can also improve overall data quality. The implementation is available at https://github.com/Jackbrocp/clip-powered-data-selection.
[ "Data selection", "generalization", "multimodal" ]
Accept (Spotlight)
https://openreview.net/pdf?id=9bMZ29SPVx
https://openreview.net/forum?id=9bMZ29SPVx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zB0aj0QAlU", "yVEgjU4mjk", "x2bxkb9Gh8", "tGdZayEqaX", "qAxGufN2CM", "pjHRMo0LL3", "ocQKoBrx1j", "nwfZ5tyFuY", "n6RjLYlUe4", "mGH0hlrZVv", "hcAZd4FF3e", "dKnMTWFoYM", "cwbmsbJaY4", "a4MHZwudUG", "Z19bQBINO9", "WcsJhfRftn", "SU2cmDytpN", "NTuLD1iG4a", "Imhyh28vQL", "HOSqGoCKqR", "DCFpteC8IM", "Bi0UTLE3Gb", "7NHyB70Gp2", "5rAoGxlRGs", "5WlQAtBQDR", "599gXwpbvU", "1snXii2vye" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730639400746, 1732069863036, 1732261022851, 1732070775716, 1737523553182, 1732499781249, 1732074699541, 1730554881585, 1732281315696, 1732068776702, 1730215426373, 1732262274898, 1732280119662, 1732261398699, 1732538397975, 1732072707956, 1732069935791, 1734669340523, 1732452968236, 1732074309744, 1732072345154, 1730631269512, 1732261485830, 1732261352038, 1732588044719, 1732261127239, 1732499311747 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_QkGn" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_HyV4" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_vC5o" ], [ 
"ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_QkGn" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_vU6u" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_HyV4" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Area_Chair_BdJe" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_HyV4" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Reviewer_vU6u" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ], [ "ICLR.cc/2025/Conference/Submission3088/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a CLIP-based multimodal data selection framework that enhances the robustness and generalizability of data selection by leveraging both image and text information. Firstly, the paper trains an adapter to transfer pretrained knowledge to the target data. Then, the similarity between the image and text is used to calculate the sampling score for subset selection. The authors evaluate their results in various popularly datasets, obtaining state-of-the-art results in almost all of them.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The method performs more robustly compared to other baseline data selection methods in various popularly datasets.\\n2. The algorithm is easy to understand and to follow.\\n3. The writing of this paper is overall great.\", \"weaknesses\": \"Some details and motivation should be explained further.\\n1. The method requires training an adapter with data from the training dataset for data selection. 
If the noise ratio is high (such as 50%), wouldn't it be a better option not to use an adapter?\\n2. Does the Actual Selection Costs in Appendix G include the training time for the Adapter? Is the comparison fair?\\n3. From the ablation experiment, it can be seen that selection loss has a significant impact on the final result. So, what is the sensitivity of this parameter?\", \"questions\": \"The authors should take the above questions into consideration: the utilization of the adapter, the comparison, and the parameter.\\n1. The method requires training an adapter with data from the training dataset for data selection. If the noise ratio is high (such as 50%), wouldn't it be a better option not to use an adapter?\\n2. Does the Actual Selection Costs in Appendix G include the training time for the Adapter? Is the comparison fair?\\n3. From the ablation experiment, it can be seen that selection loss has a significant impact on the final result. So, what is the sensitivity of this parameter?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer vC5o (1/2)\", \"comment\": \"Dear Reviewer vC5o:\\n\\nThank you for providing insightful comments on our work. We appreciate your recognition of our work\\u2019s strengths and provide responses to address the comments as follows:\\n\\n- **Q1: Complexity: While effective, the Selection Optimization module\\u2019s complexity might hinder its application to extremely large datasets without computational resources.**\\n- **A1:** Thank you for your thoughtful comment. \\n\\n - Regarding computational complexity, as we have analyzed at the end of Section 3, this complexity is O(N), where N is the dataset size. The optimization module involves only numerical optimization without relying on deep models, ensuring high computational efficiency. 
For further details on computational costs, we kindly refer you to Table D-1 in Q3 and A3, where we provide the actual training costs for the selection optimization on large-scale ImageNet-1k.\\n\\n - Regarding parameter complexity, the module's parameter complexity is also O(N), which is negligible compared to deep model training. For instance, on ImageNet-1k with 1.2M samples, the parameter complexity of $d$ constitutes only **3.5%** of WRN-28-10 (36.5M), **5.5%** of ResNet-50 (23.5M), **1.5%** of ViT-B (86M), and **0.8%** of CLIP (ViT-B/32). Thus, the optimization module requires significantly fewer parameters than these widely used models, ensuring its scalability to large-scale datasets with modest computational resources.\\n\\n We hope this clarifies the efficiency of the selection optimization module and its feasibility for applications.\\n\\n- **Q2: Generality: The framework is primarily tested on vision-based datasets; extending it to text or mixed-modality datasets may reveal further limitations. Future work could focus on enhancing the framework\\u2019s versatility across other modalities.**\\n- **A2:** Thank you for pointing out the generality of our framework and its potential for text or mixed-modality datasets. In our current work, following the task settings from [1-3], we focus on image datasets for data selection and enhance robustness and generalizability by introducing the textual modality as complementary information, particularly for handling noisy or corrupted data.\\n\\n We agree that extending the framework to other modalities, such as text-only or mixed-modality datasets, is a promising direction. 
It is worth noting that research on data selection for text or mixed-modality datasets remains relatively limited compared to vision-based tasks, underscoring the potential impact of exploring these areas further. While such extensions may not be covered by this work, they represent an exciting avenue for future research, where enhancing the framework\\u2019s versatility across diverse modalities could unlock broader applications. We appreciate your suggestion and have incorporated this perspective into our discussion of future work.\\n\\n- **Q3: Could the authors provide more detail on the computational efficiency of the Selection Optimization module for very large-scale datasets?**\\n- **A3:** Thanks for raising this question. As analyzed in Section 3 (lines 292-301), the computational complexity of the selection optimization module is O(N), where N is the dataset size. The optimization process relies solely on numerical optimization and does not involve deep architectures, making it inherently efficient and scalable for large-scale datasets.\\n\\n To further address your concerns, we provide the actual training costs (h) for the selection optimization module, described in Section 3.3, on ImageNet-1k with a single V100 GPU across various selection ratios in the table below. The results demonstrate that the optimization process completes efficiently in just a few seconds, even for large-scale datasets.\\n\\n**Table D-1**: The actual training costs (h) for the selection optimization.\\n|Selection Ratio (%)|90|80|70|60|40|30|20|\\n|-|-|-|-|-|-|-|-|\\n|Tiny-ImageNet|0.003|0.004|0.004|0.004|0.004|0.004|0.004|\\n|ImageNet-1k|0.004|0.004|0.004|0.004|0.004|0.004|0.004|0.005|\"}", "{\"title\": \"Looking forward to the reply\", \"comment\": \"Dear reviewer QkGn:\\n\\nThanks so much again for your time and effort in reviewing our work. According to the comments and concerns, we have conducted the corresponding experiments and further discussed the related points. 
Besides, according to your comments, we have revised our description of experimental settings in Appendix G for clarification.\\n\\nAs the discussion period is about to close, may we know if our rebuttal addresses the concerns? If there are further concerns or questions, we are willing to address them. Thanks again for taking the time to review our work and provide insightful comments.\"}", "{\"title\": \"Response to Reviewer HyV4 (1/3)\", \"comment\": \"Dear Reviewer HyV4:\\n\\nThank you for providing a meticulous review and insightful feedback on our work. We appreciate your recognition of our work\\u2019s strengths. For the comments and questions, we provide our response as follows.\\n\\n- **Q1: Will using the imperfect data to fine-tune the adapter degrade the method's effectiveness? This needs to be discussed.**\\n- **A1:** Thank you for raising this insightful question. We would like to clarify that fine-tuning the adapter with imperfect data will **NOT** degrade the method's effectiveness. To validate this, we conducted a series of experiments:\\n\\n**1. Comparison with Clean vs. Imperfect Data:** \\n\\nWe compared the performance of fine-tuning the adapter using clean data (**Ours**) and imperfect data (**Ours***) on CIFAR-100 and Tiny-ImageNet. As shown in Table C-1, the results demonstrate that using imperfect data yields comparable noise proportions (introduced into the selected datasets) and model performance. Additionally, as shown in Table C-2, fine-tuning the adapter on corrupted data does not degrade the effectiveness, further showcasing the robustness of our method.\\n\\n**Table C-1**: Performance comparison between clean data and data with noisy labels for fine-tuning the adapter on CIFAR-100 and Tiny-ImageNet with ResNet-50. 
Noise proportion means the noise ratio in the selected datasets.\\n| Model | |CIFAR-100 | | Tiny-ImageNet | | \\n|-|-|-|-|-|-|\\n| | Selection Ratio (%) | 20 | 30 | 20 | 30 | \\n| Ours | Noise Proportion (%)| 0.24| 0.25 | 0.28 | 0.27|\\n| | Acc. (%) | 45.63| 58.65 | 25.98| 32.21|\\n| Ours*| Noise Proportion (%)| 0.25 | 0.32 | 0.26 | 0.23 |\\n| | Acc. (%) | 46.05 | 58.34 | 26.09 | 33.13 |\\n\\n**Table C-2**: Performance comparison between clean and corrupted data for fine-tuning the adapter using Tiny-ImageNet with ResNet-50 and a 20% corruption ratio.\\n| Selection Ratio (%) | 20 | 30|40|60|80|\\n|-|-|-|-|-|-|\\n|Ours|26.05|32.13|37.66|44.05|47.30|\\n|Ours*|26.02|32.16|37.52|43.99|47.52|\\n\\n\\n **2. High-Noise Scenarios:**\\n\\nTo further evaluate the robustness, we increased the noise ratio to as high as 70% and fine-tuned the adapter using imperfect data. Meanwhile, we also tested performance without using the adapter (i.e., leveraging CLIP's zero-shot capability). As shown in Table C-3, the results demonstrate that even under high-noise conditions, our method maintains low introduced noise in the selected datasets and achieves robust accuracy, validating the adapter's effectiveness.\\n\\n**Table C-3**: Performance comparison under high-noise conditions with CIFAR-100 using different settings.\\n| | Noise Ratio (%) | 20 | | 50 | | 70 | |\\n|-|-|-|-|-|-|-|-|\\n|| Selection Ratio (%) |20 | 30 | 20 | 30 | 20 | 30 |\\n| Random | Noise Proportion (%)| 20.80 | 19.83 | 20.32 | 30.10 | 20.83 | 29.93 |\\n| | Acc. (%) | 34.47 | 43.26 | 18.70 | 22.79 | 11.56 | 13.38 |\\n| Ours w/o adapter | Noise Proportion (%)| 0.33 | 0.52 | 1.37 | 0.74 | 1.70 | 6.42 |\\n| | Acc. (%) | 45.37 | 55.82 | 46.08 | 58.68 | 46.54 | 53.05 |\\n| Ours* | Noise Proportion (%)| **0.25** | **0.32** | **0.43** | **0.68** | **0.80** | **4.30** |\\n| | Acc. 
(%) | **46.05** | **58.34** | **52.56** | **60.72** | **51.50** | **56.80** |\\n\\n**Analysis:**\\n\\nCLIP's strong alignment capabilities, derived from extensive pretraining, make it inherently robust to noise. The adapter, designed for domain-specific transfer, is lightweight, with significantly fewer parameters (a simple linear layer constituting only 0.04% of CLIP ViT-B/32\\u2019s parameters) and minimal training iterations. This ensures the adapter complements rather than overshadows CLIP\\u2019s alignment capabilities. Our analysis further reveals that, across different noisy conditions, the alignment discrepancy between adapters trained on clean and noisy data is negligible (**<= 0.02%**), validating its robustness in noisy conditions. This leads to minimal impact on the SAS in Eq. 1 and the subsequent optimization module, ensuring that our method remains robust and effective, even when fine-tuned on imperfect data. We appreciate your attention to this critical aspect.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Dear Reviewer vC5o,\\n\\nConsidering the limited time available, and in order to save the reviewer's time, we summarized our responses here.\\n\\nThank you for your constructive feedback and recognition of our work\\u2019s originality, quality, and significance. In response, we show additional experimental results and analysis:\\n\\n1. We provide an analysis of both the parameter and computational complexity of the selection optimization module, along with its computational efficiency on large-scale datasets (Table D-1). The results demonstrate that the optimization process completes efficiently within just a few seconds.\\n2. As suggested, we have included further discussion on high modality imbalance scenarios and outlined feasible directions for extending our method to such conditions. 
This has been added to the main paper on page 10 in the revised version.\\n\\nAs the discussion period is about to close very soon, could we know if our responses addressed your concerns? Please feel free to let us know if there are any other concerns we can clarify. \\n\\nThanks again for taking the time to review our work and provide insightful comments.\"}", "{\"title\": \"Response to Reviewer vU6u\", \"comment\": \"Dear Reviewer vU6u:\\n\\nWe sincerely thank you for the careful review and insightful questions/comments. For the comments and questions, we provide the responses here:\\n\\n- **Q1: The proposed method relies on a pretrained CLIP and hence any biases in the CLIP model will propagate to the selected dataset.**\\n- **A1:** Thank you for pointing out the potential issue of bias propagation from the pretrained CLIP model to the selected dataset. In our method, apart from leveraging CLIP's alignment capabilities, the multi-objective selection optimization also incorporates a diversity scoring mechanism (SDS) to ensure a more representative and diverse subset of samples. By considering both semantic alignment and diversity during sample selection, the bias caused by CLIP could be further alleviated. However, we note that other state-of-the-art works based on pretrained CLIP share the raised limitation. Since our work primarily focuses on data selection, fully mitigating potential bias in pretrained multimodal models is, in our opinion, beyond its scope but of interest for our future work. Some feasible solutions to reduce bias in the data selection process may include fine-tuning CLIP or adapters with bias-aware loss functions, adding bias-related evaluation metrics, and so on.\\n\\n We have added this discussion to the main paper on page 10.\\n- **Q2: The proposed method optimizes alignment and diversity but does it have any indirect effect on bias in the dataset?**\\n- **A2:** Thanks for the question. 
We acknowledge that investigating and addressing potential biases in training datasets is inherently challenging due to the complex nature of defining and quantifying such bias. In practice, training datasets are typically evaluated based on the generalization performance of models trained on them. \\n Thus, in our work, we focus on validating the performance of the selected datasets across diverse architectures and scenarios. Specifically: \\n 1. General performance validation: Performance across multiple deep architectures and datasets, as demonstrated in Fig. 4 and Tab. 1.\\n 2. Robustness in Noisy Scenarios: Validation in noisy environments, shown in Tab. 2 and 3, and Fig. 5. \\n 3. Generalization on Unseen Architectures and Benchmarks: Demonstrated in Tab. 4 and 5.\\n\\n These evaluations collectively highlight the robustness and effectiveness of our method in improving performance across diverse conditions while indirectly addressing concerns about potential dataset bias. Thank you for bringing up this important point.\\n\\n- **Q3: Is it possible to control bias in the dataset or in the subsequent models that are trained on the selected dataset?**\\n- **A3:** We appreciate your insightful comment. As demonstrated in Tables 2/3 and Figure 5, our method effectively controls noise-related bias in the selected datasets, achieving a significantly low noise ratio and reducing the risk of bias in subsequent models.\\n While our approach could suppress noise-related bias, controlling other forms of bias in datasets or downstream models depends on additional factors, such as inherent biases in the original dataset and the specifics of the downstream task. To control bias in the selected datasets, potential strategies such as bias-aware sample weighting and fine-tuning may be feasible; this remains an important direction for future research. 
Thank you again for highlighting this critical aspect.\\n\\n- **Q4: Does the STE cause convergence issues?**\\n- **A4:** Thanks for the question. We would like to clarify that STE will **NOT** cause convergence issues in our framework. To further address this concern, we provide the actual time costs (h) required for convergence using STE during the selection optimization process, as described in Section 3.3, on large-scale Tiny-ImageNet and ImageNet-1k across selection ratios on a single V100 GPU. As shown in the table below, the convergence completes efficiently in just a few seconds.\\n\\n**Table B-1**: Convergence costs (h) of the selection optimization module.\\n|Selection Ratio (%)|90|80|70|60|40|30|20|\\n|-|-|-|-|-|-|-|-|\\n|Tiny-ImageNet|0.003|0.004|0.004|0.004|0.004|0.004|0.004|\\n|ImageNet-1k|0.004|0.004|0.004|0.004|0.004|0.004|0.004|0.005|\\n\\n- **Q5: The variable d (sample wise parameter) can be easily confused with d (feature dimension). Consider changing one of them. Also, I don\\u2019t see d in Fig. 1. Is \\u201cw: N x 1\\u201d the sample wise parameter d? Or am I missing something?**\\n- **A5:** Thank you for pointing this out. \\n - We have renamed the variable representing the feature dimension from $d$ to $f_d$ throughout the manuscript to avoid ambiguity.\\n - Additionally, after reviewing, we confirm that $w$ in Fig. 2 denotes the sample-wise parameter. We have fixed this in Fig. 2 in the revised version.\\n- **Q6: The caption of Tab.3 needs to be corrected. You can also merge Tab.3 with Tab.2.**\\n- **A6:** Thank you for the suggestion. We have corrected the caption of Tab.3 and merged it with Tab.2 in the revised version.\"}", "{\"summary\": \"This paper proposes a novel CLIP-based data selection method leveraging multimodal information with an SGD optimization module. 
The experiments on several benchmarks show evident improvements over existing approaches.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Connecting the text and image information for the dataset selection is novel. The proposed CLIP-based method implements the idea well.\", \"The SGD-based selection optimization with multiple objectives is interesting. It is different from the existing sampling strategies with combinatorial optimization.\", \"Sufficient experimental results show the effectiveness, including different datasets, different settings, and different model architectures.\"], \"weaknesses\": \"- The work relies on two adapters to project the CLIP features to the dataset-specific embedding space. The adapter training is based on perfect data. In fact, under the setting of noisy or corrupted data, this ideal data is inaccessible. Therefore, the experiments would be problematic. In Lines 514-515, the authors state \\\"is essential for effectively transferring the model\\u2019s generalization ability to target datasets\\\". It seems the adapter has a large influence on the performance. Will using the *imperfect* data to fine-tune the adapter degrade the method's effectiveness? This needs to be discussed.\\n\\n- The class label in this work plays the role of a prototype, while previous work also uses prototypes. They [a] often use the average image features as the prototype and calculate the Euclidean distance between the embedded image and the corresponding prototype, similar to SAS (Eq. 1). Besides, they also use this distance to filter noisy labels [b]. Intuitively, we can use the average image feature to replace the text feature in Eq. 1 of the proposed method. Therefore, it is essential to discuss these results to convey to readers that introducing text is significant. \\n\\n- MoSo is NeurIPS'23 rather than NeurIPS'24. Therefore, methods from 2024 have not been compared, e.g., [c]. 
Besides, some typical methods good at low sampling ratios are missing, e.g., [d].\\n\\n- Typo: the symbol for the learnable parameter in Figure 2 should be $\\\\mathbf{d}$ rather than $\\\\mathbf{w}$.\\n\\n[a] Moderate coreset: A universal method of data selection for real-world data-efficient deep learning, ICLR, 2023\\n\\n[b] NGC: A unified framework for learning with open-world noisy data, ICCV, 2021\\n\\n[c] Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning, CVPR, 2024\\n\\n[d] Submodular Combinatorial Information Measures with Applications in Machine Learning, ALT, 2021\", \"questions\": [\"The text-image alignment is like using CLIP to measure the difficulty in learning the data. Noisy data or corrupted data are difficult to learn. However, will the selection also filter some important yet difficult data?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer QkGn,\\n\\nWe would like to express our sincere gratitude to reviewer QkGn for acknowledging our work and providing insighful comments. Thanks again for the time and effort in reviewing our work.\"}", "{\"title\": \"Response to Reviewer QkGn\", \"comment\": \"Dear Reviewer QkGn,\\n\\nWe sincerely thank you for the careful review and insightful comments/questions. We appreciate your recognition of our work\\u2019s strengths and provide responses to address the comments raised.\\n\\n- **Q1: If the noise ratio is high (such as 50%), wouldn't it be a better option not to use an adapter?**\\n- **A1:** Thanks for raising your insightful concerns. 
To further demonstrate the robustness of the adapters used in our method, we perform a detailed evaluation under varying noise ratios (20%, 50%, and 70%) across several scenarios: \\n 1) A random baseline;\\n 2) Selection without using the adapter;\\n 3) Selection using the adapter trained on the corresponding noisy datasets. \\n\\nWe present the results in the table below. It shows that, under our framework, the proportion of noisy samples in the selected datasets remains considerably low across various noise ratios, regardless of whether the adapter is used or not. Although the difference in noise proportion between the settings with and without using the adapter is marginal, our method achieves higher accuracy using the adapter. This highlights that even under high-noise conditions, our approach can still ensure strategic selection while maintaining denoising effectiveness, leading to better performance.\\n\\n**Table A-1**: Comparison of noise proportion and accuracy (%) with CIFAR-100 in high-noise conditions with and without using the adapter. Noise proportion means the introduced noise ratio in the selected datasets.\\n| | Noise Ratio (%) | 20| | 50 | | 70| |\\n|-|-|-|-|-|-|-|-|\\n|| Selection Ratio (%) |20 | 30 | 20 | 30 | 20 | 30 |\\n| Random | Noise Proportion (%)| 20.80 | 19.83 | 20.32 | 30.10 | 20.83| 29.93|\\n| | Acc. (%) | 34.47 | 43.26 | 18.70 | 22.79 | 11.56 | 13.38 |\\n| Ours w/o adapter | Noise Proportion (%)| 0.33 | 0.52 | 1.37 | 0.74 | 1.70 | 6.42 |\\n| | Acc. (%) | 45.37 | 55.82 | 46.08 | 58.68 | 46.54 | 53.05 |\\n| Ours* | Noise Proportion (%)| **0.24** | **0.32** | **0.43** | **0.68** | **0.80** | **4.30** |\\n| | Acc. (%) | **46.05** | **58.34** | **52.56** | **60.72** | **51.50** | **56.80** |\\n\\n**Analysis:**\\n\\nCLIP's strong alignment capabilities, derived from extensive pretraining, make it inherently robust to noise. 
The adapter, designed for domain-specific transfer, is lightweight, with significantly fewer parameters (a simple linear layer constituting only 0.04% of CLIP ViT-B/32\\u2019s parameters) and minimal training iterations. This ensures the adapter complements rather than overshadows CLIP\\u2019s alignment capabilities. Our analysis further reveals that, across different noisy conditions (even high-noise conditions), the alignment discrepancy between adapters trained on clean and noisy data is negligible (<= 0.02%), validating its robustness in noisy conditions. This leads to minimal impact on the SAS in Eq. 1 and the subsequent optimization module, ensuring our method remains robust and effective, even when fine-tuned on imperfect data. We appreciate your attention to this critical aspect.\\n\\n- **Q2: Does the Actual Selection Costs in Appendix G include the training time for the Adapter? Is the comparison fair?**\\n- **A2**: The selection costs reported in Appendix G **DO** include the training time for the adapter. Due to the lightweight architecture of the adapter (a linear layer) and the fast fine-tuning process, this training cost is minimal. We have highlighted it in the revised version in Appendix G. As all training costs are covered, the comparison is fair. \\n- **Q3: From the ablation experiment, it can be seen that selection loss has a significant impact on the final result. So, what is the sensitivity of this parameter?**\\n- **A3:** Thanks for the question. To further investigate the stability of the parameter in the selection loss, namely \\u03b2 in Eq. (6), we present additional experimental results on CIFAR-100 using ResNet-50 across various \\u03b2 values. 
In the table below, the results show that the accuracy is not sensitive to different \\u03b2 values, validating the robustness of the parameter.\\n\\n**Table A-2**: Stability analysis of the parameter for the selection loss with CIFAR-100 using ResNet-50.\\n| \\u03b2 | 1.5 | 2 | 3 | 5 | 7 |\\n|-|-|-|-|-|-|\\n| Accuracy (%) | 78.90\\u00b10.05 | 78.98\\u00b10.09 | 78.89\\u00b10.03 | 78.87\\u00b10.02 | 78.90\\u00b10.06 |\"}", "{\"summary\": \"The paper proposes a CLIP-powered framework for data selection that addresses the limitations of traditional single-modality data selection methods by incorporating multimodal information. This framework utilizes a pretrained vision-language model (CLIP) to integrate image and text modalities, thereby enhancing data selection accuracy and robustness. The framework is built on three modules: Dataset Adaptation, Sample Scoring, and Selection Optimization. It leverages both semantic alignment and diversity scores to refine sample selection, removing redundant and noisy data to improve training efficiency and performance on benchmark datasets. Experiments demonstrate the method's superior performance in generalization and robustness compared to existing state-of-the-art approaches, especially in noisy environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Originality: The use of multimodal data via CLIP for data selection is innovative and distinguishes this work from single-modality approaches.\\n2. Quality: The methodology is backed by rigorous experiments, with a well-defined scoring system for sample selection (SAS and SDS).\\n3. Clarity: Visualizations effectively depict the advantages of multimodal selection over traditional methods.\\n4. Significance: This framework has broad applications, as it improves dataset quality and model robustness, potentially benefiting various domains using machine learning.\", \"weaknesses\": \"1. 
Complexity: While effective, the Selection Optimization module\\u2019s complexity might hinder its application to extremely large datasets without computational resources.\\n2. Generality: The framework is primarily tested on vision-based datasets; extending it to text or mixed-modality datasets may reveal further limitations. Future work could focus on enhancing the framework\\u2019s versatility across other modalities.\", \"questions\": \"1. Could the authors provide more detail on the computational efficiency of the Selection Optimization module for very large-scale datasets?\\n2. Has the framework been tested on datasets with high modality imbalance (e.g., more text than images), and how does it handle such scenarios?\\n3. For real-world noisy datasets, could the authors provide more information on how they define and quantify \\\"noisy\\\" samples?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer vU6u,\\n\\nWe would like to express our sincere gratitude to reviewer vU6u for acknowledging our work and providing constructive suggestions. We will incorporate the clarifications and additional material in the final paper. Thanks again for the time and effort in reviewing our work.\"}", "{\"title\": \"Final score\", \"comment\": \"After reading the response, I find that the authors have addressed my concerns, and I have raised my rating.\"}", "{\"title\": \"Final recommendation\", \"comment\": \"Thank you for the detailed response to my comments. My concerns are well addressed. Please do incorporate the clarifications and additional material in the final paper.\\n\\nI am keeping my score the same.\"}", "{\"comment\": \"I appreciate the authors' efforts to respond to my questions and improve the manuscript. 
The responses addressed most of my concerns; therefore, I will raise the rating.\", \"here_is_a_minor_problem_that_needs_to_be_fixed_in_the_revision\": \"\\\"Ours* in Table C-3 refers to fine-tuning the adapter with imperfect data\\\" is somewhat in conflict with Table C-4 and Table A-1, where `ours` perhaps means the updated results of training the adapter with imperfect data. I would like to suggest the authors unify the symbol for fine-tuning with/without imperfect data in the manuscript.\"}", "{\"title\": \"Response to Reviewer HyV4 (3/3)\", \"comment\": \"- **Q3: Methods from 2024 have not been compared, e.g., [c]. Besides, some typical methods good at low sampling ratios are missing, e.g., [d].**\\n- **A3:** Thanks for suggesting additional references and comparisons. \\n\\n**1.** As suggested, we compared our method with the suggested work [c] on benchmark, noisy, and corrupted datasets (Table C-6/8/9), as well as the training efficiency (Table C-7). For CIFAR-10/100, we utilized the reported results from [c] and followed their implementation details to train the same models using our selected datasets. \\n \\n The results show that while our method is slightly lower than TDDS at low selection ratios, ours can achieve higher accuracy at high selection ratios and on noisy/corrupted datasets. Notably, our method demonstrates significant computational efficiency. As presented in Table C-7, it achieves a **10x** reduction in computational overhead compared to TDDS when calculating the average cost of obtaining one selected dataset or $k$ selected datasets. \\n Moreover, our method demonstrates significant advantages in other critical aspects. First, as shown in Tables C-8 and C-9, under noisy scenarios using CIFAR-100 and Tiny-ImageNet, our method exhibits superior noise robustness and accuracy compared to TDDS. 
Similarly, when evaluated on corrupted Tiny-ImageNet (20% corruption ratio) across various selection ratios, our method achieves superior robustness to data corruption.\\n\\n**Table C-6**: Comparison with [c] on CIFAR-10/100 with ResNet-50. The accuracy of TDDS is obtained from its reported results.\\n||Selection Ratio (%)|30|50|70|\\n|-|-|-|-|-|\\n|CIFAR-10|TDDS|**93.92**|95.66|95.50|\\n||Ours|91.95|**95.74**|**95.86**|\\n|CIFAR-100|TDDS|**66.56**|76.24|79.53|\\n||Ours|63.79|**77.15**|**79.93**|\\n\\n**Table C-7**: Training efficiency comparison between TDDS and Ours. Preparation costs are the costs incurred before the selection begins. k is the number of selected datasets. \\n||TDDS|Ours|\\n|-|-|-|\\n|Preparation Costs (h)|4.15|**0.4**|\\n|Selection Costs (h) | ~0|<0.005|\\n|Total (k=1) (h)|4.15|**0.40**|\\n|Total (k=5) (h)|0.83|**0.085**|\\n\\n**Table C-8**: Comparison with [c] under noisy conditions.\\n|||CIFAR-100||Tiny-ImageNet||\\n|-|-|-|-|-|-|\\n||Selection Ratio (%)|20|30|20|30|\\n|TDDS|Noise Proportion (%)|18.54|28.93|17.75|29.59|\\n||Acc. (%)|35.15|45.66|20.67|25.52|\\n|Ours|Noise Proportion (%)|**0.24**|**0.32**|**0.16**|**0.24**|\\n||Acc. (%)|**45.63**|**58.65**|**25.98**|**32.21**|\\n\\n**Table C-9**: Comparison with [c] on corrupted Tiny-ImageNet with a 20% corruption ratio.\\n|Selection Ratio (%)|20|30|40|60|80|\\n|-|-|-|-|-|-|\\n|TDDS|23.67|29.24|35.13|41.04|44.22|\\n|Ours|**26.05**|**32.13**|**37.66**|**44.05**|**47.30**|\\n\\n **2.** Thank you for suggesting a comparison with [d]. We acknowledge the theoretical significance of this work, particularly its foundational contributions in connecting submodular functions and information-theoretic measures such as entropy and mutual information. However, as you noted, this work primarily emphasizes theoretical formulations and does not include empirical experiments, implementation details, or practical evaluations on data selection. 
As an alternative, in our work, we have compared several data selection methods inspired by their theoretical results, such as Glister and CG-Score, in Tab. 1-3 and Fig. 4-6.\\n\\n**3.** We have included the suggested works [c,d] in Section 2 of the revised manuscript. Specifically, \\n \\n >- Sec 2, paragraph 4, line 146: add reference [c] \\\"such as temporal dual-depth scoring Zhang et al. (2024)\\\"\\n \\n >- Sec 2, paragraph 4, line 150: add reference [d] \\\"and submodularity Iyer et al. (2021);\\\" \\n\\nWe hope these additions and clarifications address your concerns. Thank you again for your valuable suggestions.\\n\\n- **Q4: Typo: the symbol for the learnable parameter in Figure 2 should be d rather than w.**\\n- **A4:** We appreciate your careful observation. We have corrected the symbol for the learnable parameter in Figure 2 in the revised version, replacing $w$ with $d$.\\n\\n- **Q5: The text-image alignment is like using CLIP to measure the difficulty in learning the data. Noisy data or corrupted data are difficult to learn. However, will the selection also filter some important yet difficult data?**\\n- **A5:** Thanks for your insightful comments regarding the potential trade-off between filtering noisy data and retaining important but challenging data. We recognize that some complex data may not be selected, as distinguishing between noisy and genuinely difficult data is inherently challenging. However, we would like to clarify that our framework employs a multi-objective optimization strategy that balances semantic alignment (to reduce noise) and diversity to retain a wide variety of representative samples. 
This dual-focus approach minimizes the risk of discarding critical data while effectively reducing the impact of noisy data.\"}", "{\"title\": \"Response to Reviewer vC5o (2/2)\", \"comment\": \"- **Q4: Has the framework been tested on datasets with high modality imbalance (e.g., more text than images), and how does it handle such scenarios?**\\n- **A4:** Insightful points. We would like to clarify that our work primarily focuses on image datasets, utilizing textual modalities to guide sample selection, where the two modalities are set to be balanced. Our work is a pioneering effort in leveraging cross-modal information for image dataset selection. Compared with existing methods, our framework has the potential to be extended to the selection of multimodal data due to its multimodal nature. However, further optimization for such scenarios might be necessary, such as a dynamic modality weight allocation mechanism and augmentation strategies, especially for modality-imbalanced datasets. We believe this will be a promising direction and will include relevant discussions in the paper.\\n\\n- **Q5: For real-world noisy datasets, could the authors provide more information on how they define and quantify \\\"noisy\\\" samples?**\\n- **A5:** Good question. We follow previous works [1,4,5] to introduce symmetric noise into labels and corruption into images as a way to simulate real-world noisy datasets. The noise is measured by the alignment between images and their associated labels. Results show that using multimodal semantic alignment can greatly improve robustness, as indicated in Table 2 and Figure 5 in the manuscript. However, we acknowledge that rigorously defining or quantifying noise in the real world is inherently challenging, as it often depends on the specific task, dataset, and noise characteristics. 
Although using multimodal semantics can alleviate the impact of noise, explicitly incorporating the characteristics of noise into the framework may lead to better results; we leave this to future exploration.\\n\\n[1] Moderate coreset: A universal method of data selection for real-world data-efficient deep learning. ICLR, 2023.\\n\\n[2] Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning. CVPR, 2024.\\n\\n[3] Data pruning via moving-one-sample-out. NeurIPS, 2023.\\n\\n[4] Confident learning: Estimating uncertainty in dataset labels. Journal of Artificial Intelligence Research, 70:1373-1411, 2021.\\n\\n[5] Combating noisy labels by agreement: A joint training method with co-regularization. CVPR, 2020.\"}", "{\"metareview\": \"This work proposes a new framework for data selection to address the computational overhead and the impact of noisy data when training deep learning models. The key idea is to leverage multimodal information to better select important data and this is achieved by utilising the CLIP model. A set of scores is developed and a multi-objective optimisation is implemented to realise data selection. Experimental study demonstrates the efficacy of the proposed method. Reviewers are overall positive on this work, highlighting its robust performance, clarity, simplicity and efficiency, and its originality and significance. Meanwhile, the reviewers raise issues related to the clarification of some details, the impact of bias, convergence, the presence of imperfect data, comparison with the latest methods, and the complexity and generality of the method. The authors provide a high-quality rebuttal and effectively address most of the issues. All the final ratings are on the positive side. By checking the submission, the reviews, and the rebuttals, AC agrees with the reviewers on their observations. 
Therefore, this work is recommended for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raise issues related to the clarification of some details, the impact of bias, convergence, the presence of imperfect data, comparison with the latest methods, and the complexity and generality of the method. The authors provide a high-quality rebuttal and effectively address most of the issues. All the final ratings are on the positive side. By checking the submission, the reviews, and the rebuttals, AC agrees with the reviewers on their observations.\"}", "{\"title\": \"Experimental Setup and Explanations\", \"comment\": [\"Dear Authors,\", \"Thank you for sharing these additional experiments.\", \"Could you please elaborate further on the details of the experiments? Specifically:\", \"Is the data used for training the adapter the same as that used for data selection and task model training?\", \"What are the key differences between Table C-1 and Table C-2?\", \"Why does using imperfect data for training degrade results in some cases while improving performance in others? What kind of data are selected or filtered with the imperfect fine-tuning?\", \"In Table C-3, does `ours*` refer to fine-tuning the adapter with imperfect data or perfect data?\", \"Additionally, it would be helpful if the final paper included examples of the corrupted data and showed visual results of different selection strategies.\", \"Thank you for your time and clarification.\"]}", "{\"title\": \"General Response\", \"comment\": \"# General Description:\\nDear Area Chairs and Reviewers,\\n\\nWe sincerely thank you for the time and effort in reviewing our work. 
We greatly appreciate the constructive feedback and the recognition from all reviewers\\u2014QkGn (R1), vU6u (R2), HyV4 (R3), and vC5o (R4)\\u2014highlighting key strengths of our work, including (1) novelty (R1, R2, R3, R4), (2) overall presentation clarity (R1, R2, R4), (3) sufficient experimental results (R2, R3), and (4) significance (R4). Besides, the concerns are mainly concentrated on (1) more experiments and analysis (R1, R3, R4), and (2) specific refinements to improve clarity (R2, R3).\\n\\n# Additional Experiments and Analyses:\\nIn the responses, we show additional experimental results and analysis, including:\\n1. Comparison between using and not using the adapter in high-noise conditions (R1: Q1)(Table A-1)\\n2. Evaluation and analysis of the dataset adaptation and selection optimization costs (R2: Q4, R4: Q3)(Table B-1, D-1)\\n3. Stability analysis of the parameter of selection loss (R1: Q3)(Table A-2)\\n4. The effectiveness of the adapter in noisy conditions (R3: Q1)(Table C-1/2/3)\\n5. Validation of the significance of introducing text modality (R3: Q2)(Table C-4/5)\\n6. Comparison with the suggested work (R3: Q3)(Table C-6/7/8/9)\\n\\nThank you again for your thoughtful feedback and for helping us refine our work further.\\n\\nSincerely,\\n\\nAuthors of Submission 3088\"}", "{\"title\": \"Response to Reviewer HyV4 (2/3)\", \"comment\": \"- **Q2: Intuitively, we can use the average image feature to replace the text feature in Eq. 1 of the proposed method. Therefore, it is essential to discuss these results to convey to readers that introducing text is significant.**\\n- **A2:** Thank you for the insightful comment. To address your suggestion, we conducted additional experiments to evaluate the performance of using average image features as prototypes, replacing the text features in Eq.1. 
The results in Tables C-4 and C-5 show that using average image features results in higher noise ratios in the selected datasets and notably lower accuracy compared to our method. \\n These findings validate that text features provide complementary semantic information, which enhances both noise robustness and accuracy. We have included these results in our ablation study in Section 4.5.\\n\\n**Table C-4**: Performance comparison of using text features (Ours) vs. average image features under varying noise and selection ratios with CIFAR-100. Noise proportion means noise ratio in the selected datasets.\\n| Noise Ratio (%)| | 20 | | 50 | | 70 | |\\n|-|-----|----------|---------|----------|---------|----------|---------|\\n||Selection Ratio (%)|20 | 30 | 20 | 30 | 20 | 30 |\\n|Avg. image feat.| Noise Proportion (%)| 16.39 | 25.35 | 20.00| 29.95| 20.22| 30.16|\\n||Acc. (%)|28.42|38.35|16.56|23.19|11.18|14.61|\\n|Ours*|Noise Proportion (%) |**0.24** | **0.32** | **0.43**| **0.68** | **0.80** | **4.30** |\\n||Acc. (%)|**46.05**|**58.34**|**52.56**|**60.72**|**51.50**|**56.80**|\\n\\n**Table C-5**: Performance comparison of using text features (Ours) vs. average image features with a 20% corruption ratio on Tiny-ImageNet.\\n| Selection Ratio (%) | 20 | 30 | 40 | 60 | 80 |\\n|-|-|-|-|-|-|\\n|Avg. image feat.|12.74 |21.03|26.89|37.52|36.89|\\n|Ours*|**26.02**|**32.16**|**37.52**|**43.99**|**47.52**|\"}", "{\"summary\": \"This paper argues that current image data selection methods are limited because they are unimodal. It proposes a multimodal method that uses the category texts from pretrained CLIP to complement images for more robust and generalized data selection. 
The proposed framework consists of three modules (1) dataset adaptation that integrates image and text adapters to transfer prior knowledge to the target data; (2) sample scoring that calculates the semantic alignment and diversity scores based on the multimodal features, measuring the image-text alignment as well as the local pattern variability; (3) Selection optimization that uses the two scores to select semantically representative and diverse samples, and introduces selection optimization to efficiently identify the ideal data subsets given an expected selection ratio through a multi-objective optimization strategy.\", \"post_discussion_comments\": \"The authors have addressed my concerns. With the additional clarifications and material, the paper is in much better shape. I am keeping my score at 8, which correctly reflects the quality of this work.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The idea of exploiting multimodal features from the CLIP model is interesting and plausible.\\n\\nThe proposed method is simple, which is a strength in my opinion. \\n\\nThe method is also efficient and is able to control the alignment, diversity and selection ratio in a multiobjective optimization efficiently. \\n\\nThe paper is well written and easy to follow. \\n\\nThe results are good.\", \"weaknesses\": \"The proposed method relies on a pretrained CLIP and hence any biases in the CLIP model will propagate to the selected dataset.\\n\\nThe proposed method optimizes alignment and diversity but does it have any indirect effect on bias in the dataset? \\n\\nIs it possible to control bias in the dataset or in the subsequent models that are trained on the selected dataset? \\n\\nDoes the STE cause convergence issues? \\n\\nThe variable d (sample wise parameter) can be easily confused with d (feature dimension). Consider changing one of them. Also, I don\\u2019t see d in Fig. 1. 
Is \\u201cw: N x 1\\u201d the sample wise parameter d? Or am I missing something? \\n\\nThe caption of Tab.3 needs to be corrected. You can also merge Tab.3 with Tab.2.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Looking forward to the reply\", \"comment\": \"Dear reviewer vC5o:\\n\\nThanks so much again for the time and effort in our work. According to the comments and concerns, we conduct the corresponding experiments and further discuss the related points. Besides, we have revised our paper and added a discussion to the main paper on page 10 in the revised version to further discuss the related points.\\n\\nAs the discussion period is coming to a close, may I know if our rebuttal addresses the concerns? If there are further concerns or questions, please feel free to let us know. Thanks again for taking the time to review our work and provide insightful comments.\"}", "{\"title\": \"Looking forward to the reply\", \"comment\": \"Dear reviewer HyV4:\\n\\nThanks so much again for the time and effort in our work. According to the comments and concerns, we conduct the corresponding experiments and further discuss the related points. Additionally, according to your suggestions, we have revised the results of 4.2 and added additional ablation study results to 4.5 and Appendix J.\\n\\nAs the discussion period is about to close, may I know if our rebuttal addresses the concerns? If there are further concerns or questions, we are willing to address them. Thanks again for taking the time to review our work and provide insightful comments.\"}", "{\"comment\": \"Dear reviewer HyV4,\\n\\nWe would like to express our sincere gratitude to reviewer HyV4 for acknowledging our work and providing constructive suggestions. We will unify symbols in Tables C-4, A-1, and in the revised manuscript. 
Thanks again for the time and effort in reviewing our work.\"}", "{\"title\": \"Looking forward to the reply\", \"comment\": \"Dear reviewer vU6u:\\n\\nThanks so much again for the time and effort in our work. According to the comments and concerns, we conduct the corresponding experiments and further discuss the related points. Additionally, we have revised our paper for presentation clarity and added a discussion to the main paper on page 10 in the revised version to further discuss the raised points.\\n\\nAs the discussion period is coming to a close, may I know if our rebuttal addresses the concerns? If there are further concerns or questions, please feel free to let us know. Thanks again for taking the time to review our work and provide insightful comments.\"}", "{\"comment\": [\"Dear Reviewer HyV4:\", \"Thank you for providing insightful questions and suggestions on our work.\", \"For the comments and questions, we provide more details of the experiments.\", \"**Q6: Is the data used for training the adapter the same as that used for data selection and task model training?**\", \"**A6:** Thanks for the question. Yes. The data used for training the adapter is the same as that used for selection and model training.\", \"**Q7: What are the key differences between Table C-1 and Table C-2?**\", \"**A7:** Thanks for the insightful comments. There are two types of noisy data in our main paper, noisy labels and corrupted images (which can be seen in Section 4.3 on pages 8 and 9).\", \"Table C-1 uses data with noisy labels, where some sample labels are incorrectly flipped.\", \"Table C-2 uses data with corrupted images, where images are corrupted by Gaussian noise, random occlusion, resolution, fog, and motion blur.\", \"**Q8: Why does using imperfect data for training degrade results in some cases while improving performance in others? What kind of data are selected or filtered with the imperfect fine-tuning?**\", \"**A8:** Thanks for the insightful question. 
First, we want to emphasize that our method consistently achieves excellent performance, regardless of whether perfect or imperfect data is used for fine-tuning, and significantly outperforms other compared methods (as shown in Table 2 and Fig. 5 in Section 4.3).\", \"The variation in performance when using imperfect data for fine-tuning can be attributed to stochasticity in the training process and selection optimization. Training on noisy data may introduce slight instability, as the noise affects the optimization dynamics. Despite these factors:\", \"1. The observed performance degradation remains minimal, averaging just 0.13%.\", \"2. While minor variations are observed between Ours and Ours*, both consistently outperform other methods by a significant margin. These results underscore the robustness and effectiveness of our approach, even under challenging conditions.\", \"As we have shown in Table C-1 and Table C-3, when using the imperfectly fine-tuned adapter for selection optimization, the introduced noise proportion remains significantly low. This indicates that the selection process **prioritizes clean data while effectively filtering out most of the noisy samples**, ensuring the robustness and reliability of the selected dataset.\", \"Thank you again for your valuable comment and for providing us with the opportunity to clarify this aspect.\", \"**Q9: In Table C-3, does ours* refer to fine-tuning the adapter with imperfect data or perfect data?**\", \"**A9:** Thanks for the question. Ours* refers to fine-tuning the adapter with imperfect data.\", \"**Q10: Additionally, it would be helpful if the final paper included examples of the corrupted data and showed visual results of different selection strategies.**\", \"**A10:** Thanks for the suggestion. We have included examples of the corrupted data in Figure 8 in Appendix G on page 17 in the revised version. 
Moreover, according to the reviewer's suggestion, we have also included visual results of the selection effectiveness in Figure 9 in Appendix H on page 17 in the revised version.\"]}" ] }
9bLdbp46Q1
Adaptive Retention & Correction: Test-Time Training for Continual Learning
[ "Haoran Chen", "Micah Goldblum", "Zuxuan Wu", "Yu-Gang Jiang" ]
Continual learning, also known as lifelong learning or incremental learning, refers to the process by which a model learns from a stream of incoming data over time. A common problem in continual learning is the classification layer’s bias towards the most recent task. Traditionally, methods have relied on incorporating data from past tasks during training to mitigate this issue. However, the recent shift in continual learning to memory-free environments has rendered these approaches infeasible. In this study, we propose a solution focused on the testing phase. We first introduce a simple Out-of-Task Detection method, OTD, designed to accurately identify samples from past tasks during testing. Leveraging OTD, we then propose: (1) an Adaptive Retention mechanism for dynamically tuning the classifier layer on past task data; (2) an Adaptive Correction mechanism for revising predictions when the model classifies data from previous tasks into classes from the current task. We name our approach Adaptive Retention & Correction (ARC). While designed for memory-free environments, ARC also proves effective in memory-based settings. Extensive experiments show that our proposed method can be plugged into virtually any existing continual learning approach without requiring any modifications to its training procedure. Specifically, when integrated with state-of-the-art approaches, ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and Imagenet-R datasets, respectively.
[ "Continual Learning; Computer Vision; Transfer Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=9bLdbp46Q1
https://openreview.net/forum?id=9bLdbp46Q1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wr4CPqGZCy", "wKL0Uz4a5S", "uVLz5mTd9i", "tEzCM9tZrl", "rz4SiPHbxB", "qiFdAjRqXy", "kf6Gc2kmTj", "iY5LQpQ0CT", "hNp2LVvK0f", "h8xxWd7pSd", "gYMTWuToUw", "cJdYk710ZU", "aRWtG1tj32", "aCZi7LLORK", "Ze8iRdAPiW", "YUB0tiK7pl", "SSsJEgBEaq", "QT0Ed0jWKt", "Lgx2wjUFSQ", "Kb6umFAcMv", "HLtQabPR68", "GwbIRcS4ab", "FrVoMqJKbF", "FdMX6wfoTX", "BQaZ39b8ob", "80ZOEccAq1", "7aJAXAvcUZ", "11NTgV5V2f", "0HEXpH84Aq" ], "note_type": [ "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732294271185, 1734627258284, 1730399539778, 1733109488268, 1732117140635, 1732473656374, 1732596265609, 1732117234981, 1732548444174, 1732457481495, 1732116348094, 1732457464410, 1732544489997, 1730720571324, 1732516073901, 1732472007273, 1732116001571, 1737523516136, 1732457496027, 1733109055074, 1739913444640, 1732347138600, 1739885383492, 1730711502345, 1732637481902, 1732115916058, 1732116430827, 1730555075569, 1732543761827 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_1cF7" ], [ "ICLR.cc/2025/Conference/Submission2645/Area_Chair_aEgt" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_hK7E" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_hK7E" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_hK7E" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_1cF7" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_EsC3" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_ALn4" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Area_Chair_aEgt" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "~Haoran_Chen4" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_ALn4" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ], [ "ICLR.cc/2025/Conference/Submission2645/Reviewer_EsC3" ], [ "ICLR.cc/2025/Conference/Submission2645/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to the authors\", \"comment\": \"Thank you for responding to my concerns and quantifying the extra time required at inference.\\n\\nJust to be clear, I did not expect you to run any additional experiments, I was just curious if you applied ARC on top of more recent methods. Thanks for pointing out MEMO, I missed it in Table 1.\\n\\nARC is well motivated, robust, and seems to lead to improvements on top of a wide range of continual merging methods. While I acknowledge the argument about test-time compute in LLMs, whether the accuracy gain is worth a potential 8-37% hit in latency will depend on the particular application. To make a stronger case for the method, I would recommend testing how it scales with the number of tasks in the training sequence.\\n\\nThank you again for your response. 
I will maintain my original score.\"}", "{\"metareview\": \"The paper introduces an Out-of-Task Detection method, OTD, designed to accurately identify samples from past tasks during testing, aiming to overcome the classification layer\\u2019s bias towards the most recent task in memory-free continual learning.\\n\\n**Strengths**\\n- The idea of adaptively adjusting and correcting model predictions using test samples is interesting and well-suited to continual learning.\\n- The proposed approach achieves good performance. \\n\\n**Weaknesses**\\n- Lacks sufficient details of hyperparameter tuning and the training process of the model. \\n- Lacks a detailed comparison with existing test-time adaptation approaches. \\n\\nOverall, leveraging test samples to address catastrophic forgetting appears to be a new and effective approach, particularly in the context of memory-free continual learning.\", \"additional_comments_on_reviewer_discussion\": \"Multiple reviewers engaged in an intensive discussion with the authors during the rebuttal process. The authors conducted additional experiments and provided detailed responses, which significantly improved the quality and clarity of the paper.\"}", "{\"summary\": \"This paper proposes to reduce the task-recency bias present in the last layer when training in a Continual Learning environment. This well-known problem remains under-explored in the case of memory-free methods and presents additional challenges. In particular, the authors propose to leverage test data in order to adapt the weights of the last layer by differentiating old and newer data using prediction confidence. The authors also propose an alternative to the softmax to rebalance the model\\u2019s logits, which are often biased toward later tasks. 
Eventually, this paper shows significant improvement by combining their approach with current state-of-the-art methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The idea of focusing on debiasing the last layer for later prompt-based memory-free methods is interesting and under-explored\", \"The intuition behind focusing on test-time adaptation by taking into account the specific behaviour of continually trained models is good and intuitive\", \"The method performances are appealing and it seems quite robust to hyper-parameters, which is crucial in Continual Learning where hyper-parameter search is unrealistic\", \"Most sections are well-written and easy to follow\"], \"weaknesses\": [\"### Major weaknesses\", \"There is a lack of detail on the overall training procedure. How many epochs are used for training? Is this correction and retention procedure applied after each task? In that case, this would correspond to storing unlabeled memory data.\", \"I don\\u2019t think the formula in definition 1 is well defined. The inequality under the max does not make much sense as $s(i-1) \\\\leq i \\\\leq s.i$ is impossible in many cases. Also, taking $T>1$ implies that the output of the softmax will be lower (flattened) for earlier tasks (low $i$ values), but I believe the objective is the opposite; the authors want to obtain higher softmax values for earlier tasks. I also do not really understand the usage of a maximum operator here. Overall, this definition is very confusing.\", \"This method is connected to Test-Time Adaptation and some discussion in the related work would be appreciated. 
While TTA methods are rightfully compared in section 4.2, such discussion could be included more thoroughly in the related work.\", \"The impact of the temperature is not presented in the paper\", \"The notation for the temperature and the number of tasks is the same, $T$, which is confusing\", \"in Figure 1, is the independent classifier trained for doing classification of one task only? Then is the problem a TIL problem? If this is the case, the performances are comparing a TIL problem to a CIL problem, which is unfair since the CIL problem is much harder than the TIL problem.\", \"l. 202 the distribution are defined as different between tasks but in 4.2, the incremental dataset are defined as IID. I believe this is a mistake, can you elaborate?\", \"### Minor weaknesses\", \"The experiments of figure 3 is quite unclear. It depends on the optimization strategy and the number of epochs for instance. Also, do you use the same fixed learning rate for joint training?\", \"I believe a confusion matrix would be more clearer than Figure 1 at least, maybe even Figure 1 and 2\", \"How much data do you use for linear probe?\", \"I believe the difference in performances in figure 3 between finetune and probe makes sense and justifies the claim that the representation knowledge is still transferable across task. However, I do not see why the joint training performances are lower in most cases. Could you elaborate?\", \"the code is not shared\", \"### typos\", \"The title is spelled wrong.\", \"If the authors can clarify my concerns, I would happily increase my score.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"Dear reviewer EsC3, as the end of the rebuttal period is approaching, we want to kindly check if our responses have addressed your concerns. 
If there are any remaining points or statements that you still find unclear, we would be happy to provide further clarification.\"}", "{\"title\": \"Response from authors\", \"comment\": \"We thank reviewer ALn4 for the thoughtful and constructive comments. It is evident that the reviewer spent considerable time carefully reading our work and provided valuable insights and suggestions. We really appreciate the detailed feedback and the recognition of the strengths in our approach. Below, we address the specific points raised by the reviewer.\\n\\n>*Need to report the training time during inference, since it might add overhead when deployed to the real world.*\\n\\nThanks for the advice! Due to time limitations, we provide the extra time required for ARC on 4 representative methods (same ones we used for hyperparameter ablation):\\n\\n||Additional time required|Performance Gain|\\n|----:|:---:|:---:| \\n|iCarL |11% | 6.6|\\n|Memo |8% |2.0 |\\n|L2P | 37% |2.5|\\n|DualPrompt |32% |1.4|\\n\\nAs shown, there is a certain amount of additional inference time required for ARC. This is expected, as ARC is specifically designed to address classification bias during inference, which naturally incurs some additional computational cost.\\n\\nFurthermore, it is worth noting that research in other areas, such as large language models (LLMs), is increasingly exploring methods that improve performance by utilizing additional inference-time computation. There is a growing consensus that additional inference time is a reasonable trade-off when it leads to significant improvements in model performance. Similarly, we believe the computational cost of our method is well-justified by the substantial performance gains it delivers.\\n\\n>*What if the incoming sample from the past task is not allowed to be used during inference? For example, it is deployed to an embedded device that needs real-time inference, will additional training add much overhead? 
Or is it possible to not update at every step? It will be interesting to provide such results.*\\n\\nThank you for raising this question. We would like to emphasize again that our method does not store any test samples in memory. Instead, it operates using on-the-fly test samples and performs a single gradient update as each sample is processed by the model. So whether or not past-task samples are allowed to be used is irrelevant to our method. \\n\\nRegarding the computational overhead, as noted in our response above, the additional time required for inference varies across methods. In the worst-case scenario with L2P, our method results in a frame-per-second (FPS) drop from approximately 60 to 43 on an NVIDIA 4090 GPU using the ViT-B/16 backbone, which still meets the criteria for real-time inference.\\n\\nReducing the frequency of updates is indeed a promising approach to further minimize the additional inference time. For instance, this could involve quantifying how \\\"biased\\\" the classifier is based solely on a batch of test samples before deciding whether an update is necessary. However, exploring this idea in detail falls outside the scope of this rebuttal. We appreciate your suggestion and consider it an interesting direction for future work.\\n\\n>*Can this approach extend to other computer vision tasks such as detection and segmentation? It would be great if the methods can also fix the problem in such scenarios by re-balancing the classification head in object detection.*\\n\\nYes, we believe that the idea of ARC can be applied to any computer vision task where classifier bias is a challenge. In fact, there are already works exploring similar directions, such as [1], which addresses bias in continual semantic segmentation. However, applying ARC to tasks like detection or segmentation would require certain modifications to the implementation details to account for the unique characteristics of these tasks. 
For example, in object detection, ARC would need not only to re-balance the classification head but also to adapt to region proposals or bounding box predictions to mitigate potential biases.\\n\\n[1] RBC: Rectifying the Biased Context in Continual Semantic Segmentation\\n\\n>*At Table 1, it seems strange that for CodaPrompt, the position of the difference for Split CIFAR-100 Inc5 is opposite. The difference sign should be 5.8 (+0.1) instead of 5.7 (-0.1)*\\n\\nThank you for pointing this out! We agree that your suggested revision would indeed improve the clarity of the table. We will revise accordingly.\"}", "{\"title\": \"Thank you for your rebuttal\", \"comment\": \"I thank the authors for their thorough response, which clarifies most of my concerns. However, there is still one point which I am not sure I understand, and it is very important for me to quantify the overall contribution of this paper. I will start with this *critical* point first.\\n\\n> Yes, this correction and retention procedure is applied after each task. However, we do not store any unlabeled memory data. Instead, our approach relies solely on on-the-fly test samples during inference.\\n\\nLet us consider the weights of the classification layer, defined as $\\\\theta_{cls}$ in the paper. Now, these weights are updated after each task, let us say that $\\\\theta_{cls}^t$ are the corresponding classifier weight values for a task $t$. Then I think we can agree that $\\\\theta_{cls}^{t+1}$ are computed using Eq 3, starting from the value of $\\\\theta_{cls}^t$ and using **all the testing data from previous tasks**. This means that you use test data belonging to previous tasks when updating for the current task, and this is done **at the end of each task during training**. This is very different from training a model once, then adapting once during testing. 
Here you adapt several times at every test iteration, so, unless my understanding is incorrect, your strategy is equivalent to having access to unlabeled memory data at the end of each task.\\nNow, I still consider your work to be interesting and I do like the approach, however, I believe that it cannot be described as \\u201cmemory free\\u201d in its current stage, unless you apply the procedure **only once at the end of training**.\\nUnfortunately, I believe this to be a critical point and I cannot raise my score. **I think the paper cannot be accepted as long as it is presented as a \\u201cmemory-free\\u201d approach without further development regarding the above issue**.\", \"regarding_others_remarks\": \"> We apologize for any confusion. In fact, ARC is not involved in the training process. It is only applied during inference, and thus, details such as the number of training epochs are not applicable. During inference, the classifier is adaptively rebalanced for incoming test samples. This process involves only one gradient update per sample, performed in an online manner. In other words, we only use each sample once for classifier tuning.\\n\\nI understand that ARC occurs during testing only, however, I do not see any details regarding the training procedure of the methods combined with ARC. I believe including such details in the appendix would help the reader have a clear understanding of your training procedure.\\n\\nOther issues I mentioned have been addressed by the authors and I sincerely thank them for their time and effort throughout this rebuttal.\"}", "{\"title\": \"Thank you for the information\", \"comment\": \"I deeply thank the authors for taking the time to explain and clarify their work throughout this rebuttal.\\n\\n> ARC specifically operates during Step 3 and does not require storing any past-task data. To illustrate, consider a scenario where we initially train a model to distinguish between cats and dogs. 
Later, we may want to add a new class, such as birds, to the model (as our approach focuses on class-incremental learning). After training the model on bird data and redeploying it, the model\\u2019s practical application is to classify among cats, dogs, and birds. Naturally, during deployment, the model will encounter samples from previous tasks (cats and dogs), allowing ARC to be utilized without storing any past-task data.\\n\\nI see your point and agree with the practicality of such a setup. I guess what is debatable is whether you store those \\u201cdata seen during deployment\\u201d. But you could certainly consider that new deployment data of any task come on a regular basis. It would be interesting for future work to consider cases where this test data comes in a separate fashion, e.g., an \\u201cincremental test time adaptation\\u201d. In any case, I agree with the authors and would like to thank them again for their explanation. Maybe such a practical example could be included in the introduction to further improve the quality of the paper.\\n\\n> Nonetheless, we greatly appreciate your suggestion and have conducted additional experiments under the proposed setup, where our method is applied only after the final training task. The results are presented in the table below:\\n\\nTo follow up on my previous comment, I really appreciate these experiments, which for me remove any doubt regarding the practicality of such a method. I advise the authors to include these interesting results in the main draft or the appendix.\\nOverall, I agree with the authors and believe this work to be valuable for the Continual Learning community; the main issue I have with the current version of the paper is the presentation. Improving the presentation as per our discussions would definitely improve the overall quality of the paper. 
In that sense, I will raise my score to 6.\"}", "{\"title\": \"Response from authors\", \"comment\": \"We thank reviewer 1cF7 for the thoughtful and constructive comments. It is evident that the reviewer spent considerable time carefully reading our work and provided valuable insights and suggestions. We really appreciate the detailed feedback and the recognition of the strengths in our approach. Below, we address the specific points raised by the reviewer.\\n\\n\\n>*Quantify the computational cost of the self-retention procedure*\\n\\nThanks for the advice! Due to time limitations, we provide the extra time required for ARC on 4 representative methods (the same ones we used for the hyperparameter ablation) in the table below:\\n\\n||Additional time required|Performance Gain|\\n|----:|:---:|:---:| \\n|iCarL |11% | 6.6|\\n|Memo |8% |2.0 |\\n|L2P | 37% |2.5|\\n|DualPrompt |32% |1.4|\\n\\nAs shown, there is a certain amount of additional inference time required for ARC. This is expected, as ARC is specifically designed to address classification bias during inference, which naturally incurs some additional computational cost.\\n\\nFurthermore, it is worth noting that research in other areas, such as large language models (LLMs), is increasingly exploring methods that improve performance by utilizing additional inference-time computation. There is a growing consensus that additional inference time is a reasonable trade-off when it leads to significant improvements in model performance. Similarly, we believe the computational cost of our method is well-justified by the substantial performance gains it delivers.\\n\\n>*Did you run the hyperparameter search for the adaptive self-retention procedure on the same dataset that was used for evaluation? Better to ensure hyperparameter tuning is conducted on a separate dataset from the evaluation dataset.*\\n\\nYes, the hyperparameter search was run on the same dataset, following previous methods. 
Furthermore, our experiments in Fig. 5 show that our method is robust to the choice of hyperparameters, and that even using the same hyperparameter values across all datasets and methods would still yield performance gains. Therefore, to ensure a fair comparison, we used the best-performing values of $\\\\gamma$ and $\\\\beta$ for each method and dataset. \\n\\n\\n>*Have you tried applying your method to other continual learning algorithms, like BEEF, EASE, MEMO, and FOSTER?*\\n\\nThank you for your question. We have in fact applied ARC to MEMO, and the results are reported in Table 1 of the paper. Additionally, ARC has been applied to a total of eight methods, covering a diverse range of approaches, including prompt-based and non-prompt-based methods as well as rehearsal-based and non-rehearsal-based methods. We believe this already demonstrates the versatility and generalizability of our method.\\n\\nDue to time constraints, it may be challenging to provide results for additional methods, such as BEEF, EASE, and FOSTER, within the scope of this submission. However, we consider this an interesting direction for future exploration.\\n\\n>*In line 88, you formulate the two observations as empirical findings. However, in Section 3.3, they are called assumptions. Do you have any empirical results that show to what extent these actually hold?*\\n\\nYes, the empirical validation of Assumptions 1 and 2 is included in Tables 4 and 5 of the paper.\"}", "{\"title\": \"Response from authors\", \"comment\": \">*However, I am confused about whether the model trained with ARC is used as the starting point for the next task. If this is the case, then this overhead should be accounted for when considering the training cost across all tasks. I think the authors need to provide a clearer explanation of this aspect of the method.*\\n\\nYes, this is indeed how ARC is used. However, we respectfully disagree that the overhead should be accounted for across all tasks. 
In real-world CL use cases:\\n\\n1. A model is initially trained on a dataset and deployed in practice.\\n2. Over time, new requirements emerge or new data becomes available, necessitating updates to the model.\\n3. The updated model is redeployed and continues to be used in practice.\\n4. This process repeats as additional data or requirements arise.\\n\\nIn this workflow, what matters most is the additional computational cost incurred during Step 3\\u2014when the model is redeployed after being updated. Evaluating overhead across previous cycles seems less relevant, as the focus should be on the efficiency of deployment after updates rather than cumulative overhead from prior tasks.\\n\\nThat said, we appreciate your perspective and will clarify this point further to ensure the process is well-understood. Thank you for bringing this up!\\n\\n>*Is the reported percentage relative to the original inference time?*\\n\\nYes, we will clarify this.\\n\\n>*Reproducibility of results*\\n\\nThank you for the suggestion! We will include it in the appendix. We also promise to open-source the code upon acceptance.\\n\\n>*I strongly recommend that the authors revise unclear statements in the paper, particularly in the methodology section, to improve the clarity and accessibility of the work.*\\n\\nThank you for pointing this out! We will carefully revise and clarify all statements identified as unclear to ensure the paper is as accessible as possible.\\n\\nWe\\u2019re also curious to know if our responses have sufficiently addressed your questions. Additionally, are there any other statements you still find unclear? We would greatly appreciate your feedback so we can address them comprehensively.\"}", "{\"title\": \"Response from authors\", \"comment\": \"Dear reviewer, we would be grateful if you could confirm whether our response has addressed your concerns. 
Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period.\"}", "{\"title\": \"Response from authors\", \"comment\": \"We thank reviewer EsC3 for the thoughtful and constructive comments. It is evident that the reviewer spent considerable time carefully reading our work and provided valuable insights and suggestions. We really appreciate the detailed feedback and the recognition of the strengths in our approach. Below, we address the specific points raised by the reviewer.\\n\\n\\n>*While the motivation to correct misclassified past task samples is sound, the proposed Task-based Softmax Score (TSS) may be problematic. For the earlier tasks, as the temperature increases, the probability distribution flattens, which could hinder the model\\u2019s ability to recognize samples from past tasks.*\\n\\nThank you for pointing this out. To clarify, the denominator of TSS does not sum over logits for all classes (as in regular softmax) but is instead restricted to the classes revealed to the model up to task $i$. For earlier tasks, this results in fewer terms in the denominator, which sharpens the probability distribution rather than flattening it. To counterbalance this sharpening effect, we employ a temperature value $T > 1$ (\\\"accommodate the imbalanced denominator for different tasks\\\" as stated in L344). \\n\\nTo further illustrate the impact of the temperature value, we provide the following ablation study, which demonstrates its effect on average accuracy across different methods. 
As shown, incorporating the temperature parameter improves performance consistently:\\n\\n||Temperature |Avg Acc on Imagenet-R Inc20|\\n|----:|:---:|:---:|\\n|iCarL |&check; |67.6 |\\n|iCarL |&cross; |67.4 |\\n|Memo |&check; |68.2 |\\n|Memo |&cross; |67.8 |\\n|L2P |&check; |75.1 |\\n|L2P |&cross; |74.6 |\\n|DualPrompt |&check; |70.5|\\n|DualPrompt |&cross; |69.9 |\\n\\n>*Although the experiments demonstrate the effectiveness of ARC, the paper lacks comparisons with other test-time adaptation (TTA) benchmarks, such as CoTTA.*\\n\\nWe would like to point out that we **have compared ARC with other TTA benchmarks**, as shown in Table 3, where TENT is included as a representative TTA method. This has also been acknowledged by Reviewer hK7E.\\n\\nRegarding CoTTA, we tried to include it in our experiments. However, our findings reveal that CoTTA is not compatible with prompt-based methods. Directly applying CoTTA to these methods leads to a decrease in performance, as shown below:\\n\\n||Avg Acc on Imagenet-R Inc20|\\n|----:|:---:|\\n|L2P |72.6 |\\n|L2P + CoTTA |72.2 |\\n|DualPrompt | 69.1 |\\n|DualPrompt + CoTTA |67.7 |\\n\\nFor non-prompt-based methods, while CoTTA does provide some improvements, the gains are less significant than those achieved by ARC. This is evident in the following comparison:\\n\\n||Avg Acc on Imagenet-R Inc20|\\n|----:|:---|\\n|iCarL + ARC |67.6 |\\n|iCarL + CoTTA |64.7 |\\n|Memo + ARC |68.2 |\\n|Memo + CoTTA |67.6 |\\n\\nTherefore, we did not report the comparison with CoTTA.\\n\\n>*Additionally, the introduction of extra data for classifier training (referred to as retention) raises concerns about increased training costs that should be addressed.*\\n\\nWe would like to clarify that ARC **does not introduce any additional training costs**. The retention mechanism is applied solely during inference, where the classifier is adaptively rebalanced for incoming test samples. 
This process involves only one gradient update per sample, performed in an online manner. As a result, the computational overhead introduced by ARC is confined to the inference stage (details in the answer below), without impacting the training process. \\n\\n>*The description of the experimental setup in Figure 1 is inadequate. For instance, is the class-incremental learning setup applied in this toy example? How are the two types of classifiers trained and tested? Which task\\u2019s accuracy is measured in the figure?*\\n\\nWe apologize for the confusion. To clarify: \\n\\n- The independent classifier in Figure 1 is trained and tested under the TIL setting, while the shared classifier is trained and tested under the CIL setting. \\n- $A_i$ in the figure refers to the average accuracy of task $i$, as defined in Equation 4 of the paper. \\n\\nIn fact, the purpose of Figure 1 is to demonstrate why the CIL problem is significantly harder than the TIL problem, i.e., primarily due to classifier bias. This highlights how the way classifiers are constructed can have a much greater impact on overall continual learning performance compared to the backbone architecture, which is a major motivation of our approach. We included this figure to bring attention to a key point that might not be immediately obvious to readers less familiar with continual learning.\"}", "{\"title\": \"Response from authors\", \"comment\": \"Dear reviewer, we would be grateful if you could confirm whether our response has addressed your concerns. Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period.\"}", "{\"title\": \"Response from authors\", \"comment\": \"Thank you for taking the time to revisit our work and for your encouraging feedback! We\\u2019re glad to hear that the computational efficiency of ARC aligns with your expectations. 
We appreciate your suggestion to provide a more detailed discussion on TTA in the related work section, and we will ensure it is addressed in our final revision to offer greater clarity and context.\"}", "{\"summary\": \"The paper presents a novel approach to class-incremental memory-free continual learning by integrating out-of-distribution detection with model self-correction. Specifically, the authors observe that performance degradation in continual learning largely stems from the classifier head and suggest adapting it on test samples using an entropy minimization loss while refining its predictions through a modified softmax score. Experimental results on Split CIFAR and Split ImageNet-R demonstrate the method's robustness and effectiveness across both replay-based and memory-free continual learning techniques.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The submission offers a comprehensive analysis of the dynamics of catastrophic forgetting, with a focus on recency bias, and proposes a simple and effective method to alleviate it that can be combined with any continual learning method.\\n\\nThe experimental evaluation is rigorous, covering relevant baselines, standard datasets, and appropriate ablations. The new approach demonstrates robustness to method-specific hyperparameter choices and provides a performance boost across continual learning algorithms.\\n\\nThe paper is well-written and structured, with clearly presented arguments, consistent notation, and high-quality figures that effectively convey the main points.\", \"weaknesses\": \"The main weakness of the paper is that it makes assumptions about the continual learning setting that are not well justified. Computational constraints are often more critical than memory limitations (see Prabhu et al. 2023, *Computationally Budgeted Continual Learning: What Does Matter* and Roth et al. 2024, *A Practitioner's Guide to Continual Multimodal Pretraining*). 
A significant limitation of the proposed method is its requirement for additional gradient updates during inference, even if only a single update per sample is needed.\\n\\nAdditionally, performing a hyperparameter search for the adaptive self-retention procedure could lead to overfitting on the test samples.\\n\\nTo improve the evaluation, it would be best to: a) quantify the computational cost of the self-retention procedure and b) ensure hyperparameter tuning is conducted on a separate dataset from the evaluation dataset.\", \"questions\": \"Did you run the hyperparameter search for the adaptive self-retention procedure on the same dataset that was used for evaluation?\\n\\nHave you tried applying your method to other continual learning algorithms, like BEEF, EASE, MEMO, and FOSTER?\\n\\nIn line 88, you formulate the two observations as empirical findings. However, in Section 3.3, they are called assumptions. Do you have any empirical results that show to what extent these actually hold?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your detailed and clear response, which has addressed most of my concerns. However, I still have the following questions and suggestions:\\n1. I now understand that the temperature $T$ in the Task-based Softmax Score (TSS) is used to alleviate the issue of probability sharpening for earlier tasks caused by the smaller denominator. The use of $i$ in the TSS definition in the original paper makes the formula somewhat difficult to follow. \\n\\n2. Regarding the additional training costs, I apologize for not expressing my concerns clearly earlier. I understand that the ARC algorithm is applied after training on each task. However, I am confused about whether the model trained with ARC is used as the starting point for the next task. 
If this is the case, then this overhead should be accounted for when considering the training cost across all $T$ tasks. I think the authors need to provide a clearer explanation of this aspect of the method.\\n\\n3. Thank you for providing the additional experiments, including CoTTA and the analysis of additional time required. I believe these experiments could be included in the appendix to provide a more comprehensive understanding of ARC. Additionally, regarding the additional time required, is the reported percentage relative to the original inference time? This should be clarified.\\n\\n4. To ensure the reproducibility of results, I suggest providing a clearer table of the final hyperparameter choices across different methods and datasets.\\n\\nOverall, I think leveraging confidence scores to detect past task samples and using them to adjust the model in CIL is a reasonable approach. The authors have provided extensive experiments to demonstrate its effectiveness. Based on this, I am willing to raise my score to 5. I strongly recommend that the authors revise unclear statements in the paper, particularly in the methodology section, to improve the clarity and accessibility of the work.\"}", "{\"comment\": \"Thanks for clarifying the concerns. It is good to know that ARC does not need much computational power when deployed. I believe test-time training is a promising approach to address continual learning challenges, especially since the cost is reasonable. The authors should provide a more detailed discussion of this in the related work section, emphasizing that the use of additional computational power is generally acceptable. Overall, I find ARC to be a promising method, and I am considering increasing the score.\"}", "{\"title\": \"Response from authors\", \"comment\": \">*l. 202 the distributions are defined as different between tasks, but in 4.2 the incremental datasets are defined as IID. 
I believe this is a mistake, can you elaborate?*\\n\\nThank you for pointing this out! We agree that the use of the word IID is not rigorous and will find a better description. To clarify, when we describe the incremental datasets as IID in Section 4.2, we are referring to the dataset as a whole being drawn from an identical underlying distribution. In contrast, in the 5-dataset setting, the overall dataset exhibits a more significant and complex distribution shift, as it combines data from different datasets with distinct characteristics.\\n\\nFor l. 202, we are referring to scenarios where a dataset is split into subsets with disjoint label spaces (e.g., distinct classes). In such cases, the distributions between these subsets differ due to the label-space partitioning.\\n\\nThis distinction was our intended meaning, and we will clarify this more explicitly in the revised version.\\n\\n>*The experiments of figure 3 is quite unclear. It depends on the optimization strategy and the number of epochs for instance. Also, do you use the same fixed learning rate for joint training?*\\n\\nFor all experiments in Figure 3, we did not use the same optimization strategy, number of epochs, or learning rate. Instead, each method is trained individually to ensure full convergence.\\n\\n>*I believe a confusion matrix would be more clearer than Figure 1 at least, maybe even Figure 1 and 2*\\n\\nThanks for the suggestion! We will modify accordingly.\\n\\n>*How much data do you use for linear probe?*\\n\\nAll of the training data is used.\\n\\n>*I believe the difference in performances in figure 3 between finetune and probe makes sense and justifies the claim that the representation knowledge is still transferable across task. However, I do not see why the joint training performances are lower in most cases. Could you elaborate?*\\n\\nGreat point! 
There are several possibilities: \\n- Since Imagenet-R is a relatively hard dataset, joint training might struggle to balance learning across different data subsets. In contrast, sequential training with linear probing could mitigate interference by allowing the model to specialize on each subset and then use linear probing to combine the learned features effectively. \\n- Joint training could face optimization difficulties due to factors such as conflicting gradients from subsets of the data, whereas sequential training with linear probing avoids this by breaking down the optimization problem into smaller, more tractable stages.\\n- Linear probing introduces an additional constraint by freezing the feature extractor, which can act as a form of regularization. This separation can lead to a more robust classifier compared to joint end-to-end optimization.\\n\\nMoreover, in Figure 3, it is evident that for ViTs, the performance of joint training and linear probing is relatively similar. However, for ResNets, particularly as the model scale decreases, joint training consistently underperforms compared to linear probing. This observation aligns with the explanations above, as smaller models are more prone to optimization challenges and interference during joint training, which linear probing can potentially help alleviate.\\n\\n>*The code is not shared*\\n\\nWe promise to release the code upon acceptance.\\n\\n>*Typo in the title*\\n\\nThank you so much for pointing this out! We will correct it in the revised version.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response from authors\", \"comment\": \"Dear reviewer, we would be grateful if you could confirm whether our response has addressed your concerns. 
Please do not hesitate to let us know whether there is anything else you would like to see clarified or improved before the end of the rebuttal period.\"}", "{\"title\": \"Follow-Up on Review Feedback\", \"comment\": \"Dear reviewer ALn4, we noticed that you mentioned the possibility of raising your score based on the discussion and our rebuttal. If there are any additional points or clarifications you would like us to address to support this process, we'd be happy to provide further details. We appreciate your time and effort in evaluating our work and look forward to your decision.\"}", "{\"comment\": \"Dear Authors,\\n\\nI believe you may directly change the title on OpenReview. Please refer to the following part of the ICLR 2025 Camera Ready Instructions:\\n\\n>**Title, Abstract, Supplemental, etc:** In addition to uploading the final PDF for the paper, you may change the paper title, abstract, keywords, tldr, primary area, and supplemental PDF. Changes to the title should be in response to the reviewer or area chair comments, and must not significantly change the scope of the paper. All these changes can be made directly on OpenReview.\\n\\nBest,\\n\\nAC\"}", "{\"title\": \"Response from authors\", \"comment\": \"We thank you for your recognition of our work and for taking the time to provide thoughtful feedback. Your insights and suggestions are greatly appreciated!\"}", "{\"title\": \"Refinement of paper title\", \"comment\": \"Dear AC,\\n\\nWe thank all reviewers and AC for the insightful review and the constructive feedback provided. We greatly appreciate the positive recognition of our work, particularly the effectiveness of leveraging test samples to address catastrophic forgetting for memory-free continual learning. 
Based on the feedback, we are considering refining the title of our paper to better reflect its contribution, specifically 'Adaptive Retention & Correction: Test-Time Training for Continual Learning'.\\n\\nHowever, we are not sure how we can update the title on the OpenReview page. Could you let us know how to do that?\\n\\nBest,\\nAuthors\"}", "{\"summary\": \"The authors observe that the main issue of forgetting is not in the model itself but in the classifier layer. Based on this observation, they introduce a way to balance the classifier layer between the current task and previous tasks. Instead of naively masking the current task output, the authors argue that this is not suitable since the balance among previous classes is also skewed. They come up with a new algorithm to tackle the problem based on two assumptions, retention and correction. With the new algorithm, ARC, they gain performance on various datasets and methods.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The observation that forgetting mainly happens in the classifier sounds reasonable and inspiring. The proposed method, ARC, also solves it well.\\n2. That ARC can be generally applied to different methods is a big advantage, as it can be easily plugged into existing continual learning techniques.\", \"weaknesses\": \"1. Need to report the training time during inference, since it might add overhead when deployed to the real world.\", \"questions\": \"1. What if the incoming sample from the past task is not allowed to be used during inference? For example, if it is deployed to an embedded device that needs real-time inference, will additional training add much overhead? Or is it possible to not update at every step? It will be interesting to provide such results.\\n2. Can this approach extend to other computer vision tasks such as detection and segmentation? 
It would be great if the method could also fix the problem in such a scenario by re-balancing the classification head in object detection.\\n\\nMinor Correction\\n1. At Table 1, it seems strange that for CodaPrompt, the position of the difference for Split CIFAR-100 Inc5 is opposite. The difference sign should be 5.8 $\\\\textcolor{red}{(+0.1)}$ instead of 5.7 $\\\\textcolor{green}{(-0.1)}$.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"We sincerely thank you for taking the time to engage with our work so thoroughly and for providing such thoughtful feedback! Your suggestion to include these results in the main draft or appendix is greatly appreciated, and we will incorporate them into the revised version. We are also grateful for your support and for raising your score. Your insights have been invaluable in helping us refine and strengthen our work.\"}", "{\"title\": \"Response from authors\", \"comment\": \"We thank reviewer hK7E for the thoughtful and constructive comments. It is evident that the reviewer spent considerable time carefully reading our work and provided valuable insights and suggestions. We really appreciate the detailed feedback and the recognition of the strengths in our approach. Below, we address the specific points raised by the reviewer.\\n\\n>*There is a lack of detail on the overall training procedure. How many epochs are used for training?*\\n\\nWe apologize for any confusion. In fact, ARC is not involved in the training process. It is only applied during inference, and thus, details such as the number of training epochs are not applicable. During inference, the classifier is adaptively rebalanced for incoming test samples. This process involves only one gradient update per sample, performed in an online manner. In other words, we only use each sample once for classifier tuning. 
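To make the per-sample update concrete, here is a minimal numerical sketch of one online entropy-minimization step on a classifier head. This is illustrative only, not our exact implementation: the plain linear head, the toy dimensions, and the learning rate are all hypothetical stand-ins, and plain NumPy is used in place of our actual deep-learning code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector."""
    return float(-(p * np.log(p + 1e-12)).sum())

def online_entropy_step(W, feat, lr=0.01):
    """One gradient step of entropy minimization on a linear classifier head.

    W    : (num_classes, feat_dim) classifier weights (toy stand-in)
    feat : (feat_dim,) feature of a single test sample from a frozen backbone
    Each test sample is used exactly once, in an online manner; nothing is stored.
    """
    p = softmax(W @ feat)
    H = entropy(p)
    # For z = W @ feat, the entropy gradient is dH/dz_i = -p_i * (log p_i + H)
    dH_dz = -p * (np.log(p + 1e-12) + H)
    return W - lr * np.outer(dH_dz, feat)  # descend the entropy

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))   # 5 classes, 8-dim features (toy sizes)
x = rng.normal(size=8)        # one incoming test sample's feature

H_before = entropy(softmax(W @ x))
W_updated = online_entropy_step(W, x)
H_after = entropy(softmax(W_updated @ x))
print(f"entropy before: {H_before:.4f}, after: {H_after:.4f}")
```

With a small step size, the single update sharpens the prediction on the current sample and then moves on, which is the sense in which no test data needs to be stored.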
\\n\\n>*Is this correction and retention procedure applied after each task? In that case, this would correspond to storing unlabeled memory data.*\\n\\nYes, this correction and retention procedure is applied after each task. However, we do not store any unlabeled memory data. Instead, our approach relies solely on on-the-fly test samples during inference.\\n\\n>*Confusion on the formula of Definition 1*\\n\\nThank you for pointing out the confusion on $i$ in the max operator. We apologize for the typo in the original definition. The corrected definition is as follows: \\n\\n$S_i = \\\\max_{s \\\\cdot (i-1) \\\\leq k < s \\\\cdot i} \\\\frac{ \\\\exp{z_k / T^{(t - i)}}}{\\\\sum_{j=1}^{s \\\\cdot i} \\\\exp{z_j / T^{(t - i)}}}$\\n\\nTo clarify this definition, we can compare it to the standard softmax calculation. In the standard softmax, the numerator involves the logits for all classes. However, in our case, we focus specifically on the highest logit for task $i$, which is why the max operator is used in the numerator. Similarly, for the denominator, instead of summing over logits for all classes, we restrict the sum to only the classes revealed to the model up to task $i$, which is also the reason why we have a temperature value $T > 1$ (\\\"accommodate the imbalanced denominator for different tasks\\\" as stated in L344). In this way we can minimize classification bias both between past and current tasks and among past tasks themselves. \\n\\n>*This method is connected to Test-Time Adaptation and some discussion in the related work would be appreciated. While TTA methods are rightfully compared in section 4.2, such discussion could be included more thoroughly in the related work.*\\n\\nThank you for the suggestion! While we briefly discussed TTA methods in Section 2.3, we agree that this section can be expanded to provide a more thorough discussion. 
In the revised version, we\u2019ll make sure to include a more detailed discussion to better connect our work with TTA methods.\n\n>*The impact of the temperature is not presented in the paper*\n\nThank you for pointing this out. In the table below, we present ablation results on the impact of the temperature value. As shown, incorporating the temperature parameter improves performance consistently:\n\n|Method|Temperature |Avg Acc on ImageNet-R Inc20|\n|----:|:---:|:---:|\n|iCarL |&check; |67.6 |\n|iCarL |&cross; |67.4 |\n|Memo |&check; |68.2 |\n|Memo |&cross; |67.8 |\n|L2P |&check; |75.1 |\n|L2P |&cross; |74.6 |\n|DualPrompt |&check; |70.5|\n|DualPrompt |&cross; |69.9 |\n\n>*The notation for the temperature and the number of tasks is the same, which is confusing*\n\nThank you for pointing this out! These are supposed to be different notations. We will revise accordingly.\n\n>*In Figure 1, is the independent classifier trained for doing classification of one task only? Then is the problem a TIL problem? If this is the case, the performances are comparing a TIL problem to a CIL problem, which is unfair since the CIL problem is much harder than the TIL problem.*\n\nYes, you are correct that the independent classifier in Figure 1 is trained under the TIL setting, while the shared classifier is trained under the CIL setting. In fact, the purpose of Figure 1 is to demonstrate why the CIL problem is significantly harder than the TIL problem, i.e., primarily due to classifier bias. This highlights how the way classifiers are constructed can have a much greater impact on overall continual learning performance compared to the backbone architecture, which is a major motivation of our approach. We included this figure to bring attention to a key point that might not be immediately obvious to readers less familiar with continual learning.\"}", "{\"title\": \"Response from authors\", \"comment\": \">*Does the ARC algorithm run after training each task? 
If so, what is the additional time required for training and inference?*\n\nThank you for your question. The ARC algorithm is applied after training on each task. However, we emphasize again that ARC does not interfere with the training process itself, so the additional training time required by ARC is zero.\n\nDuring inference, due to time limitations, we provide the extra time required for ARC on 4 representative methods (the same ones we used for the hyperparameter ablation):\n\n|Method|Additional time required|Performance Gain|\n|----:|:---:|:---:| \n|iCarL |11% | 6.6|\n|Memo |8% |2.0 |\n|L2P | 37% |2.5|\n|DualPrompt |32% |1.4|\n\nAs shown, there is a certain amount of additional inference time required for ARC. This is expected, as ARC is specifically designed to address classification bias during inference, which naturally incurs some additional computational cost.\n\nFurthermore, it is worth noting that research in other areas, such as large language models (LLMs), is increasingly exploring methods that enhance performance by utilizing additional inference-time computation. There is a growing consensus that additional inference time is a reasonable trade-off when it leads to significant improvements in model performance. Similarly, we believe the computational cost of our method is well-justified by the substantial performance gains it delivers.\n\n\n>*Line 9 in the pseudocode*\n\nThank you for pointing out this typo! We will correct it in the revised version.\n\n>*The scaling temperature used in $S_i$ and the total number of tasks are both denoted as $T$. Are these referring to the same setting?*\n\nThank you again for pointing this out! These are supposed to be different notations. We will revise accordingly.\n\n>*The hyperparameters used in the reported results are not fully detailed. Were the same hyperparameters applied across different methods and datasets? 
It seems that $\\beta$ will influence the number of test samples used for training, and there is a trade-off between the number of samples and the accuracy of the pseudo-labels. How were the hyperparameters selected?*\n\nWe conducted experiments (shown in Figure 5) to demonstrate how the choice of hyperparameters influences our method, and the results show that:\n - our method is robust to the choice of hyperparameters, a point also acknowledged by Reviewers 1cF7 and hK7E; \n - using the same hyperparameters across all datasets and methods would still yield performance gains. \n\nHowever, we chose not to use the same hyperparameters, following previous methods. To ensure a fair evaluation, we used the best-performing values of $\\gamma$ and $\\beta$ for each method and dataset.\n\n>*In the discussion following Assumption 2, the authors state that a low ratio of $c$ and $\\hat{c}$ indicates a sample is more likely misclassified as belonging to a past task. However, Figure 5 suggests that a larger $\\gamma$ corresponds to better performance, which seems contradictory. Could the authors clarify this point?*\n\nThank you for your comment. We do not believe the results in Figure 5 are contradictory. The results in Figure 5 demonstrate how low the threshold should be for optimal performance. It would only be contradictory if ARC performed better when the threshold was **larger than 1**, which is not the case.\n\nTo further illustrate this, we have conducted additional experiments to evaluate the performance of ARC when $\\gamma = 1.1$. The results are as follows:\n|Method|Avg Acc on ImageNet-R Inc20|\n|----:|:---:|\n|iCarL |65.5 (-2.1) |\n|Memo |67.5 (-0.7) |\n|L2P | 74.7 (-0.4) |\n|DualPrompt |70.3 (-0.2) |\"}", "{\"summary\": \"This paper addresses the challenge of reducing classification bias in class incremental learning (CIL) by utilizing test samples. 
The authors introduce a novel method called Adaptive Retention & Correction (ARC), which dynamically adjusts classifier layers using test samples confidently identified as belonging to previous tasks. Additionally, ARC corrects test samples that are mistakenly classified as part of the current task.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"**Originality:** The method of adaptively adjusting and correcting model predictions using test sample confidence is novel.\", \"**Quality:** Leveraging confidence scores to detect past task samples and using these pseudo-labeled samples to train the model represents a reasonable strategy for mitigating classification bias in CIL.\"], \"weaknesses\": \"1. While the motivation to correct misclassified past task samples is sound, the proposed Task-based Softmax Score (TSS) may be problematic. For the earlier tasks, as the temperature $T^{(t\\u2212i)}$ increases, the probability distribution flattens, which could hinder the model\\u2019s ability to recognize samples from past tasks.\\n\\n2. Although the experiments demonstrate the effectiveness of ARC, the paper lacks comparisons with other test-time adaptation (TTA) benchmarks, such as CoTTA. Additionally, the introduction of extra data for classifier training (referred to as retention) raises concerns about increased training costs that should be addressed.\\n\\n3. The paper lacks clarity in several areas, as detailed below:\\n- 3.1 The description of the experimental setup in Figure 1 is inadequate. For instance, is the class-incremental learning setup applied in this toy example? How are the two types of classifiers trained and tested? Which task\\u2019s accuracy is measured in the figure?\\n- 3.2 Does the ARC algorithm run after training each task? If so, what is the additional time required for training and inference? 
Additionally, should line 9 in the pseudocode be corrected to $w \\\\leq \\\\gamma$?\\n- 3.3 The scaling temperature used in $S_i$ and the total number of tasks are both denoted as $T$. Are these referring to the same setting?\", \"questions\": \"1. The hyperparameters used in the reported results are not fully detailed. Were the same hyperparameters applied across different methods and datasets? It seems that $\\\\beta$ will influence the number of test samples used for training, and there is a trade-off between the number of samples and the accuracy of the pseudo-labels. How were the hyperparameters selected?\\n\\n2. In the discussion following Assumption 2, the authors state that a low ratio of $c$ and $\\\\hat{c}$ indicates a sample is more likely misclassified as belonging to a past task. However, Figure 5 suggests that a larger corresponds to better performance, which seems contradictory. Could the authors clarify this point?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response from authors\", \"comment\": \"Thank you for your valuable feedback! Regarding the most critical point, we believe there might be a misunderstanding of the continual learning (CL) setting. Specifically, in real-world CL use cases:\\n\\n1. A model is initially trained on a dataset and deployed in practice.\\n2. Over time, new requirements emerge or new data becomes available, necessitating updates to the model.\\n3. The updated model is redeployed and continues to be used in practice.\\n4. This process repeats as additional data or requirements arise.\\n\\nARC specifically operates during Step 3 and does not require storing any past-task data. To illustrate, consider a scenario where we initially train a model to distinguish between cats and dogs. Later, we may want to add a new class, such as birds, to the model (as our approach focuses on class-incremental learning). 
After training the model on bird data and redeploying it, the model\u2019s practical application is to classify among cats, dogs, and birds. Naturally, during deployment, the model will encounter samples from previous tasks (cats and dogs), allowing ARC to be utilized without storing any past-task data.\n\nSubsequently, we might add another class, such as turtles. After updating the model to include this new class, it is redeployed to classify among four types of animals. Once again, ARC can be applied in a similar manner during deployment, without requiring any storage of past-task data. This cycle reflects typical deployment scenarios, and we believe this is how ARC can be applied after every task in practice. \n\nFurthermore, there is also a metric in CL that refers to the **averaged average accuracy**:\n\n$\\bar{A} = \\frac{1}{N}\\sum_{B=1}^{N} A_B$\n\nwhere $A_B$ is defined in equation 4 of our paper and $N$ is the number of tasks. This metric explicitly measures cumulative performance by evaluating the model after each task, further validating our design choice of applying our method after each sequential training phase.\n\nTherefore, regarding your concern that our approach implicitly stores past-task memory data, we respectfully disagree. Our method does not store any data from past tasks. Instead, as part of the natural process of being deployed in practice, the model may encounter past-task samples, enabling our method to function without violating the \u201cmemory-free\u201d claim.\n\nNonetheless, we greatly appreciate your suggestion and have conducted additional experiments under the proposed setup, where our method is applied **only after the final training task**. 
The results are presented in the table below:\n\n|Method|Avg Acc on ImageNet-R Inc20|\n|----:|:---:|\n|iCarL |61.0 |\n|iCarL + ARC (after each task) |67.6 |\n|iCarL + ARC (only after the final task) |67.1 |\n|Memo |66.2 |\n|Memo + ARC (after each task) |68.2 |\n|Memo + ARC (only after the final task) |67.1 |\n|L2P |72.6 |\n|L2P + ARC (after each task) |75.1 |\n|L2P + ARC (only after the final task)|74.3 |\n|DualPrompt |69.1 |\n|DualPrompt + ARC (after each task) |70.5 |\n|DualPrompt + ARC (only after the final task) |69.9 |\n\nAs shown in the results, ARC still provides significant improvements even when applied only after the final training task. We strongly believe that these findings support the effectiveness of our approach.\n\n>*Details regarding the training procedure*\n\nWe appreciate your suggestion and will include these details in the appendix to provide readers with a clearer understanding of our training procedure.\n\nIn summary, for non-prompt-based methods, we use the SGD optimizer, and for prompt-based methods, we employ the Adam optimizer. During testing, given a batch of test samples, we calculate the loss using Equation 3 from the paper. Based on this loss, we perform a single gradient update. Theoretically, this process is equivalent to training on the test data for just 1 epoch, where each test sample is used exactly once.\n\nWe want to emphasize that this design of performing only one gradient update aligns with our goal of simulating realistic scenarios. In standard testing, a model processes a batch of test samples with a single forward pass to produce predictions. In ARC, we adhere to this principle by limiting the process to a single forward pass followed by one gradient update, maintaining the efficiency and practicality of the testing procedure.\n\nWe sincerely thank you once again for your thoughtful feedback. 
If there is anything else that requires clarification, please let us know before the rebuttal period concludes.\"}" ] }
9aZ2ixiYGd
Vision and Language Synergy for Rehearsal Free Continual Learning
[ "Muhammad Anwar Ma'sum", "Mahardhika Pratama", "Savitha Ramasamy", "Lin Liu", "H Habibullah", "Ryszard Kowalczyk" ]
The prompt-based approach has demonstrated its success for continual learning problems. However, it still suffers from catastrophic forgetting due to inter-task vector similarity and unfitted new components of previously learned tasks. On the other hand, the language-guided approach falls short of its full potential due to minimal knowledge utilization and participation in the prompt tuning process. To correct this problem, we propose a novel prompt-based structure and algorithm that incorporate 4 key concepts: (1) language as input for prompt generation, (2) task-wise generators, (3) limiting the matching-descriptor search space via soft task-id prediction, and (4) the generated prompt as auxiliary data. Our experimental analysis shows the superiority of our method to existing SOTAs on the CIFAR100, ImageNet-R, and CUB datasets with significant margins, i.e. up to 30% final average accuracy, 24% cumulative average accuracy, 8% final forgetting measure, and 7% cumulative forgetting measure. Our historical analysis confirms our method successfully maintains the stability-plasticity trade-off in every task. Our robustness analysis shows the proposed method consistently achieves high performance across various prompt lengths, layer depths, and numbers of generators per task compared to the SOTAs. We provide a comprehensive theoretical analysis and complete numerical results in the appendix sections. The method code is available at https://github.com/anwarmaxsum/LEAPGEN for further study.
[ "continual learning", "prompt dilemma", "language descriptors", "prompt generator", "catastrophic forgetting" ]
Accept (Poster)
https://openreview.net/pdf?id=9aZ2ixiYGd
https://openreview.net/forum?id=9aZ2ixiYGd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ycCytZKK9w", "yXzIG99anI", "yUjQJx93TX", "xCC1BXdlnl", "v7gzrRY0pC", "uXIldBBjrJ", "tbyhSJBQUZ", "qW6ABhE7Xx", "q5PWyIzzTy", "pANWFWKphC", "odOumiGhKf", "mGRUlqQ7vf", "lCaJa6by4Q", "jFIgXMuMtL", "embxQytnkG", "e5j2kpx5YU", "cMEPDZfXBz", "btJHmb750y", "aJc1Vs4nZm", "aFapLpxNt0", "X3c6R4lvhD", "WzfeQfM2Us", "Wb445PEJjJ", "VRoNibWyBV", "T8jcgbaBDB", "R2qSlDeh8d", "Pu8dRHN9eQ", "PCs9L8fQpx", "G4IYNAslir", "F1Xu9nwKUr", "DX68KVL5NT", "DBcwqK3Vwf", "AgriUDf06v", "A0elasXnjs", "9kKz5hbvml", "9PdQCrAqdW", "8ddOan9k1L", "86V2gLU6Eg", "7vqmqEtKlw", "7QGBiv1ETr", "2jnCvGnWJl", "0W51MBShHR", "0TK5GhGSjm", "06aX7dxmCC" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733053755970, 1732312591990, 1732525762591, 1737523784936, 1732240503008, 1732240705483, 1730677303742, 1733221231849, 1732867410633, 1732712856319, 1729603710662, 1732869566064, 1732241414795, 1732863339676, 1732580148466, 1732242431177, 1732312285597, 1732504156977, 1732881704879, 1732312388925, 1733054027721, 1732869080383, 1732239457991, 1733292155765, 1732243700781, 1732863017017, 1732865847638, 1732710278352, 1732864367723, 1733223974968, 1734663825550, 
1733282452483, 1733220309314, 1732525789244, 1732866063553, 1730688945462, 1732861890515, 1730448004828, 1732714303164, 1732862192741, 1732876031525, 1733295995118, 1732243909883, 1732711811588 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Area_Chair_6jbB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Reviewer_J5sH" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Reviewer_bACr" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Reviewer_bACr" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Area_Chair_6jbB" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6684/Reviewer_bACr" ], [ "ICLR.cc/2025/Conference/Submission6684/Area_Chair_6jbB" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Reviewer_ueBy" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Reviewer_UGi2" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Reviewer_J5sH" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ], [ "ICLR.cc/2025/Conference/Submission6684/Authors" ] ], "structured_content_str": [ "{\"title\": \"Fourth Follow Up\", \"comment\": \"Dear Reviewer UGi2,\\n\\nWe would like to follow up on our responses. Could you please kindly review our updated manuscript or previous comments regarding your concerns? \\n\\nWe believe we have addressed all your concerns in our updated paper and previous responses (comments). Thus, we kindly request Reviewer UGi2 to reevaluate the score for our paper.\\n\\nThank you.\\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Follow Up on Author Response and Revised Manuscript\", \"comment\": \"We would like to follow up on our response and revised manuscript. We would appreciate it if Reviewer j5sH could look at our revised manuscript, and offer additional comments.\"}", "{\"comment\": \"Dear Reviewer J5sH,\\n\\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\\n\\nBest,\\n\\nAC of Submission6684\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Autrhor Response to Reviewer ueBy for Q1-Q5\", \"comment\": \"Q1. 
Between L155-160, under the context of previous task-specific approaches, it is mentioned that two different tasks could produce similar key vectors. It would make a stronger point to justify this statement with some empirical evidence.\", \"our_response\": \"Thank you for your question. The absence of a generator means the prompt components are produced directly from the selected (top-k) descriptors, i.e. $PC_i = E_i$, and the prompt is produced as $P = \\Sigma_i^k PC_i = \\Sigma_i^k E_i$.\"}", "{\"title\": \"Author Response to Reviewer ueBy for Q6-Sd\", \"comment\": \"Q6. Is the MLP head (classifier) also trained for each task? How can it produce a list of softmax values of all classes for equation 7?\", \"our_response\": \"Thank you for the correction. We have revised the symbol into $t$ (without hat).\"}", "{\"summary\": \"This paper proposes a novel method LEAPGen (Language as Prompt Generator) for continual learning that leverages language knowledge for prompt generation.\nLEAPGen uses language descriptors for each class as input for prompt generation, rather than shared embeddings. For each task, LEAPGen employs a fixed number of generator networks, a learnable task-level key, and learnable class-level parameters associated with the descriptor embeddings. To generate a prompt, it first predicts the task ID using a soft prediction mechanism, then selects top-k matching descriptor embeddings based on cosine similarity to the input. 
Experimental results on datasets like CIFAR100, ImageNet-R, and CUB demonstrate that LEAPGen significantly outperforms existing state-of-the-art methods in terms of accuracy and forgetting measure, addressing key limitations of previous prompt-based continual learning approaches.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"LEAPGen uses language descriptors as input for prompt generation, leveraging richer semantic information compared to previous methods.\", \"As a prompt-based method, it doesn't require storing and replaying old data.\", \"By using fixed language descriptors as input, LEAPGen potentially reduces catastrophic forgetting compared to methods that continuously update shared embeddings.\", \"Experimental results show significant improvements over state-of-the-art methods.\"], \"weaknesses\": [\"The paper does not explicitly specify some key details about the pre-trained models used. Which Sentence Transformer does the paper employ? It is unclear whether this is a pre-trained language model and whether any fine-tuning of these models occurs during the continual learning process.\", \"The paper does not provide a clear comparison of the increase in parameters due to the addition of generators for new tasks. Based on the paper, the generator is not a lightweight module. As the proposed method requires adding new generators for some new tasks, even if not all tasks, this could lead to a significant growth in the number of parameters as the number of tasks increases. 
It also results in an unfair comparison with related works.\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer bACr\", \"comment\": \"Thank you for your response.\", \"please_kindly_note_that\": [\"**LEAPGen utilizes the same setting and resources (text encoder, GPT-generated descriptors, etc.) as ConvPrompt.**\", \"**LEAPGen-lite utilizes even fewer resources (without GPT-generated descriptors, fewer parameters, less running time, and less storage) that are comparable to the other SOTAs.**\", \"Considering such **comparable resources (LEAPGen vs ConvPrompt, and LEAPGen-lite vs other SOTAs)**, we believe **we have satisfied the fairness aspect.**\"], \"also_please_kindly_note_that\": \"- **Descriptors are optional in our method; it works excellently and significantly outperforms existing SOTAs by utilizing class names as the language modality. Thus, our method doesn't rely on GPT or LLMs.** \n- **GPT is utilized via online query, so we don't save the GPT model. It adds a fair amount of additional running time, but no additional storage. Again, LEAPGen and ConvPrompt are evaluated with the same setting.** Querying GPT online via an API and a Python script is a common practice. We don't need to download and save the GPT model. This is applicable in real applications, as in our simulation.\n\nPlease go thoughtfully through our revised **manuscript (mainly Sec. 5.2.g and Appendix D.3)** and our previous responses (comments). \n\nIn summary, once again we emphasize that **the impact of our method is not bounded by GPT, parameters, running time, and storage.**\n\nPlease let us know which part you are not sure about. Thank you.\n\nBest Regards,\n\nAuthor of Submission 6684\"}", "{\"title\": \"Third Follow Up (Cont'd)\", \"comment\": \"W2. Increase of Parameters:\n- Section 5.2.d., Tables 2,3,4, and Figure 3 i.e. 
Analysis of LEAPGen-lite (lightweight generators and without descriptors) performance\n\n5.2.d) LEAPGen-lite\u2019s Performance: As shown in Tables 2-4 and Figure 3, despite utilizing far smaller (2.67% params) generators and being without class descriptors generated by an LLM, LEAPGen-lite still outperforms the existing methods significantly, i.e. 4.7-25% FAA and 3-11% CAA in CIFAR100, and 3-30% FAA and 2-26% CAA in the ImageNet-R dataset. LEAPGen-lite also achieves a low forgetting rate in these 2 datasets for all task settings. In the CUB dataset, LEAPGen-lite achieves a comparable performance to HiDe-Prompt and outperforms the other SOTAs by a significant margin, i.e. 6-20% FAA and 3.3-14% CAA. This evidence proves that our ideas, i.e. language embedding as input for prompt generation, task-wise generators, the soft task-id predictor, and learning with auxiliary data, are not bounded by the generated descriptors and the size of the generators.\n\nHere is a snapshot of the tables; please see our revised paper for the full tables.\n\n| **Method** | **ImageNet-R** | | | |\n|------------------|------------------|------------------|-----------------|-----------------|\n| | **FAA** | **CAA** | **FFM** | **CFM** |\n| | 5 Tasks @40c | | | |\n| L2P | 64.62 \u00b1 0.32 | 68.01 \u00b1 0.42 | 3.94 \u00b1 0.16 | 3.55 \u00b1 0.20 |\n| DualPrompt | 69.71 \u00b1 0.11 | 72.78 \u00b1 0.14 | 3.32 \u00b1 0.16 | 2.78 \u00b1 0.25 |\n| CODA-P | 74.89 \u00b1 0.36 | 79.71 \u00b1 1.27 | 8.89 \u00b1 0.65 | 7.65 \u00b1 0.98 |\n| LGCL | 69.93 \u00b1 0.21 | 72.91 \u00b1 0.19 | 3.04 \u00b1 0.36 | 2.50 \u00b1 0.38 |\n| HiDe-Prompt | 75.40 \u00b1 0.27 | 78.88 \u00b1 0.04 | 3.15 \u00b1 0.46 | 2.64 \u00b1 0.16 |\n| PGP | 69.71 \u00b1 0.15 | 72.77 \u00b1 0.07 | 3.36 \u00b1 0.23 | 2.85 \u00b1 0.25 |\n| EvoPrompt | 77.27 \u00b1 0.40 | 81.67 \u00b1 0.18 | 1.79 \u00b1 0.31 | 1.41 \u00b1 0.32 |\n| CPrompt | 78.65 \u00b1 0.00 | 82.44 \u00b1 0.00 | 6.00 \u00b1 0.00 | 
5.49 \\u00b1 0.00 |\\n| ConvPrompt | 79.36 \\u00b1 0.08 | 82.93 \\u00b1 0.24 | 3.42 \\u00b1 0.05 | 2.36 \\u00b1 0.16 |\\n| **LEAPGen-lite** | **82.44 \\u00b1 0.63** | **84.37 \\u00b1 0.90** | **0.43 \\u00b1 0.08** | **0.17 \\u00b1 0.06** |\\n| **LEAPGen** | **82.79 \\u00b1 0.32** | **85.06 \\u00b1 0.29** | **0.51 \\u00b1 0.04** | **0.18 \\u00b1 0.07** |\\n| | ImageNet-R | | | |\\n| L2P | 62.50 \\u00b1 0.51 | 67.05 \\u00b1 0.47 | 5.01 \\u00b1 0.40 | 4.41 \\u00b1 0.43 |\\n| DualPrompt | 68.59 \\u00b1 0.24 | 72.18 \\u00b1 0.20 | 4.61 \\u00b1 0.07 | 3.70 \\u00b1 0.18 |\\n| CODA-P | 73.77 \\u00b1 0.50 | 79.38 \\u00b1 1.48 | 7.94 \\u00b1 0.08 | 6.72 \\u00b1 0.79 |\\n| LGCL | 68.65 \\u00b1 0.25 | 72.57 \\u00b1 0.19 | 4.75 \\u00b1 0.33 | 3.38 \\u00b1 0.58 |\\n| HiDe-Prompt | 75.75 \\u00b1 0.40 | 79.27 \\u00b1 0.17 | 2.29 \\u00b1 0.27 | 2.33 \\u00b1 0.17 |\\n| PGP | 68.62 \\u00b1 0.14 | 72.19 \\u00b1 0.20 | 4.53 \\u00b1 0.40 | 3.63 \\u00b1 0.35 |\\n| EvoPrompt | 76.00 \\u00b1 0.26 | 80.97 \\u00b1 0.30 | 4.22 \\u00b1 0.42 | 3.59 \\u00b1 0.52 |\\n| CPrompt | 76.32 \\u00b1 0.53 | 81.50 \\u00b1 0.30 | 6.10 \\u00b1 0.75 | 5.60 \\u00b1 1.35 |\\n| ConvPrompt | 77.08 \\u00b1 0.26 | 81.47 \\u00b1 0.10 | 4.17 \\u00b1 0.04 | 3.11 \\u00b1 0.17 |\\n| **LEAPGen-lite** | **82.38 \\u00b1 1.04** | **85.14 \\u00b1 0.52** | **3.01 \\u00b1 1.19** | **2.13 \\u00b1 0.60** |\\n| **LEAPGen** | **84.09 \\u00b1 0.93** | **85.54 \\u00b1 0.65** | **1.46 \\u00b1 1.25** | **2.11 \\u00b1 1.21** |\\n| | ImageNet-R | | | |\\n| L2P | 57.40 \\u00b1 0.31 | 63.33 \\u00b1 0.21 | 10.76 \\u00b1 0.45 | 7.88 \\u00b1 0.17 |\\n| DualPrompt | 65.19 \\u00b1 0.17 | 70.31 \\u00b1 0.29 | 7.30 \\u00b1 0.18 | 5.16 \\u00b1 0.34 |\\n| CODA-P | 70.55 \\u00b1 0.71 | 77.08 \\u00b1 1.02 | 8.23 \\u00b1 0.86 | 6.95 \\u00b1 0.70 |\\n| LGCL | 64.96 \\u00b1 0.67 | 70.18 \\u00b1 0.37 | 7.35 \\u00b1 0.65 | 5.05 \\u00b1 0.32 |\\n| HiDe-Prompt | - | 81.60 \\u00b1 0.48 | - | 2.23 \\u00b1 0.38 |\\n| PGP | 65.24 \\u00b1 0.25 | 70.36 \\u00b1 
0.26 | 7.17 \u00b1 0.21 | 5.09 \u00b1 0.25 |\n| EvoPrompt | 74.93 \u00b1 0.64 | 79.92 \u00b1 0.13 | 6.72 \u00b1 0.90 | 5.67 \u00b1 0.26 |\n| CPrompt | 74.23 \u00b1 0.17 | 79.82 \u00b1 0.51 | 5.98 \u00b1 0.24 | 5.54 \u00b1 0.48 |\n| ConvPrompt | 73.93 \u00b1 0.36 | 78.92 \u00b1 0.37 | 4.87 \u00b1 0.57 | 3.57 \u00b1 0.25 |\n| **LEAPGen-lite** | **83.67 \u00b1 0.39** | **85.65 \u00b1 0.33** | **1.06 \u00b1 0.24** | **0.47 \u00b1 0.14** |\n| **LEAPGen** | **87.03 \u00b1 0.12** | **87.81 \u00b1 0.48** | **2.17 \u00b1 0.17** | **2.54 \u00b1 0.77** |\n\n**Please Note: The analysis of LEAPGen-lite empirically proves that our method doesn't rely on GPT/LLMs or complex generator networks (a high number of parameters).**\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear Reviewer bACr,\n\nThank you for your feedback on our paper. Could you please kindly review our latest manuscript regarding your concern?\", \"here_are_the_pointers_to_your_points_on_the_use_of_chatgpt\": \"- Section 5.2.d Tables 2,3,4, and Figure 3: Analysis of LEAPGen-lite (lightweight generators and without descriptors) performance that proves the significant performance of our ideas is not bounded by LLM-generated descriptors or the complexity (parameters and size) of the generators.\n\n- Section 5.2.g: Analysis of parameters, running time, and storage\n\n- Appendix D.3: Detailed Analysis of Performance and Cost TradeOffs.\n\nThank you.\n\nBest Regards,\n\nAuthor of Submission 6684\"}", "{\"summary\": \"This paper introduces a new prompt-based approach for continual learning that takes advantage of language guidance. It uses task-wise generators, soft task ID prediction, and generated prompts as auxiliary data. The method outperforms state-of-the-art models on the CIFAR100, ImageNet-R, and CUB datasets with significant improvements in accuracy and forgetting measures.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The paper provides a detailed explanation of current prompt-based continual learning methods and analyzes the weakness associated with these methods, effectively leading to the introduction of the authors' proposed method.\\n2. The paper presents a novel language-guided continual learning method and thoroughly demonstrates its effectiveness through extensive experiments.\", \"weaknesses\": \"1. The comparison mentioned in lines 183-184 between \\\"the photo of car\\\" and \\\"the photo of cat,\\\" stating they have 15/16 similarity but 1/16 dissimilarity, is confusing. The authors compared the similarity between these two prompts on a **letter-by-letter** basis and concluded that they are similar. However, the similarity between two prompts should not be evaluated in such a manner. After being processed by the tokenizer, the embeddings of the two prompts are not similar, as their meanings differ significantly.\\n2. The proposed method relies on using ChatGPT to generate descriptive terms for each class and requires an additional Sentence Transformer to obtain embeddings. The introduction of these extra resources creates an unfair comparison with other methods and limits the practical applicability of the approach in real-world scenarios.\\n3. Although the proposed method achieves state-of-the-art performance on multiple datasets, the improvement is minor on some, such as the comparison with HiDe-Prompt on the CUB200 dataset. This is particularly relevant given the substantial additional resources, like ChatGPT, required by the method, which other approaches do not utilize. Moreover, this suggests that the performance gains on fine-grained datasets may not be significant, as the generated descriptive terms for each class are quite similar.\\n4. The authors have used excessive line spacing adjustments, which negatively impact the visual presentation of the paper, such as in lines 437-438 and 446-447. 
They should revise the layout of the entire paper to provide a better reading experience for the reader.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Second Follow Up (Cont'd)\", \"comment\": \"W2. Increase of Parameters:\\n- Section 5.2.d, Tables 2, 3, 4, and Figure 3: Analysis of LEAPGen-lite (lightweight generators and without descriptors) performance.\\n\\n5.2.d) LEAPGen-lite\\u2019s Performance: As shown in Tables 2-4 and Figure 3, despite utilizing far smaller (2.67% params) generators and no class descriptors generated by an LLM, LEAPGen-lite still outperforms the existing methods significantly, i.e. 4.7-25% FAA and 3-11% CAA on CIFAR100, and 3-30% FAA and 2-26% CAA on the ImageNet-R dataset. LEAPGen-lite also achieves a low forgetting rate on these 2 datasets for all task settings. On the CUB dataset, LEAPGen-lite achieves a comparable performance to HiDe-Prompt and outperforms other SOTAs with a significant margin, i.e. 6-20% FAA and 3.3-14% CAA. This evidence proves our ideas i.e.
language embedding as input\\nfor prompt generation, task-wise generators, soft task-id predictor, and learning with auxiliary data\\nare not bounded by the generated descriptors and the size of generators.\\n\\nHere is the snapshot of the tables, please see our revised paper for the full tables.\\n\\n| **Method** | **ImageNet-R** | | | |\\n|------------------|------------------|------------------|-----------------|-----------------|\\n| | **FAA** | **CAA** | **FFM** | **CFM** |\\n| | 5 Tasks @40c | | | |\\n| L2P | 64.62 \\u00b1 0.32 | 68.01 \\u00b1 0.42 | 3.94 \\u00b1 0.16 | 3.55 \\u00b1 0.20 |\\n| DualPrompt | 69.71 \\u00b1 0.11 | 72.78 \\u00b1 0.14 | 3.32 \\u00b1 0.16 | 2.78 \\u00b1 0.25 |\\n| CODA-P | 74.89 \\u00b1 0.36 | 79.71 \\u00b1 1.27 | 8.89 \\u00b1 0.65 | 7.65 \\u00b1 0.98 |\\n| LGCL | 69.93 \\u00b1 0.21 | 72.91 \\u00b1 0.19 | 3.04 \\u00b1 0.36 | 2.50 \\u00b1 0.38 |\\n| HiDe-Prompt | 75.40 \\u00b1 0.27 | 78.88 \\u00b1 0.04 | 3.15 \\u00b1 0.46 | 2.64 \\u00b1 0.16 |\\n| PGP | 69.71 \\u00b1 0.15 | 72.77 \\u00b1 0.07 | 3.36 \\u00b1 0.23 | 2.85 \\u00b1 0.25 |\\n| EvoPrompt | 77.27 \\u00b1 0.40 | 81.67 \\u00b1 0.18 | 1.79 \\u00b1 0.31 | 1.41 \\u00b1 0.32 |\\n| CPrompt | 78.65 \\u00b1 0.00 | 82.44 \\u00b1 0.00 | 6.00 \\u00b1 0.00 | 5.49 \\u00b1 0.00 |\\n| ConvPrompt | 79.36 \\u00b1 0.08 | 82.93 \\u00b1 0.24 | 3.42 \\u00b1 0.05 | 2.36 \\u00b1 0.16 |\\n| **LEAPGen-lite** | **82.44 \\u00b1 0.63** | **84.37 \\u00b1 0.90** | **0.43 \\u00b1 0.08** | **0.17 \\u00b1 0.06** |\\n| **LEAPGen** | **82.79 \\u00b1 0.32** | **85.06 \\u00b1 0.29** | **0.51 \\u00b1 0.04** | **0.18 \\u00b1 0.07** |\\n| | ImageNet-R | | | |\\n| L2P | 62.50 \\u00b1 0.51 | 67.05 \\u00b1 0.47 | 5.01 \\u00b1 0.40 | 4.41 \\u00b1 0.43 |\\n| DualPrompt | 68.59 \\u00b1 0.24 | 72.18 \\u00b1 0.20 | 4.61 \\u00b1 0.07 | 3.70 \\u00b1 0.18 |\\n| CODA-P | 73.77 \\u00b1 0.50 | 79.38 \\u00b1 1.48 | 7.94 \\u00b1 0.08 | 6.72 \\u00b1 0.79 |\\n| LGCL | 68.65 \\u00b1 0.25 | 72.57 \\u00b1 0.19 | 4.75 \\u00b1 0.33 | 
3.38 \\u00b1 0.58 |\\n| HiDe-Prompt | 75.75 \\u00b1 0.40 | 79.27 \\u00b1 0.17 | 2.29 \\u00b1 0.27 | 2.33 \\u00b1 0.17 |\\n| PGP | 68.62 \\u00b1 0.14 | 72.19 \\u00b1 0.20 | 4.53 \\u00b1 0.40 | 3.63 \\u00b1 0.35 |\\n| EvoPrompt | 76.00 \\u00b1 0.26 | 80.97 \\u00b1 0.30 | 4.22 \\u00b1 0.42 | 3.59 \\u00b1 0.52 |\\n| CPrompt | 76.32 \\u00b1 0.53 | 81.50 \\u00b1 0.30 | 6.10 \\u00b1 0.75 | 5.60 \\u00b1 1.35 |\\n| ConvPrompt | 77.08 \\u00b1 0.26 | 81.47 \\u00b1 0.10 | 4.17 \\u00b1 0.04 | 3.11 \\u00b1 0.17 |\\n| **LEAPGen-lite** | **82.38 \\u00b1 1.04** | **85.14 \\u00b1 0.52** | **3.01 \\u00b1 1.19** | **2.13 \\u00b1 0.60** |\\n| **LEAPGen** | **84.09 \\u00b1 0.93** | **85.54 \\u00b1 0.65** | **1.46 \\u00b1 1.25** | **2.11 \\u00b1 1.21** |\\n| | ImageNet-R | | | |\\n| L2P | 57.40 \\u00b1 0.31 | 63.33 \\u00b1 0.21 | 10.76 \\u00b1 0.45 | 7.88 \\u00b1 0.17 |\\n| DualPrompt | 65.19 \\u00b1 0.17 | 70.31 \\u00b1 0.29 | 7.30 \\u00b1 0.18 | 5.16 \\u00b1 0.34 |\\n| CODA-P | 70.55 \\u00b1 0.71 | 77.08 \\u00b1 1.02 | 8.23 \\u00b1 0.86 | 6.95 \\u00b1 0.70 |\\n| LGCL | 64.96 \\u00b1 0.67 | 70.18 \\u00b1 0.37 | 7.35 \\u00b1 0.65 | 5.05 \\u00b1 0.32 |\\n| HiDe-Prompt | - | 81.60 \\u00b1 0.48 | - | 2.23 \\u00b1 0.38 |\\n| PGP | 65.24 \\u00b1 0.25 | 70.36 \\u00b1 0.26 | 7.17 \\u00b1 0.21 | 5.09 \\u00b1 0.25 |\\n| EvoPrompt | 74.93 \\u00b1 0.64 | 79.92 \\u00b1 0.13 | 6.72 \\u00b1 0.90 | 5.67 \\u00b1 0.26 |\\n| CPrompt | 74.23 \\u00b1 0.17 | 79.82 \\u00b1 0.51 | 5.98 \\u00b1 0.24 | 5.54 \\u00b1 0.48 |\\n| ConvPrompt | 73.93 \\u00b1 0.36 | 78.92 \\u00b1 0.37 | 4.87 \\u00b1 0.57 | 3.57 \\u00b1 0.25 |\\n| **LEAPGen-lite** | **83.67 \\u00b1 0.39** | **85.65 \\u00b1 0.33** | **1.06 \\u00b1 0.24** | **0.47 \\u00b1 0.14** |\\n| **LEAPGen** | **87.03 \\u00b1 0.12** | **87.81 \\u00b1 0.48** | **2.17 \\u00b1 0.17** | **2.54 \\u00b1 0.77** |\\n\\n**Please Note: The analysis of LEAPGen-lite empirically proves that our method doesn't rely on GPT/LLM or complex generator networks (high number of
parameters)**\\n\\nThank you.\\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Author Response to Reviewer J5sH\", \"comment\": \"Q1/W1. The paper does not explicitly specify some key details about the pre-trained models used. Which Sentence Transformer does the paper employ? Whether this is a pre-trained language model and if any fine-tuning of these models occurs during the continual learning process.\", \"our_response\": [\"Thank you for your concern.\", \"We added a new experiment and analysis on the trade-offs between cost and performance that comprehensively evaluates the proposed methods and existing SOTAs in terms of performance (accuracy and forgetting), number of parameters, and running time. We also compare our proposed method with the large versions of SOTAs that execute a higher number of parameters. Please see Appendix D3. Our analysis shows that, in comparison to both the standard and large versions of SOTAs, our method achieves better performance (average accuracy and average forgetting) and requires a shorter running time.\", \"Even in comparison with SOTAs with a smaller number of parameters, our proposed method still achieves a shorter running time. The smallest number of parameters, i.e. HiDe-Prompt, doesn't imply a shorter running time, as it requires additional operations, i.e. drawing (augmenting) prototypes and constructing uninstructed representations that run over many epochs.\", \"To ensure fairness, we evaluated all the methods by applying the best setting, i.e. as mentioned in the references and official code manuals, including the preferred hyper-parameters. Please note that increasing the number of parameters, i.e. by increasing the number of generators or the prompt length, doesn't always improve the model performance. As shown in our sensitivity analysis in sec 5.2.
point e and Figure 4, beyond a certain point the increase in prompt length or number of generators decreases the model performance.\", \"Thus, we believe we have evaluated the consolidated methods in a fair way, i.e. by applying their respective official settings and parameters.\"]}", "{\"title\": \"Third Follow Up (Cont'd)\", \"comment\": \"- Running time analysis in 5.2.g.\\n\\n| Method | Desc | #Params(M) | Running Time (h) | | | | Storage (MB) |\\n|--------------|:----:|:----------:|:----------------:|:-----:|:------:|:-----:|:------------:|\\n| | | | T.Desc | Inf | Tr+Inf | Total | |\\n| HiDe-Prompt | - | 0.15 | - | 0.019 | 5.40 | 5.40 | 334 |\\n| ConvPrompt | v | 1.28 | 1.07 | 0.033 | 1.04 | 2.11 | 346 |\\n| LEAPGen-lite | - | 0.16 | - | 0.028 | 0.53 | 0.53 | 332 |\\n| LEAPGen | v | 6.35 | 1.07 | 0.025 | 0.72 | 1.79 | 567 |\\n\\nT.Desc, Tr, and Inf denote time for generating descriptors, training, and inference respectively, detailed in Appendix D3.\\n\\n5.2.g) Parameters, Running Time, and Storage: The table above compares the number of parameters, running time (ImageNet-R), and storage of our methods and existing SOTAs. Despite having a higher number of parameters and storage, LEAPGen consumes less running time than existing SOTAs in both training+inference and total running time. LEAPGen-lite incurs the least cost in total running time and storage and requires relatively few parameters and little inference time. LEAPGen and ConvPrompt require additional time to generate descriptors, which increases their total simulation time. Despite having the fewest parameters, HiDe-Prompt requires the longest training and total times since it needs extra operations to generate uninstructed class representations. \\n\\n- Detailed setting in Appendix E; LEAPGen and ConvPrompt use the same setting\\n\\nE DETAILED EXPERIMENTAL SETTING\", \"existing_methods\": \"The existing SOTAs are run by executing the official implementation (code) of the respective methods.
The hyper-parameter settings are chosen based on the official settings. HiDe-Prompt utilizes S-Prompt (Wang et al., 2022a), i.e. similar to DualPrompt but without the global (task-shared) prompt. LGCL and PGP don\\u2019t propose a specific prompt structure but rather utilize L2P and DualPrompt. The reported results for PGP and LGCL are obtained with the DualPrompt structure, which is the best from their results. The other methods, i.e. L2P, DualPrompt, CODA-P, EvoPrompt, CPrompt, and ConvPrompt, utilize their proposed structures. All the evaluated methods utilize ViT B/16 pre-trained on ImageNet-21K as the backbone model. LGCL utilizes the pre-trained CLIP text encoder, while ConvPrompt utilizes a SentenceTransformer pre-trained on BERT as its text encoder.\", \"leapgen\": \"Our proposed method is implemented on top of the ViT backbone pre-trained on ImageNet-21K. The prompt structure is as defined in section 4. The prompt length is set to 30, and the prefix-tuning layers are set to 7, i.e. [0,1,2,3,4,5,6], for all main experiments. We utilize the Adam optimizer with a cosine learning rate scheduler. For the CIFAR100 dataset, we set a 0.01 initial learning rate and 3, 10, and 10 epochs for the 5-task, 10-task, and 20-task settings respectively. For the ImageNet-R dataset, we choose 5, 10, and 20 epochs for the 5-task, 10-task, and 20-task settings respectively. The initial learning rate is chosen from the best of 0.04, 0.05, and 0.06. For the CUB dataset, we choose 20 epochs and a 0.005 initial learning rate. Similar to ConvPrompt, we utilize SentenceTransformer as a text encoder. All the pre-trained models, i.e. ViT, SentenceTransformer, and CLIP (LGCL), are kept frozen (not fine-tuned).\\n\\nW4. Open the descriptors: Please check https://anonymous.4open.science/r/xt124j05/descriptors/.\"}", "{\"title\": \"Response to Reviewer bACr for Additional Comments\", \"comment\": \"Thank you for your suggestion.\\n\\n1. We have added the advised metrics i.e.
inference time only, along with total time, including the additional time for descriptor generation as used by LEAPGen and ConvPrompt. Our method has the lowest total running time and a moderate inference time, i.e. lower than ConvPrompt and higher than HiDe-Prompt. We have added the required storage, please see Appendix D.3. \\n\\n2. We would like to emphasize the following points:\\n\\n- Following ConvPrompt, an LLM such as GPT is one of the alternatives to generate descriptors before the training phase. Thus the GPT is not part of the method. In the case of no class descriptors, our method still works excellently, i.e. utilizing class names as descriptors (Appendix D.4).\\n\\n- GPT is utilized via online query, thus we don't need extra storage to save it.\\n\\n- Descriptor generation by GPT indeed consumes a fair amount of time. Even though it spends additional time on descriptor generation, our method still has a lower total running time than HiDe-Prompt and ConvPrompt.\"}", "{\"title\": \"Author Response to Reviewer UGi2\", \"comment\": \"W1/Q1. The writing of this paper needs significant improvement. Additionally, the structure is overly compact, affecting readability. For instance, the theorem presented in the paper has weak relevance to the proposed method, and I believe it is unnecessary. I suggest moving it to the supplementary materials. Given the complexity and sophistication of the method, I encourage the authors to provide an overview initially.\", \"our_response\": \"Thank you for your suggestion. We have published all the descriptors, including short, long, and narrative descriptors generated by ChatGPT, Llama, and Gemini. As for now, the descriptors can be accessed via https://anonymous.4open.science/r/xt124j05/descriptors/.\", \"our_responses\": [\"We added a new experiment and analysis on the \\\"class name\\\" as descriptors where we have no class descriptors generated by GPT or other LLMs.
Our experiment results, as presented in Appendix D.4, show that our method outperforms the existing SOTAs by a significant margin on 3 datasets despite substituting class descriptors with the class name.\", \"In our main experimental setting, we follow ConvPrompt, which utilizes GPT to generate class descriptors before the training process. Thus we emphasize that our method and ConvPrompt utilize the same external resources, which we consider fair. Please note that in our method, the GPT (LLM) is solely utilized to generate descriptors; it is different from GMM, which utilizes an LLM decoder in its learning process.\", \"For the other methods, we can't enforce ChatGPT on them as they were designed to learn without language descriptors. The most feasible setting is evaluating the methods following their official (optimal) settings and hyperparameters, which we have done in our study. Thus we believe that we maintain fairness in our evaluation.\", \"As for the relation between descriptors and performance, we have extended our analysis of the performance of our method w.r.t. types of descriptors and LLMs (GPT, Gemini, and Llama) in comparison to ConvPrompt. The table below shows the performance of our method compared to ConvPrompt in those settings. Please kindly see Appendix D.5 for a detailed sample of descriptors, results, and analysis. In summary, we conclude that long descriptors are preferable over short and narrative descriptors as they generally carry richer visual descriptions of an object without being contaminated by unrelated words. However, our method can utilize all three descriptor types and gains better performance than ConvPrompt.\", \"W4/Q4. Considering the authors (are about to) open-source their code, I suggest making the descriptions obtained via ChatGPT publicly available for deeper analysis.
I believe this will enhance the impact of the paper.\"]}", "{\"title\": \"Follow Up on Author Response and Revised Manuscript\", \"comment\": \"We would like to follow up on our response and revised manuscript. We would appreciate it if Reviewer UGi2 could look at our revised manuscript, and offer additional comments.\"}", "{\"comment\": \"Thanks for your reply. Would you please showcase latency metrics such as inference speed, fps, storage and latency? I am curious about it given that an additional ChatGPT is utilized and it plays a crucial role in the pipeline. I do not see them explicitly in the reply and updated manuscripts\"}", "{\"title\": \"Response to Reviewer J5sH\", \"comment\": \"Dear Reviewer J5sH,\\n\\nThank you for the time and effort to review our revised manuscript, and for increasing the score for our paper. We surely appreciate it. \\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Follow Up on Author Response and Revised Manuscript\", \"comment\": \"We would like to follow up on our response and revised manuscript. We would appreciate it if Reviewer bACr could look at our revised manuscript, and offer additional comments.\"}", "{\"title\": \"Third Follow Up\", \"comment\": \"Dear Reviewer bACr,\\n\\nWe would like to follow up on our responses. Could you please kindly review our updated manuscript or previous comments regarding your concerns?\\n\\nWe believe we have addressed all your concerns in our updated paper and previous responses (comments). Thus, we kindly request Reviewer bACr to reevaluate the score for our paper.\\n\\nThank you.\\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Second Follow Up\", \"comment\": \"Dear Reviewer bACr,\\n\\nThank you for your feedback on our paper. Could you please kindly review our latest manuscript regarding your concern?\", \"here_are_the_pointers_to_your_key_points\": \"We also copied the updates (or snapshots) here, thus you can review them directly.
\\n\\n\\n- Section 5.2.g: Analysis of parameters, running time, and storage\\n\\n| Method | Desc | #Params(M) | Running Time (h) | | | | Storage (MB) |\\n|--------------|:----:|:----------:|:----------------:|:-----:|:------:|:-----:|:------------:|\\n| | | | T.Desc | Inf | Tr+Inf | Total | |\\n| HiDe-Prompt | - | 0.15 | - | 0.019 | 5.40 | 5.40 | 334 |\\n| ConvPrompt | v | 1.28 | 1.07 | 0.033 | 1.04 | 2.11 | 346 |\\n| LEAPGen-lite | - | 0.16 | - | 0.028 | 0.53 | 0.53 | 332 |\\n| LEAPGen | v | 6.35 | 1.07 | 0.025 | 0.72 | 1.79 | 567 |\\n\\nT.Desc, Tr, and Inf denote time for generating descriptors, training, and inference respectively, detailed in Appendix D3.\\n\\n5.2.g) Parameters, Running Time, and Storage: The table above compares the number of parameters, running time (ImageNet-R), and storage of our methods and existing SOTAs. Despite having a higher number of parameters and storage, LEAPGen consumes less running time than existing SOTAs in both training+inference and total running time. LEAPGen-lite incurs the least cost in total running time and storage and requires relatively few parameters and little inference time. LEAPGen and ConvPrompt require additional time to generate descriptors, which increases their total simulation time. Despite having the fewest parameters, HiDe-Prompt requires the longest training and total times since it needs extra operations to generate uninstructed class representations. \\n\\n\\n- Appendix D.3: Detailed Analysis of Performance and Cost Trade-Offs.\\n\\n(a) Running Time for All Datasets \\n\\n| Method | Time (h) | | | | | | | | | | | |\\n|--------------|:------------:|:-----:|:------:|:-----:|:--------------:|:-----:|:------:|:-----:|:-------:|:-----:|:------:|:-----:|\\n| | CIFAR100 10T | | | | ImageNet-R 10T | | | | CUB 10T | | | |\\n| | T.Desc. | Inf | Tr+Inf | Total | T.Desc. | Inf | Tr+Inf | Total | T.Desc.
| Inf | Tr+Inf | Total |\\n| HiDe-Prompt | - | 0.037 | 4.63 | 4.63 | - | 0.019 | 5.40 | 5.40 | - | 0.017 | 3.94 | 3.94 |\\n| ConvPrompt | 0.53 | 0.023 | 2.01 | 2.54 | 1.07 | 0.033 | 1.04 | 2.11 | 1.09 | 0.035 | 8.08 | 9.17 |\\n| LEAPGen-lite | - | 0.016 | 1.2 | 1.2 | - | 0.028 | 0.53 | 0.53 | - | 0.028 | 0.38 | 0.38 |\\n| LEAPGen | 0.53 | 0.017 | 0.98 | 1.51 | 1.07 | 0.025 | 0.72 | 1.79 | 1.09 | 0.025 | 0.32 | 1.41 |\\n\\n(b) Performance vs Storage.\\n\\n| Method | Performance | | Storage(MB) |\\n|-------------------|:-----------:|:-----:|:-----------:|\\n| | FAA | CAA | |\\n| HiDe-Prompt | 75.75 | 79.27 | 334 |\\n| HiDe-Prompt-Large | 74.30 | 78.56 | 797 |\\n| ConvPrompt | 77.08 | 81.47 | 346 |\\n| ConvPrompt-Large | 74.56* | 80.79 | 632 |\\n| LEAPGen-lite | 82.38 | 85.14 | 332 |\\n| LEAPGen | 84.09 | 85.54 | 567 |\\n\\nHiDe-Prompt-Large and ConvPrompt-Large are the large versions of HiDe-Prompt and ConvPrompt, with 12.2M and 13.3M parameters respectively.\\n\\n**Again, we want to emphasize these points:**\\n\\n- Following ConvPrompt, an LLM such as GPT is one of the alternatives to generate descriptors before the training phase (Please see figure 2 and sections 4.2-4.3). Thus the GPT is not part of the method. In the case of no class descriptors, our method works excellently, i.e. utilizing class names as descriptors, as demonstrated by LEAPGen-lite and LEAPGen-CN (Appendix D.4).\\n\\n- GPT is utilized via online query, thus we don't need extra storage to save it.\"}", "{\"title\": \"Response to All Reviewers and Summary of Updates\", \"comment\": \"We would like to express our gratitude to all the reviewers for their constructive and insightful comments on our paper. We are honored that all the reviewers (ueBy, J5sH, UGi2, and bACr) found our work to be novel/new. We also appreciate reviewers (ueBy, and bACr) for pointing out the detailed writing of our paper as a positive point. We have revised our paper, following the reviewers' feedback.
We have improved the quality and presentation of our paper with the summary of changes as follows:\\n\\n(1). We added a detailed overview of our method, as presented in section 4.1, to improve the presentation and understanding of our proposed method. We emphasized the uniqueness of our method relative to existing SOTAs in the section.\\n\\n(2). We replaced our ablation study with a step-by-step scenario to improve the clarity of each component's contribution, as presented in section 5.2.d. \\n\\n(3). We added a new empirical experiment and analysis on the high similarity of task keys (K^t), as presented in Appendix D.1. The high similarity of task keys motivates us to develop a more accurate task-id prediction mechanism rather than the conventional way.\\n\\n(4). We added a new empirical experiment and analysis on the high similarity of language embeddings generated by the encoded string \\\"the photo of class name\\\" as used in the existing method. The high similarity between classes motivates us to develop a more discriminative way of language-guided learning rather than utilizing the embedding directly as references or loss anchors. Please see Appendix D.2. \\n\\n(5). We added a new experiment and analysis on the trade-offs between cost and performance that comprehensively evaluates the proposed methods and existing SOTAs in terms of performance (accuracy and forgetting), number of parameters, and running time. We also compare our proposed method with the large versions of SOTAs that execute a higher number of parameters. Please see Appendix D.3.\\n\\n(6). We added a new experiment and analysis on the \\\"class name\\\" as descriptors, where we have no class descriptors generated by GPT or other LLMs. Our experiment results, as presented in Appendix D.4, show that our method outperforms the existing SOTAs with a significant margin on 3 datasets despite substituting class descriptors with the class name. \\n\\n(7).
We elaborated the detailed settings for all consolidated methods, as presented in Appendix E.\\n\\n(8). We revised Figures 1 and 2 for a better understanding of our paper. \\n\\n(9). We improved the readability and presentation of our paper by ensuring clear space between one text block and another, and between text and captions, figures, or tables.\\n\\n(10). Last but not least, we revised the writing errors and typos. \\n\\nThe changes w.r.t. feedback from reviewers 1 (ueBy), 2 (J5sH), 3 (UGi2), and 4 (bACr) are highlighted in red, green, orange, and violet colors respectively.\"}", "{\"title\": \"Summary of Revision and Response\", \"comment\": \"Dear Reviewer UGi2,\\n\\nFirst, we thank Reviewer UGi2 for your effort and time to review our work.\\nWe believe we have addressed all your concerns in our revised paper and previous responses (comments).\\nSecond, since the discussion phase has ended, we would like to summarize our revision and responses and emphasize the following points:\\n\\n1. **Revision of Ablation**: We have revised our ablation study in a step-by-step manner as you advised. It explains the contribution of each component in a more detailed way (sec. 5.2.e). \\n\\n2. **Method Overview and Writing Improvement**: We have added an overview explaining the general ideas of our proposed method in sec 4.1. We have improved the writing of our paper in terms of layout, presentation, and clarity. \\n\\n3. **Fairness Setting**: LEAPGen utilizes the same resources as ConvPrompt, and LEAPGen-lite (no descriptors, with lightweight generators) utilizes fewer resources than ConvPrompt and comparable resources to existing SOTAs (sec. 5.1, sec. 5.2.g, Appendix E). All methods are evaluated in the same setting and run with their respective best (recommended by the official paper/code) hyperparameters. Thus we believe we have satisfied the fairness aspect in our study.\\n\\n4. **The presence of GPT**: \\n\\n- **GPT is not part of our method** (sec 4.2.a, and fig.
2). It is an optional part used to generate descriptors, following ConvPrompt. Even without GPT-generated descriptors, our method works excellently and significantly outperforms the existing SOTAs, as demonstrated by LEAPGen-lite (sec. 5.2.d) and LEAPGen-CN (Appendix D.4). \\n\\n- **Descriptor impact** on the model performance has been demonstrated by the performance difference between LEAPGen and LEAPGen-CN (Appendix D.4). We also extend our analysis of the performance of LEAPGen vs ConvPrompt with various types of descriptors (Appendix D.5).\\n\\n- **LEAPGen-lite** proves the significant impact of our ideas, i.e. (1) language as input for prompt generation, (2) task-wise generators, (3) soft task-id predictor, and (4) learning with auxiliary data, even without GPT-generated descriptors, while utilizing the least parameters, running time, and storage. \\n\\nWe believe these highlights resolve your concerns and prove the significant impact of our ideas, without being bound to the presence of GPT. \\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Author Response to Reviewer bACr for Q1/W1-Q2/W2\", \"comment\": \"Q1/W1. The comparison mentioned in lines 183-184 between \\\"the photo of car\\\" and \\\"the photo of cat,\\\" stating they have 15/16 similarity but 1/16 dissimilarity, is confusing. The authors compared the similarity between these two prompts on a letter-by-letter basis and concluded that they are similar. However, the similarity between two prompts should not be evaluated in such a manner. After being processed by the tokenizer, the embeddings of the two prompts are not similar, as their meanings differ significantly.\", \"our_response\": \"Thank you for your concern.\\n\\n- About the writing error in lines 183-184: We apologize for our writing error. We have revised the sentences in our revised manuscript. The sentences are revised into: \\\"LGCL produces class prototype $L_c^n$ by encoding the string \\u201dthe photo of class name\\u201d.
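As a toy illustration of why template-based prototypes can end up nearly collinear, here is a small numpy sketch (the random vectors are hypothetical stand-ins for real text-encoder embeddings, not actual CLIP outputs): when a shared "the photo of ..." template component dominates the embedding, the class-specific component contributes little to the direction.

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
template = rng.normal(size=512)                # shared "the photo of ..." direction
shark = template + 0.3 * rng.normal(size=512)  # hypothetical "Great White Shark" prototype
frog = template + 0.3 * rng.normal(size=512)   # hypothetical "Tree Frog" prototype

print(round(cosine_sim(shark, frog), 3))  # close to 1.0 despite different classes
```

With such a dominant shared direction, the cosine similarity stays close to 1 regardless of the class term, which is why prototypes of this form are weak anchors for separating classes.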
However, the prototypes could be misleading due to high similarity between different classes, e.g. the prototype of class \\u201dGreat White\\nShark\\u201d has 0.9 cosine similarity to the prototypes of class \\u201dTree Frog\\u201d and \\u201dIguana\\u201d, please see Appendix D.2. \\\"\\n\\n- Our statement is supported by our empirical study on the high similarity of language embedding encoded by CLIP as presented in Appendix D.2.\\n\\nQ2/W2. The proposed method relies on using ChatGPT to generate descriptive terms for each class and requires an additional Sentence Transformer to obtain embeddings. The introduction of these extra resources creates an unfair comparison with other methods and limits the practical applicability of the approach in real-world scenarios.\"}", "{\"title\": \"Third Follow Up (Cont'd)\", \"comment\": \"- LEAPGen-lite (lightweight generators and without descriptors) performance analysis in Section 5.2.d, Tables 2,3,4, and Figure 3.\\n\\n\\n| Method | CUB 10-Tasks @20c | | | |\\n|------------------|-------------------|------------------|-----------------|-----------------|\\n| | FAA | CAA | FFM | CFM |\\n| L2P | 66.95 \\u00b1 0.13 | 74.03 \\u00b1 0.32 | 5.18 \\u00b1 0.11 | 7.39 \\u00b1 0.23 |\\n| DualPrompt | 73.95 \\u00b1 0.73 | 80.29 \\u00b1 0.15 | 7.87 \\u00b1 0.76 | 8.60 \\u00b1 0.37 |\\n| CODA-P | 72.99 \\u00b1 0.30 | 82.64 \\u00b1 0.64 | 11.71 \\u00b1 1.49 | 9.91 \\u00b1 1.04 |\\n| LGCL | 79.93 \\u00b1 0.30 | 83.07 \\u00b1 0.23 | 5.45 \\u00b1 0.33 | 5.58 \\u00b1 0.39 |\\n| HiDe-Prompt | 87.21 \\u00b1 0.18 | 87.66 \\u00b1 0.01 | 1.90 \\u00b1 0.45 | 2.66 \\u00b1 0.13 |\\n| PGP | 78.35 \\u00b1 0.68 | 82.21 \\u00b1 0.56 | 5.76 \\u00b1 0.10 | 6.10 \\u00b1 0.26 |\\n| EvoPrompt | 76.23 \\u00b1 0.51 | 81.00 \\u00b1 0.40 | 3.96 \\u00b1 0.43 | 3.57 \\u00b1 0.83 |\\n| CPrompt | 77.14 \\u00b1 1.16 | 85.67 \\u00b1 0.56 | 11.65 \\u00b1 0.47 | 8.69 \\u00b1 0.31 |\\n| ConvPrompt | 80.12 \\u00b1 1.37 | 84.70 \\u00b1 0.64 | 6.04 \\u00b1 0.97 | 4.61 
\\u00b1 0.45 |\\n| **LEAPGen-lite** | **86.00 \\u00b1 0.32** | **88.03 \\u00b1 0.47** | **2.46 \\u00b1 1.61** | **2.29 \\u00b1 1.15** |\\n| **LEAPGen** | **88.45 \\u00b1 0.58** | **90.90 \\u00b1 0.81** | **1.32 \\u00b1 0.52** | **1.29 \\u00b1 0.82** |\\n\\n\\n\\nCompared to other approaches\\n\\n| Method | ImageNet-R 5T | ImageNet-R 5T | ImageNet-R 10T | ImageNet-R 10T | ImageNet-R 20T | ImageNet-R 20T | CIFAR100 10T | CIFAR100 10T |\\n|------------------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|------------------|\\n| | FAA | CAA | FAA | CAA | FAA | CAA | FAA | CAA |\\n| C-LoRA | 75.85 \\u00b1 0.31 | 78.85 \\u00b1 0.34 | 71.89 \\u00b1 0.45 | 75.33 \\u00b1 0.28 | 65.71 \\u00b1 0.60 | 70.63 \\u00b1 0.85 | 82.97 \\u00b1 0.47 | 87.06 \\u00b1 0.25 |\\n| LAE | 73.84 \\u00b1 0.14 | 77.29 \\u00b1 0.45 | 71.70 \\u00b1 0.39 | 76.71 \\u00b1 0.10 | 66.98 \\u00b1 0.35 | 73.72 \\u00b1 0.05 | 88.81 \\u00b1 0.34 | 91.59 \\u00b1 0.13 |\\n| InfLoRA | 77.52 \\u00b1 0.37 | 82.01 \\u00b1 0.12 | 75.65 \\u00b1 0.14 | 80.82 \\u00b1 0.24 | 71.01 \\u00b1 0.45 | 77.28 \\u00b1 0.45 | 89.84 \\u00b1 0.03 | 91.70 \\u00b1 0.32 |\\n| SLCA | - | - | 77.00 \\u00b1 0.33 | 81.17 \\u00b1 0.64 | - | - | 91.53 \\u00b1 0.28 | 94.09 \\u00b1 0.87 |\\n| GMM | - | - | 80.72 | - | - | - | 87.59 | - |\\n| **LEAPGen-lite** | **82.44 \\u00b1 0.63** | **84.37 \\u00b1 0.90** | **82.38 \\u00b1 1.04** | **85.14 \\u00b1 0.52** | **83.67 \\u00b1 0.39** | **85.65 \\u00b1 0.33** | **98.58 \\u00b1 0.03** | **98.69 \\u00b1 0.10** |\\n| **LEAPGen** | **82.79 \\u00b1 0.32** | **85.06 \\u00b1 0.29** | **84.09 \\u00b1 0.93** | **85.54 \\u00b1 0.65** | **87.03 \\u00b1 0.12** | **87.81 \\u00b1 0.48** | **98.38 \\u00b1 0.15** | **98.15 \\u00b1 0.39** |\\n\\n\\n5.2.d) LEAPGen-lite\\u2019s Performance: As shown in Tables 2-4 and Figure 3, despite utilizing far smaller (2.67% params) generators and without class descriptors generated by an LLM,
LEAPGen-lite still\noutperforms the existing methods significantly, i.e. by 4.7-25% FAA and 3-11% CAA on CIFAR100,\nand 3-30% FAA and 2-26% CAA on the ImageNet-R dataset. LEAPGen-lite also achieves a low forgetting rate on these two datasets for all task settings. On the CUB dataset, LEAPGen-lite achieves\nperformance comparable to HiDe-Prompt and outperforms the other SOTAs by a significant margin, i.e.\n6-20% FAA and 3.3-14% CAA. This evidence proves that our ideas, i.e. language embedding as input\nfor prompt generation, task-wise generators, a soft task-id predictor, and learning with auxiliary data,\nare not bounded by the generated descriptors or the size of the generators.\n\n**Please Note: The analysis of LEAPGen-lite empirically proves that our method doesn't rely on GPT/LLM or complex generator networks (high number of parameters)**\"}", "{\"title\": \"Third Follow Up\", \"comment\": \"Dear Reviewer J5sH,\n\nWe would like to follow up on our response. Could you please kindly review our latest manuscript regarding your concern?\nThank you.\", \"here_are_the_pointers_to_your_key_points\": \"We also copied the updates (or snapshots) here, so you can review them directly. \n\nW1. Details of ViT and Text Encoder: Appendix E. Detailed Experimental Setting \n\nAll the evaluated methods utilize ViT B/16 pre-trained on ImageNet-21K as the backbone model. LGCL utilizes a pre-trained\nCLIP text encoder, while ConvPrompt utilizes SentenceTransformer pre-trained on BERT as its text\nencoder.\", \"leapgen\": \"Our proposed method is implemented on top of the ViT backbone pre-trained on\nImageNet-21K. The prompt structure is as defined in section 4. The prompt length is set to 30,\nand the number of prefix tuning layers is set to 7, i.e. [0,1,2,3,4,5,6], for all main experiments. We utilize the Adam\noptimizer with a cosine learning rate scheduler. For the CIFAR100 dataset, we set a 0.01 initial learning\nrate and 3, 10, and 10 epochs for the 5-task, 10-task, and 20-task settings respectively. 
For the ImageNet-R dataset, we choose 5, 10, and 20 epochs for the 5-task, 10-task, and 20-task settings respectively.\nThe initial learning rate is chosen as the best of 0.04, 0.05, and 0.06. For the CUB dataset, we choose 20\nepochs and a 0.005 initial learning rate. Similar to ConvPrompt, we utilize SentenceTransformer as the\ntext encoder. All the pre-trained models, i.e. ViT, SentenceTransformer, and CLIP (LGCL), are kept\nfrozen (not fine-tuned).\n\n\nW2. Increase of Parameters:\n\n- Section 5.2.g: Analysis of parameters, running time, and storage\n\n| Method | Desc | #Params(M) | Running Time (h) | | | | Storage (MB) |\n|--------------|:----:|:----------:|:----------------:|:-----:|:------:|:-----:|:------------:|\n| | | | T.Desc | Inf | Tr+Inf | Total | |\n| HiDe-Prompt | - | 0.15 | - | 0.019 | 5.40 | 5.40 | 334 |\n| ConvPrompt | v | 1.28 | 1.07 | 0.033 | 1.04 | 2.11 | 346 |\n| LEAPGen-lite | - | 0.16 | - | 0.028 | 0.53 | 0.53 | 332 |\n| LEAPGen | v | 6.35 | 1.07 | 0.025 | 0.72 | 1.79 | 567 |\n\nT.Desc, Tr, and Inf denote time for generating descriptors, training, and inference respectively, detailed in Appendix D3.\n\n5.2.g) Parameters, Running Time, and Storage: The table above compares the number of parameters, running time (ImageNet-R), and storage of our methods and existing SOTAs. Despite having a higher number of parameters and storage, LEAPGen consumes less running time than existing SOTAs in both training+inference and total running time. LEAPGen-lite incurs the lowest total running time and storage costs, and requires relatively few parameters and little inference time. LEAPGen and ConvPrompt require additional time to generate descriptors, which increases their total running time. 
Despite having the fewest parameters, HiDe-Prompt requires the longest training and total running times since it needs extra operations to generate uninstructed class representations.\"}", "{\"title\": \"Additional Updates\", \"comment\": \"Dear Reviewers and Area Chairs,\n\nWe added additional updates in our latest manuscript that fully address the comments/concerns of reviewers J5sH, UGi2, and bACr as follows.\n\n1. We added experiments and analysis of the lite version of LEAPGen, i.e. LEAPGen-lite, which utilizes lightweight Conv1d generators and no class descriptors (it uses class names only). LEAPGen-lite has far fewer parameters than ConvPrompt (12.7% of ConvPrompt #params) and a parameter count similar to HiDe-Prompt's. Our experiments show that LEAPGen-lite outperforms the existing SOTAs by 4.7-25% FAA and 3-11% CAA in CIFAR100, 3-30% FAA and 2-26% CAA in ImageNet-R, and 6-20% FAA and 3.3-14% CAA in CUB. This evidence proves that the significant performance of our ideas (language as input for prompt generation, task-wise generators, soft task-id predictor, and learning with auxiliary data) is not bounded by LLM-generated descriptors or the complexity (parameters and size) of the generators. Please kindly see the results and analysis in Tables 2-4, Figure 3, and Section 5.2.d.\n\n2. We added a summarized analysis of #Parameters, Running Time, and Storage, presented in section 5.2.g and detailed in Appendix D.3. The analysis shows that LEAPGen spends moderate inference time, but lower training+inference time and total running time, than ConvPrompt and HiDe-Prompt, despite having more parameters and storage. LEAPGen-lite requires far smaller costs in all aspects than ConvPrompt, and less running time and storage than HiDe-Prompt, with just a slightly higher number of parameters, i.e. 0.16M vs 0.15M. We move the analysis of LEAPGen's performance with various types of generators to the Appendix to free up space in the main paper.
\\n\\nBoth additional updates are presented in brown color in our latest manuscript.\\n\\nThank you.\\n\\nBest Regards\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Third Follow Up (Cont'd)\", \"comment\": \"W2. Step-by-step Ablation: Section 5.2.e.\\n\\n| Component | Loss | FAA | CAA | FFM | CFM |\\n|-----------------------------------------|-------------------------|:-----:|:-----:|:----:|:----:|\\n| FT | $\\\\mathcal{L}$_intra | 61.53 | 65.31 | 5.87 | 5.62 |\\n| FT+E | $\\\\mathcal{L}$_intra | 72.93 | 78.54 | 5.78 | 5.30 |\\n| FT+E+K | $\\\\mathcal{L}$_intra +$\\\\mathcal{L}_t$ | 74.88 | 79.52 | 2.84 | 3.17 |\\n| FT+G(E)+K | $\\\\mathcal{L}$_intra +$\\\\mathcal{L}_t$ | 76.64 | 80.48 | 1.85 | 2.44 |\\n| FT+G(E)+K+L | $\\\\mathcal{L}$_intra +$\\\\mathcal{L}_t$+$\\\\mathcal{L}_c$ | 77.23 | 81.05 | 0.50 | 1.27 |\\n| FT+G(E)+K+L+Aux | $\\\\mathcal{L}$_intra +$\\\\mathcal{L}_t$+$\\\\mathcal{L}_c$ | 78.38 | 81.98 | 0.49 | 1.15 |\\n| FT+G(E)+K+L+Aux | $\\\\mathcal{L}$_intra +$\\\\mathcal{L}$_inter +$\\\\mathcal{L}_t$+$\\\\mathcal{L}_c$ | 83.70 | 85.98 | 2.46 | 1.93 |\\n| FT+G(E)+K+L+Aux+ Soft Task-ID Predictor | $\\\\mathcal{L}$_intra +$\\\\mathcal{L}$_inter+$\\\\mathcal{L}_t$+$\\\\mathcal{L}_c$ | 84.73 | 86.14 | 0.91 | 1.42 |\\n\\ne) Ablation Study: yable above shows the impacts of the proposed method's components on its performance. \\\\textbf{ (1) Descriptor Embedding $E$} as one of LEAPGen's main components, elevates fine tuning (FT) baseline significantly i.e. $>11\\\\%$ FAA and $>13\\\\%$ CAA. It shows the promising impact of our idea . (2) Task Key $K^t$ and Loss $\\\\mathcal{L}_t$ transform the method into task-wise prompting based on input to $K^t$ and $\\\\mathcal{L}_t$ similarity. They produce a better model with 1-2\\\\% FAA and CAA improvement and reduce FFM and CFM by $>2\\\\%$. It shows the effectiveness of task-wise decomposition as applied in the task-wise prompt approach. 
\n(3) Task-wise generator $G^t()$ enhances the model's recognition capability by $2\\%$ and $1\\%$ for FAA and CAA respectively. It also decreases its forgetting rate by 1\\%. This means the trainable generator produces a more discriminative prompt than the descriptor embedding alone. \n(4) Class-wise Key $L^t_c$ and Loss $\\mathcal{L}_c$ for top-k embedding selection, i.e. $E_i$ for $i \\in [1...k]$, improve the model performance by a 1\\% margin. They also reduce the model's forgetting rate by up to a 1\\% margin. Thus, measuring the input similarity to $L^t_c$ offers a better embedding selection than measuring it to $E^t_c$ directly.\n(5) Auxiliary embedding improves LEAPGen's performance by more than a $1\\%$ margin. It shows the secondary contribution of the descriptor embedding as an auxiliary modality along with its primary role in prompt generation.\n(6) Inter-task loss $\\mathcal{L}$_inter improves LEAPGen's accuracy significantly, i.e. by $5.4\\%$ and $4.0\\%$ for FAA and CAA respectively, even though it increases FFM and CFM by $0.5-2\\%$. It contributes to balancing the knowledge of previously learned classes (stability) and currently learned classes (plasticity), and emphasizes the risk of forgetting previously learned classes. \n(7) Soft task-id predictor substitution for the conventional task-id (as in DualPrompt) improves the accuracy by up to $1\\%$ and reduces FFM and CFM by $1.5\\%$ and $0.5\\%$ respectively. It implies that our designed task-id predictor outperforms the existing task-id predictor in both accuracy and forgetting.\n\nThank you.\n\nBest Regards,\n\nAuthor of Submission 6684\"}", "{\"title\": \"Follow Up\", \"comment\": \"Dear Reviewer bACr,\n\nWe would like to follow up on our responses. Could you please kindly review our latest response, detailed in our updated manuscript? Could you please point out which part you are unsure about regarding the fairness aspect and GPT-generated descriptors? 
\\n\\nWe believe we have addressed all your concerns in our updated paper and previous responses (comments). Thus, we kindly request Reviewer bACr to reevaluate the score for our paper.\\n\\nThank you.\\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"metareview\": \"(a) The paper proposes LEAPGen, a novel method for continual learning that generates prompts using language descriptors for classes instead of shared embeddings, with a learnable task key and class-level parameters. LEAPGen predicts the task ID and selects top-k matching descriptor embeddings to generate prompts. Experimental results on CIFAR100, ImageNet-R, and CUB show that LEAPGen outperforms state-of-the-art methods in accuracy and mitigating forgetting.\\n\\n(b) Strengths: The strengths of the paper lie in its clear and well-motivated introduction, which effectively highlights limitations of prior works and outlines the proposed approach. LEAPGen introduces a novel use of language descriptors for prompt generation, leveraging richer semantic information and avoiding the need to store or replay old data, thereby reducing catastrophic forgetting. The method achieves significant improvements over state-of-the-art methods and incorporates a unique inter-class loss strategy to enhance performance. The experimental results are robust, with thorough comparisons against existing methods and detailed ablation studies to validate the contributions of individual components. Section 3 and Figure 1 provide a concise recap of prior methods, laying a solid foundation for the proposed approach.\\n\\n(c) Weaknesses: The paper has several weaknesses, including a lack of clarity about key details, such as the specific Sentence Transformer used, whether it is pre-trained, and whether any fine-tuning occurs during the continual learning process. 
The scalability of the proposed method is a concern, as adding new generators for new tasks could significantly increase the number of parameters, leading to potentially unfair comparisons with related works. The writing and structure require improvement, as the compact format affects readability, and some components, like the theorem, seem irrelevant and better suited for supplementary materials. The ablation study lacks rigor and should more systematically validate the contributions of each module starting from a standard baseline. Additionally, the use of ChatGPT for generating descriptions raises concerns about the fairness of comparisons and the source of performance gains, prompting the need for transparency and public availability of these descriptions.\n\n(d) The most important reasons for acceptance are the solid methods and compelling performance, along with the sufficient theoretical & empirical analyses. Since this paper has received 2 negative scores (i.e., 3), the AC has checked the details very carefully and discovered that the major reasons for the low scores, i.e., the inclusion of GPT for accurate descriptions, are well addressed by the authors. The authors have designed a lite version (LEAPGen-lite) of the proposed approach which does not rely on GPT at all. LEAPGen-lite still shows significantly superior performance to previous SOTA methods, proving the significant impact of the ideas. In the AC's view, all the issues have been addressed during the rebuttal period. By the way, Reviewer UGi2, who scored 3, failed to participate in the rebuttal period despite reminders from the AC.\", \"one_more_suggestion_for_the_authors_to_improve_the_presentation_quality\": \"In the current form of this manuscript, the authors show 4 principles. Four principles are too many; for a highly impactful paper, one principle is sufficient\u2014for example, the residual idea in ResNet. 
The AC suggests the authors reconsider the core problem this paper aims to address, propose a single principle, and then present multiple methods to support this principle. This approach would enhance the clarity of the motivation and increase the paper's overall impact.\", \"additional_comments_on_reviewer_discussion\": \"(a) Reviewer ueBy notes no major weaknesses but raises questions to improve understanding. They highlight that the auxiliary data and inter-task loss, critical components demonstrated in the ablation study, are underemphasized in the stated contributions. Emphasizing these aspects could strengthen the presentation of the method. The authors have successfully addressed the issues.\\n\\n(b) Reviewer J5sH highlights that the paper lacks clarity on key details about the pre-trained models used, such as the specific Sentence Transformer and whether fine-tuning occurs during continual learning. They also point out the absence of a clear comparison of parameter increases due to adding generators for new tasks, noting that the generators are not lightweight. This could result in significant parameter growth and potentially unfair comparisons with related works. Given the rebuttal, the reviewer clearly understands that the increase in parameters is reasonable, and comparable with other research works. He decides to increase the score to 6.\\n\\n(c) Reviewer UGi2 notes that the paper's writing and overly compact structure hinder readability, suggesting an initial overview and moving the weakly relevant theorem to the supplementary materials. They find the ablation study insufficiently convincing and recommend validating modules step-by-step from a standard baseline. Concerns are raised about the fairness of experimental comparisons involving ChatGPT and the extent of performance gains attributed to accurate descriptions, with a suggestion to make ChatGPT-generated descriptions publicly available to enhance the paper's impact. 
Unfortunately, Reviewer UGi2 failed to participate in the rebuttal period despite reminders from the AC. The AC has checked the rebuttal and discovered that the authors have addressed the concerns accordingly.\n\n(d) Reviewer bACr finds the comparison of prompt similarities (lines 183-184) flawed, as evaluating similarity on a letter-by-letter basis is inappropriate and fails to reflect semantic differences. They highlight concerns about the reliance on ChatGPT and a Sentence Transformer, which introduce additional resources and create unfair comparisons with other methods, limiting real-world applicability. Although the method achieves state-of-the-art performance, gains are minor on fine-grained datasets like CUB200, and the paper's layout requires improvement for better readability. After the initial rebuttal, the reviewer still has concerns regarding the resources for pre-computation with GPT. However, the AC notes that the LEAPGen-lite proposed by the authors has already removed the GPT generation, demonstrating that the concern regarding GPT is not an issue.\"}", "{\"title\": \"Summary of Discussion\", \"comment\": \"Dear Reviewer bACr,\n\nFirst, we thank Reviewer bACr for your effort and time in reviewing our work, rebuttal, and revised paper.\nWe believe we have addressed all your concerns in our revised paper and previous responses (comments).\nSecond, since the discussion phase has ended, we would like to summarize our discussion and emphasize the following points:\n\n1. **Fairness Setting**: LEAPGen utilizes the same resources as ConvPrompt, and LEAPGen-lite (no descriptors, with lightweight generators) utilizes fewer resources than ConvPrompt and comparable resources to existing SOTAs (sec. 5.1, sec. 5.2.g, Appendix E). All methods are evaluated in the same setting and run with their respective best (recommended by the official paper/code) hyperparameters. Thus we believe we have satisfied the fairness aspect in our study.\n\n2. 
**The presence of GPT**: \n\n- **GPT is not part of our method** (sec. 4.2.a and fig. 2); it is an optional part to generate descriptors, following ConvPrompt. Even without GPT-generated descriptors, our method works excellently and significantly outperforms the existing SOTAs, as demonstrated by LEAPGen-lite (sec. 5.2.d) and LEAPGen-CN (Appendix D.4). \n\n- **GPT is utilized in an online way** (through the Internet), by querying it via its API with a Python script. This is a common practice nowadays, as in our simulation (please see https://platform.openai.com/docs/guides/). Thus we never download and save the GPT model in our storage. LEAPGen indeed saves the descriptor embeddings (numerical vectors), but not the GPT model. The embeddings add to the whole model's storage size, but still by a reasonable amount, and we have included them in our storage analysis. \n\n- **Descriptor generation by GPT** (before training) indeed takes a fair amount of time. However, despite spending extra time on it, our method still has a **lower total running time** than ConvPrompt and HiDe-Prompt.\n\n- **LEAPGen-lite** proves the significant impacts of our ideas, i.e. (1) language as input for prompt generation, (2) task-wise generators, (3) soft task-id predictor, and (4) learning with auxiliary data, despite having no GPT-generated descriptors, while utilizing the least parameters, running time, and storage. \n\nWe believe these highlights clarify your uncertainties about the fairness setting and (again) prove the significant impacts of our ideas, without being bound to the presence of GPT. \n\nBest Regards,\n\nAuthor of Submission 6684\"}", "{\"comment\": \"Thanks for your response. The answer to the first question seems to be the primary concern, and I still find it not convincing.\n\nAs for the utilization of GPT, I understand what the author means, but the actual storage and consumption should not be dismissed even if it is pre-computed. 
Since GPT is more like an external knowledge base, I'm not sure if it is fair to compare it with existing methods. More substantial improvements are expected to be observed given the introduction of these large-scale models.\"}", "{\"comment\": \"Dear Reviewer UGi2,\n\nCould you kindly review the rebuttal thoroughly and let us know whether the authors have adequately addressed the issues raised or if you have any further questions.\n\nBest,\n\nAC of Submission 6684\"}", "{\"title\": \"Third Follow Up\", \"comment\": \"W2. Increase of Parameters:\n\n- Appendix D.3. Trade-Offs Between Cost and Performance, including the detailed parameter growth on each task\n\n(1). Performance vs #Parameters and Running Time:\n\n| Method | Metrics | Value | | | | | | | | | | AVG | RunTime(h) |\n|-------------------|:---------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:----------:|\n| | | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | | |\n| HiDe-Prompt | Avg.Accuracy | 85.22 | 82.93 | 82.35 | 79.74 | 78.85 | 77.81 | 77.37 | 76.38 | 76.31 | 75.75 | 79.27 | - |\n| HiDe-Prompt-Large | Avg.Accuracy | 85.76 | 83.12 | 82.03 | 79.49 | 78.55 | 76.55 | 76.00 | 74.82 | 74.98 | 74.30 | 78.56 | - |\n| ConvPrompt | Avg.Accuracy | 89.53 | 86.24 | 84.40 | 81.88 | 80.79 | 80.04 | 78.80 | 78.15 | 77.74 | 77.08 | 81.47 | - |\n| ConvPrompt-Large | Avg.Accuracy | 90.12 | 86.22 | 83.96 | 81.45 | 80.52 | 79.73 | 76.40 | 74.19 | 74.56 | - | 80.79 | - |\n| LEAPGen-lite | Avg.Accuracy | 89.73 | 88.66 | 86.94 | 86.43 | 86.04 | 83.03 | 82.75 | 82.52 | 82.94 | 82.38 | 85.14 | - |\n| LEAPGen | Avg.Accuracy | 89.83 | 87.63 | 86.46 | 85.97 | 84.71 | 84.79 | 84.01 | 83.77 | 84.12 | 84.09 | 85.54 | - |\n| HiDe-Prompt | Avg.Forgetting | 3.29 | 1.50 | 2.41 | 2.19 | 2.53 | 2.20 | 2.25 | 2.30 | 2.29 | 2.33 | 2.33 | - |\n| HiDe-Prompt-Large | Avg.Forgetting | - | 3.05 | 1.89 | 2.14 | 2.15 | 3.25 | 2.91 | 2.93 | 2.66 | 2.70 | 2.63 | - |\n| ConvPrompt | 
Avg.Forgetting | - | 2.57 | 1.57 | 2.60 | 2.93 | 3.15 | 3.53 | 3.44 | 4.07 | 4.17 | 3.11 | - |\\n| ConvPrompt-Large | Avg.Forgetting | - | 4.65 | 3.39 | 4.62 | 4.69 | 5.56 | 8.91 | 10.55 | 10.50 | - | 6.61 | - |\\n| LEAPGen-lite | Avg.Forgetting | 1.11 | 0.75 | 0.58 | 0.56 | 4.13 | 3.37 | 2.92 | 2.70 | 3.01 | 2.13 | 2.13 | - |\\n| LEAPGen | Avg.Forgetting | - | 3.97 | 1.96 | 1.38 | 2.63 | 2.24 | 1.99 | 1.77 | 1.60 | 1.46 | 2.11 | - |\\n| HiDe-Prompt | #Parameters (M) | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 | 5.40 |\\n| HiDe-Prompt-Large | #Parameters (M) | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 12.29 | 15.15 |\\n| ConvPrompt | #Parameters (M) | 0.55 | 0.60 | 0.70 | 0.80 | 0.85 | 0.89 | 0.99 | 1.09 | 1.19 | 1.28 | 0.89 | 2.11 |\\n| ConvPrompt-Large | #Parameters (M) | 5.18 | 5.82 | 6.65 | 7.28 | 7.91 | 8.60 | 9.81 | 10.94 | 12.01 | 13.23 | 8.74 | 15.28 |\\n| LEAPGen-lite | #Parameters (M) | 0.02 | 0.03 | 0.05 | 0.07 | 0.08 | 0.10 | 0.11 | 0.13 | 0.15 | 0.16 | 0.09 | 0.53 |\\n| LEAPGen | #Parameters (M) | 6.21 | 6.23 | 6.24 | 6.26 | 6.27 | 6.29 | 6.31 | 6.32 | 6.34 | 6.35 | 6.28 | 1.79 |\\n\\n(2). Detailed Running Time: T.Desc, Tr, and Inf denote time for generating descriptors, training, and inference respectively.\\n\\n| Method | Time (h) | | | | | | | | | | | |\\n|--------------|:------------:|:-----:|:------:|:-----:|:--------------:|:-----:|:------:|:-----:|:-------:|:-----:|:------:|:-----:|\\n| | CIFAR100 10T | | | | ImageNet-R 10T | | | | CUB 10T | | | |\\n| | T.Desc. | Inf | Tr+Inf | Total | T.Desc. | Inf | Tr+Inf | Total | T.Desc. 
| Inf | Tr+Inf | Total |\n| HiDe-Prompt | - | 0.037 | 4.63 | 4.63 | - | 0.019 | 5.40 | 5.40 | - | 0.017 | 3.94 | 3.94 |\n| ConvPrompt | 0.53 | 0.023 | 2.01 | 2.54 | 1.07 | 0.033 | 1.04 | 2.11 | 1.09 | 0.035 | 8.08 | 9.17 |\n| LEAPGen-lite | - | 0.016 | 1.2 | 1.2 | - | 0.028 | 0.53 | 0.53 | - | 0.028 | 0.38 | 0.38 |\n| LEAPGen | 0.53 | 0.017 | 0.98 | 1.51 | 1.07 | 0.025 | 0.72 | 1.79 | 1.09 | 0.025 | 0.32 | 1.41 |\n\n(3). Performance vs Storage:\n\n| Method | Performance | | Storage(MB) |\n|-------------------|:-----------:|:-----:|:-----------:|\n| | FAA | CAA | |\n| HiDe-Prompt | 75.75 | 79.27 | 334 |\n| HiDe-Prompt-Large | 74.30 | 78.56 | 797 |\n| ConvPrompt | 77.08 | 81.47 | 346 |\n| ConvPrompt-Large | 74.56* | 80.79 | 632 |\n| LEAPGen-lite | 82.38 | 85.14 | 332 |\n| LEAPGen | 84.09 | 85.54 | 567 |\"}", "{\"summary\": \"The paper proposes a new method for prompt-based continual learning. Unlike previous methods that focus on task-specific prompts or growing prompt components, this work generates prompts using language descriptions of the top-k classes, from all the previously trained tasks, most similar to an input image. The generated prompts are plugged into the Key and Value pairs of the backbone model for the given input image. In addition, the combined encoded language descriptions of the top-k similar classes are appended to the input image embeddings. Results on the CIFAR100, ImageNet-R, and CUB datasets with a pretrained ViT-B/16 backbone show significant improvements across accuracy and forgetting metrics over previous works. An extensive ablation study on different components provides better insight into the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"\u2022\tThe introduction is clear in discussing the drawbacks of prior works and motivating the proposed approach. 
Contributions are listed clearly.\n\n\u2022\tI appreciate Section 3 and Figure 1 as they provide a brief recap of the previous SOTA methods and lay some foundation for discussing the proposed method.\n\n\u2022\tLeveraging language descriptions and using all language descriptors to find top-k similar classes to generate the prompts is novel in the context of continual learning.\n\n\u2022\tThe strategy to incorporate inter-class loss helps the method to achieve better results.\n\n\u2022\tComparisons are made against various previous works and significant improvements are shown with their proposed method.\n\n\u2022\tI appreciate the detailed ablation study to understand the benefits of different components.\", \"weaknesses\": \"I don\u2019t find any major weakness but have a set of questions (see below) to improve the understanding. However, I find that learning with auxiliary data and the inter-task loss are critical components of the method (as shown in the ablation study), which are not sufficiently emphasized in the contributions.\", \"questions\": \"1. Between L155-160, under the context of previous task-specific approaches, it is mentioned that two different tasks could produce similar key vectors. It would make a stronger point to justify this statement with some empirical analysis.\n\n2. What is the value of k for the top-k descriptors and how many generators are chosen for each task? Can we choose a single generator for each task? How does {E_i}^t of size |C^t| get chosen for the k set of generators?\n\n3. From Figure 2 or its caption, it is not clear which component is compared against the input x to get the cosine similarity. The sentence \u201csearch space for top-k\u201d in the figure gives some hint, but the arrows for the entire blue block raise confusion. Please clarify it in the figure or caption.\n\n4. Why is it interesting to solve equation 3, if the task is to search for E_i in all tasks [1, 2, \u2026.t]? 
Similarly, why consider K^t and why not just use {L_i}^t?\n\n5. Under the ablation study, what does the absence of a generator mean? How do you get prompts in this approach without a generator?\n\n6. Is the MLP head (classifier) also trained for each task? How can it produce a list of softmax values for all classes in equation 7?\n\n7. Please explain why the inter-task loss helps improve FAA and CAA. What does it tell us about the learned parameters?\n\n8. Has the importance of the inter-task loss already been explored in previous continual learning papers?\", \"minor_suggestions\": \"a. In Figure 1, reposition the labels \u201cfixed\u201d, \u201ctrainable\u201d, and \u201cgenerated\u201d since these labels affect the entire figure, not just the green block.\n\nb. In section 3.b, I suggest using emphasized text style for \u201cpool-based approach\u201d, \u201ctask-specific prompt approach\u201d, and \u201cgrowing component approach\u201d.\n\nc. In Line 242, correct the typo \u201ctem\u201d -> \u201cthem\u201d\n\nd. Line 267 mentions t_hat, but eq. 3 uses the notation t, not t_hat.\n\n------------------------------------------------------------------------\", \"final_review\": \"I thank the authors for their responses. I find that the paper proposes a novel method to leverage language descriptors within the context of continual learning. The authors have conducted a thorough experimental evaluation and demonstrated notable improvements across multiple datasets. I read the fellow reviewers' concerns and I find that the authors have sufficiently addressed the major concerns and revised their manuscript accordingly. In particular, regarding the utilization of GPT, the authors have justified that their method does not necessarily rely on compute-heavy GPTs or LLMs, and can also work without GPT-generated descriptors. 
Overall, I believe that this paper would be interesting to the research community and suggest acceptance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Third Follow Up\", \"comment\": \"Dear Reviewer UGi2,\n\nWe would like to follow up on our response. Could you please kindly review our updated manuscript regarding your concerns? Thank you.\", \"here_are_the_pointers_to_your_key_points\": \"We also copied the updates here, so you can review them directly. \n\nW1. Proposed Method Overview: Section 4.1\n\nIn this study, we propose a novel LanguagE As Prompt Generator (LEAPGen) accommodating our\nmain principles that are emphasized in the introduction section. The structure and flow of LEAPGen\nare visualized in Figure 2. LEAPGen's generators produce prompts from the top-k selected embeddings\nas input. LEAPGen also produces auxiliary (aux) data from the top-k embeddings. The prompts are\nprepended to the ViT MSAs while the aux is appended to the input patches, thus producing the feature and\nfinal prediction via the ViT layers and MLP head respectively. The top-k embeddings are selected based on\nthe cosine similarities between an input and the class-wise keys. LEAPGen limits the search space\nto tasks 1 through the predicted task t by performing soft task-id prediction. In each task of the training\nphase, LEAPGen updates the task-associated learnable parameters, i.e. the generator, task-wise key, and\nclass-wise keys. In the inference phase, LEAPGen selects the generators based on the predicted task\nt. LEAPGen utilizes a cross-entropy loss and a cosine similarity loss to optimize its parameters. Task-wise generators, language embedding for prompt generation, and soft task-id prediction distinguish LEAPGen\nfrom recent SOTAs of evolving generator methods e.g. 
(Roy et al., 2024) and (Kurniawan et al., 2024),\\ntask-wise fixed prompt methods such as (Wang et al., 2024a) and (Gao et al., 2024), and pool-based\\nmethod (Wang et al., 2022c) in terms of prompt generation/selection, task-prediction mechanism,\\nand modality for prompt generation. The detailed architecture, flow, and learning mechanism are\\npresented in sub-section 4.2 and 4.3.\\n\\nW3. The use of GPT:\\n\\n- LEAPGen-lite (lightweight generators and without descriptors) performance analysis in Section 5.2.d, Tables 2,3,4, and Figure 3.\\n\\n| Method | Split-CIFAR100 | | | | Split-ImageNet-R | | | |\\n|------------------|------------------|------------------|-----------------|-----------------|------------------|------------------|-----------------|-----------------|\\n| | FAA | CAA | FFM | CFM | FAA | CAA | FFM | CFM |\\n| | 5 Tasks @20c | | | | 5 Tasks @40c | | | |\\n| L2P | 84.77 \\u00b1 0.48 | 88.67 \\u00b1 0.30 | 6.18 \\u00b1 0.57 | 5.99 \\u00b1 0.29 | 64.62 \\u00b1 0.32 | 68.01 \\u00b1 0.42 | 3.94 \\u00b1 0.16 | 3.55 \\u00b1 0.20 |\\n| DualPrompt | 86.41 \\u00b1 0.21 | 89.95 \\u00b1 0.10 | 5.37 \\u00b1 0.21 | 4.77 \\u00b1 0.46 | 69.71 \\u00b1 0.11 | 72.78 \\u00b1 0.14 | 3.32 \\u00b1 0.16 | 2.78 \\u00b1 0.25 |\\n| CODA-P | 88.22 \\u00b1 1.06 | 92.25 \\u00b1 1.28 | 7.05 \\u00b1 2.18 | 6.06 \\u00b1 2.66 | 74.89 \\u00b1 0.36 | 79.71 \\u00b1 1.27 | 8.89 \\u00b1 0.65 | 7.65 \\u00b1 0.98 |\\n| LGCL | 86.90 \\u00b1 0.40 | 90.45 \\u00b1 0.18 | 5.01 \\u00b1 0.35 | 4.36 \\u00b1 0.13 | 69.93 \\u00b1 0.21 | 72.91 \\u00b1 0.19 | 3.04 \\u00b1 0.36 | 2.50 \\u00b1 0.38 |\\n| HiDe-Prompt | 91.99 \\u00b1 0.03 | 93.95 \\u00b1 0.09 | 2.52 \\u00b1 0.18 | 2.33 \\u00b1 0.15 | 75.40 \\u00b1 0.27 | 78.88 \\u00b1 0.04 | 3.15 \\u00b1 0.46 | 2.64 \\u00b1 0.16 |\\n| PGP | 87.69 \\u00b1 0.06 | 91.26 \\u00b1 0.13 | 5.32 \\u00b1 0.18 | 4.60 \\u00b1 0.15 | 69.71 \\u00b1 0.15 | 72.77 \\u00b1 0.07 | 3.36 \\u00b1 0.23 | 2.85 \\u00b1 0.25 |\\n| EvoPrompt | 89.07 \\u00b1 0.38 | 92.32 \\u00b1 
0.26 | 5.25 \\u00b1 0.65 | 5.39 \\u00b1 0.24 | 77.27 \\u00b1 0.40 | 81.67 \\u00b1 0.18 | 1.79 \\u00b1 0.31 | 1.41 \\u00b1 0.32 |\\n| CPrompt | 89.22 \\u00b1 0.05 | 93.09 \\u00b1 0.06 | 5.02 \\u00b1 0.17 | 4.31 \\u00b1 0.35 | 78.65 \\u00b1 0.00 | 82.44 \\u00b1 0.00 | 6.00 \\u00b1 0.00 | 5.49 \\u00b1 0.00 |\\n| ConvPrompt | 90.26 \\u00b1 0.44 | 93.49 \\u00b1 0.19 | 3.64 \\u00b1 0.28 | 3.25 \\u00b1 0.16 | 79.36 \\u00b1 0.08 | 82.93 \\u00b1 0.24 | 3.42 \\u00b1 0.05 | 2.36 \\u00b1 0.16 |\\n| **LEAPGen-lite** | **97.07 \\u00b1 0.08** | **97.28 \\u00b1 0.10** | **0.05 \\u00b1 0.01** | **0.02 \\u00b1 0.01** | **82.44 \\u00b1 0.63** | **84.37 \\u00b1 0.90** | **0.43 \\u00b1 0.08** | **0.17 \\u00b1 0.06** |\\n| **LEAPGen** | **96.84 \\u00b1 0.12** | **96.85 \\u00b1 0.26** | **0.08 \\u00b1 0.07** | **0.06 \\u00b1 0.04** | **82.79 \\u00b1 0.32** | **85.06 \\u00b1 0.29** | **0.51 \\u00b1 0.04** | **0.18 \\u00b1 0.07** |\"}", "{\"summary\": \"This paper presents a novel approach to address the catastrophic forgetting in continual learning through a prompt-based structure. This method incorporates language inputs for prompt generation and utilizes task-wise generators and soft task-ID prediction. The authors highlight the advantages over existing methods across various datasets, showcasing substantial improvements accuracy while minimizing forgetting metrics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors systematically review the limitations of existing prompt-based methods, and Figure 1 commendably illustrates the comparison of these methods.\", \"The proposed method demonstrates performance that far exceeds current methods, which is pleasantly surprising.\", \"The paper introduces the novel and meaningful use of large language models to generate more refined descriptions.\"], \"weaknesses\": [\"The writing of this paper needs significant improvement. Additionally, the structure is overly compact, affecting readability. 
For instance, the theorem presented in the paper has weak relevance to the proposed method, and I believe it is unnecessary. I suggest moving it to the supplementary materials. Given the complexity and sophistication of the method, I encourage the authors to provide an overview initially.\", \"The provided ablation study is not sufficiently convincing. I recommend that the authors validate the effectiveness of the proposed modules step-by-step starting from a standard baseline.\", \"The inclusion of the ChatGPT model raises the question of how to ensure the fairness of experimental comparisons under such settings, and to what extent the performance gains are due to more accurate descriptions.\", \"Considering the authors (are about to) open-source their code, I suggest making the descriptions obtained via ChatGPT publicly available for deeper analysis. I believe this will enhance the impact of the paper.\"], \"questions\": \"Please see the comments of weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Second Follow Up\", \"comment\": \"Dear Reviewer J5sH,\\n\\nWe would like to follow up on our response to your comments. Could you please kindly check our revised manuscript?\", \"here_are_the_pointers_to_your_key_points\": \"W1. Details of ViT and Text Encoder: Appendix E. Detailed Experimental Setting \\n\\nW2. Increase of Parameters:\\n\\n- Section 5.2.g. Analysis of Parameters, Running Time and Storage\\n\\n- Appendix D.3. TradeOffs Between Cost and Performance, including the detailed parameter growth on each task\\n\\n- Section 5.2.d., Tables 2,3,4, and Figure 3, i.e. the analysis of LEAPGen-lite (lightweight generators and without descriptors) performance, which proves that the significant performance of our ideas is not bounded by LLM-generated descriptors or the complexity (parameters and size) of the generators. 
\\n\\nThank you.\\n\\nBest Regards,\\n\\nAuthors of Submission 6684\"}", "{\"title\": \"Third Follow Up (Cont'd)\", \"comment\": \"- LEAPGen-lite (lightweight generators and without descriptors) performance analysis in Section 5.2.d, Tables 2,3,4, and Figure 3.\\n\\n| Method | CIFAR100 | | | | ImageNet-R | | | |\\n|------------------|:----------------:|:----------------:|:---------------:|:---------------:|:----------------:|:----------------:|:---------------:|:---------------:|\\n| | FAA | CAA | FFM | CFM | FAA | CAA | FFM | CFM |\\n| | 10 Tasks @10c | | | | 10 Tasks @20c | | | |\\n| L2P | 83.84 \\u00b1 0.32 | 88.67 \\u00b1 0.16 | 6.55 \\u00b1 0.34 | 5.16 \\u00b1 0.14 | 62.50 \\u00b1 0.51 | 67.05 \\u00b1 0.47 | 5.01 \\u00b1 0.40 | 4.41 \\u00b1 0.43 |\\n| DualPrompt | 85.36 \\u00b1 0.20 | 89.77 \\u00b1 0.20 | 5.41 \\u00b1 0.33 | 4.33 \\u00b1 0.15 | 68.59 \\u00b1 0.24 | 72.18 \\u00b1 0.20 | 4.61 \\u00b1 0.07 | 3.70 \\u00b1 0.18 |\\n| CODA-P | 86.44 \\u00b1 0.16 | 91.27 \\u00b1 0.56 | 6.38 \\u00b1 1.46 | 5.09 \\u00b1 1.19 | 73.77 \\u00b1 0.50 | 79.38 \\u00b1 1.48 | 7.94 \\u00b1 0.08 | 6.72 \\u00b1 0.79 |\\n| LGCL | 85.68 \\u00b1 0.43 | 90.16 \\u00b1 0.29 | 5.46 \\u00b1 0.22 | 4.25 \\u00b1 0.32 | 68.65 \\u00b1 0.25 | 72.57 \\u00b1 0.19 | 4.75 \\u00b1 0.33 | 3.38 \\u00b1 0.58 |\\n| HiDe-Prompt | 92.89 \\u00b1 0.11 | 95.01 \\u00b1 0.08 | 1.98 \\u00b1 0.05 | 1.56 \\u00b1 0.15 | 75.75 \\u00b1 0.40 | 79.27 \\u00b1 0.17 | 2.29 \\u00b1 0.27 | 2.33 \\u00b1 0.17 |\\n| PGP | 86.36 \\u00b1 0.19 | 90.83 \\u00b1 0.17 | 5.49 \\u00b1 0.35 | 4.28 \\u00b1 0.27 | 68.62 \\u00b1 0.14 | 72.19 \\u00b1 0.20 | 4.53 \\u00b1 0.40 | 3.63 \\u00b1 0.35 |\\n| EvoPrompt | 88.17 \\u00b1 0.51 | 92.18 \\u00b1 0.49 | 5.39 \\u00b1 0.45 | 3.97 \\u00b1 0.73 | 76.00 \\u00b1 0.26 | 80.97 \\u00b1 0.30 | 4.22 \\u00b1 0.42 | 3.59 \\u00b1 0.52 |\\n| CPrompt | 86.92 \\u00b1 1.04 | 91.73 \\u00b1 0.66 | 5.43 \\u00b1 0.74 | 4.01 \\u00b1 0.81 | 76.32 \\u00b1 0.53 | 81.50 \\u00b1 0.30 | 6.10 \\u00b1 0.75 | 5.60 
\\u00b1 1.35 |\\n| ConvPrompt | 88.77 \\u00b1 0.24 | 92.71 \\u00b1 0.04 | 4.12 \\u00b1 0.44 | 2.67 \\u00b1 0.11 | 77.08 \\u00b1 0.26 | 81.47 \\u00b1 0.10 | 4.17 \\u00b1 0.04 | 3.11 \\u00b1 0.17 |\\n| **LEAPGen-lite** | **98.58 \\u00b1 0.03** | **98.69 \\u00b1 0.10** | **0.11 \\u00b1 0.03** | **0.06 \\u00b1 0.03** | **82.38 \\u00b1 1.04** | **85.14 \\u00b1 0.52** | **3.01 \\u00b1 1.19** | **2.13 \\u00b1 0.60** |\\n| **LEAPGen** | **98.38 \\u00b1 0.15** | **98.15 \\u00b1 0.39** | **0.10 \\u00b1 0.03** | **0.05 \\u00b1 0.00** | **84.09 \\u00b1 0.93** | **85.54 \\u00b1 0.65** | **1.46 \\u00b1 1.25** | **2.11 \\u00b1 1.21** |\\n| | 20 Tasks @5c | | | | 20 Tasks @10c | | | |\\n| L2P | 81.89 \\u00b1 0.38 | 87.16 \\u00b1 0.33 | 8.81 \\u00b1 0.10 | 6.79 \\u00b1 0.33 | 57.40 \\u00b1 0.31 | 63.33 \\u00b1 0.21 | 10.76 \\u00b1 0.45 | 7.88 \\u00b1 0.17 |\\n| DualPrompt | 82.32 \\u00b1 0.22 | 87.47 \\u00b1 0.24 | 6.88 \\u00b1 0.35 | 5.63 \\u00b1 0.23 | 65.19 \\u00b1 0.17 | 70.31 \\u00b1 0.29 | 7.30 \\u00b1 0.18 | 5.16 \\u00b1 0.34 |\\n| CODA-P | 81.29 \\u00b1 0.16 | 87.72 \\u00b1 0.44 | 6.82 \\u00b1 1.60 | 4.98 \\u00b1 0.95 | 70.55 \\u00b1 0.71 | 77.08 \\u00b1 1.02 | 8.23 \\u00b1 0.86 | 6.95 \\u00b1 0.70 |\\n| LGCL | 83.18 \\u00b1 0.40 | 88.63 \\u00b1 0.18 | 7.22 \\u00b1 0.47 | 4.91 \\u00b1 0.56 | 64.96 \\u00b1 0.67 | 70.18 \\u00b1 0.37 | 7.35 \\u00b1 0.65 | 5.05 \\u00b1 0.32 |\\n| HiDe-Prompt | - | 97.62 \\u00b1 0.14 | - | 0.74 \\u00b1 0.03 | - | 81.60 \\u00b1 0.48 | - | 2.23 \\u00b1 0.38 |\\n| PGP | 83.41 \\u00b1 0.35 | 89.23 \\u00b1 0.13 | 7.95 \\u00b1 0.23 | 5.66 \\u00b1 0.22 | 65.24 \\u00b1 0.25 | 70.36 \\u00b1 0.26 | 7.17 \\u00b1 0.21 | 5.09 \\u00b1 0.25 |\\n| EvoPrompt | 84.63 \\u00b1 0.21 | 89.47 \\u00b1 0.21 | 9.19 \\u00b1 0.41 | 7.39 \\u00b1 0.65 | 74.93 \\u00b1 0.64 | 79.92 \\u00b1 0.13 | 6.72 \\u00b1 0.90 | 5.67 \\u00b1 0.26 |\\n| CPrompt | 83.60 \\u00b1 0.00 | 90.10 \\u00b1 0.00 | 6.47 \\u00b1 0.00 | 4.78 \\u00b1 0.00 | 74.23 \\u00b1 0.17 | 79.82 \\u00b1 0.51 | 5.98 
\\u00b1 0.24 | 5.54 \\u00b1 0.48 |\\n| ConvPrompt | 87.21 \\u00b1 0.20 | 91.60 \\u00b1 0.36 | 5.47 \\u00b1 0.33 | 3.92 \\u00b1 0.33 | 73.93 \\u00b1 0.36 | 78.92 \\u00b1 0.37 | 4.87 \\u00b1 0.57 | 3.57 \\u00b1 0.25 |\\n| **LEAPGen-lite** | **95.28 \\u00b1 3.37** | **98.38 \\u00b1 1.13** | **1.08 \\u00b1 1.54** | **0.70 \\u00b1 0.99** | **83.67 \\u00b1 0.39** | **85.65 \\u00b1 0.33** | **1.06 \\u00b1 0.24** | **0.47\\u00b1 0.14** |\\n| **LEAPGen** | **96.51 \\u00b1 2.16** | **98.73 \\u00b1 0.26** | **0.66 \\u00b1 0.75** | **0.52 \\u00b1 0.39** | **87.03 \\u00b1 0.12** | **87.81 \\u00b1 0.48** | **2.17 \\u00b1 0.17** | **2.54 \\u00b1 0.77** |\"}", "{\"comment\": \"Thank you for these detailed experiments.\\nFrom the revisited paper, I now clearly understand that the increase in parameters is reasonable, and comparable with other research works. These results answer my questions, and I decide to increase the rate accordingly.\"}", "{\"title\": \"Summary of Revision and Discussion\", \"comment\": \"Dear Program Chairs (PC), Senior Area Chairs (SAC), Area Chairs (AC), and Reviewers,\\n\\n\\nAs the discussion phase has ended,\\n\\n**First**, we thank reviewers PC, SAC, AC, and Reviewers for your effort and time in organizing the review and discussion of our paper. Through these processes, we have improved the technical quality and presentation of our paper significantly. We thank the reviewers ueBy and J5sH for reviewing our paper and/or its revision thoroughly, confirming the novelty and significance of our works. \\n\\n\\n**Second**, we have revised our paper following the reviewers' advice in both writing i.e. **layout, presentation, details, tables, figures, etc**, and technical aspects i.e. 
**method overview, ablation study, the trade-off between performance and cost, empirical/numerical evidence of task similarity, language embedding similarity, setting details, etc** as we mentioned in our previous official comments.\\n\\n\\n**Third**, we would like to confirm that we have addressed all concerns and questions of the reviewers in our revised paper and responses. However, we would like to emphasize a few key points that reviewers UGi2 and/or bACr may not have reviewed thoroughly in the discussion phase, as follows:\\n\\n\\n1. **Fairness Setting**: LEAPGen utilizes the same resources as ConvPrompt, and LEAPGen-lite (no descriptors, with lightweight generators) utilizes fewer resources than ConvPrompt and comparable resources to the other SOTAs (sec. 5.1, sec. 5.2.g, Appendix E). All methods are evaluated in the same setting and run with their respective best (recommended by the official paper/code) hyperparameters. Thus we believe we have satisfied the fairness aspect in our study.\\n\\n\\n2. **The Non-Mandatory Presence of GPT**: \\n\\n- **GPT is not part of our method** (sec. 4.2.a and fig. 2). It is one of the options to generate class descriptors, following ConvPrompt. Our method accommodates both class descriptors (as in ConvPrompt) and class names (as in LGCL) as language text. Even without GPT-generated descriptors, our method works excellently and significantly outperforms the existing SOTAs, as demonstrated by LEAPGen-lite (sec. 5.2.d) and LEAPGen-CN (Appendix D.4). \\n\\n- **GPT is utilized in an online way** (through the Internet), by querying it using the API and a Python script. This is a common practice nowadays, as in our simulation (please see https://platform.openai.com/docs/guides/). Thus we never download and save the GPT model in our storage. LEAPGen indeed saves the descriptor embeddings (numerical vectors), but not the GPT model. 
The embedding adds to the whole model's storage size but is still in a reasonable amount, and we have included it in our storage analysis. \\n\\n- **Descriptor generation by GPT** (before training) via only query indeed takes a fair amount of time. However, despite spending extra time on it, our method still has a **lower total running time** than existing SOTAs, i.e. ConvPrompt and HidePrompt (sec. 5.2.g, Appendix D.3).\\n\\n- **Descriptor impact** on the model performance has been demonstrated by the performance difference between LEAPGen and LEAPGen-CN (Appendix D.4). We also extend our analysis of the performance of LEAPGen vs ConvPrompt with various types of descriptors (Appendix D.5).\\n\\n\\n3. **LEAPGen-lite** proves the significant impacts of our ideas, i.e. (1) language as input for prompt generation, (2) task-wise generators, (3) soft task-id predictor, and (4) learning with auxiliary, despite not using GPT-generated descriptors (it only utilizes class names), and utilizes the least parameters, running time, and storage (sec. 5.2.d, sec. 5.2.g, Appendix D.3). \\n\\n\\nWe believe these key points **resolve all concerns** regarding the **fairness setting** and **GPT (non-mandatory) presence**, and prove the **significant impacts** of our ideas, i.e. (1) language as input for prompt generation, (2) task-wise generators, (3) soft task-id predictor, and (4) learning with auxiliary, without being bound to the presence of GPT. \\n\\n\\nI think that's all. Once again, thank you to all the committee; we pray that the hard work and dedication of the committee will be paid off by the success and impact of this year's ICLR. Thank you.\\n\\n\\nBest Regards,\\n\\nAuthor of Submission 6684\"}", "{\"title\": \"Author Response to Reviewer bACr for Q3/W3-Q4/W4\", \"comment\": \"Q3/W3. Although the proposed method achieves state-of-the-art performance on multiple datasets, the improvement is minor on some, such as the comparison with HiDe-Prompt on the CUB200 dataset. 
This is particularly relevant given the substantial additional resources, like ChatGPT, required by the method, which other approaches do not utilize. Moreover, this suggests that the performance gains on fine-grained datasets may not be significant, as the generated descriptive terms for each class are quite similar.\", \"our_response\": \"Thank you for your suggestion. We have ensured the clear spacing for the mentioned parts and other components to improve the visualization of our paper. Please kindly see our latest manuscript.\"}", "{\"title\": \"Second Follow Up\", \"comment\": \"Dear Reviewer UGi2,\\n\\nWe would like to follow up on our response. Could you please kindly review our updated manuscript regarding your concerns?\", \"here_are_the_pointers_to_your_key_points\": \"W1. Proposed Method Overview: Section 4.1\\n\\nW1. Presentation Improvement: Please check our latest manuscript.\\n\\n\\nW2. Step-by-step Ablation: Section 5.2.e.\\n\\n\\nW3. The use of GPT: \\n\\n- LEAPGen-lite (lightweight generators and without descriptors) performance analysis in Section 5.2.d, Tables 2,3,4, and Figure 3, which proves that the significant performance of our ideas is not bounded by LLM-generated descriptors or the complexity (parameters and size) of the generators. \\n\\n- Running time analysis in Section 5.2.g.\\n\\n- Detailed setting in Appendix E; LEAPGen and ConvPrompt use the same setting.\\n\\nW4. Open the descriptors: Please check https://anonymous.4open.science/r/xt124j05/descriptors/.\\n\\n\\nThank you.\\n\\nBest Regards, \\n\\nAuthor of Submission 6684\"}" ] }
9aTZf71uiD
Sports-Traj: A Unified Trajectory Generation Model for Multi-Agent Movement in Sports
[ "Yi Xu", "Yun Fu" ]
Understanding multi-agent movement is critical across various fields. The conventional approaches typically focus on separate tasks such as trajectory prediction, imputation, or spatial-temporal recovery. Considering the unique formulation and constraint of each task, most existing methods are tailored for only one, limiting the ability to handle multiple tasks simultaneously, which is a common requirement in real-world scenarios. Another limitation is that widely used public datasets mainly focus on pedestrian movements with casual, loosely connected patterns, where interactions between individuals are not always present, especially at a long distance, making them less representative of more structured environments. To overcome these limitations, we propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs, adaptable to diverse scenarios in the domain of sports games. Specifically, we introduce a Ghost Spatial Masking (GSM) module, embedded within a Transformer encoder, for spatial feature extraction. We further extend recent State Space Models (SSMs), known as the Mamba model, into a Bidirectional Temporal Mamba (BTM) to better capture temporal dependencies. Additionally, we incorporate a Bidirectional Temporal Scaled (BTS) module to thoroughly scan trajectories while preserving temporal missing relationships. Furthermore, we curate and benchmark three practical sports datasets, Basketball-U, Football-U, and Soccer-U, for evaluation. Extensive experiments demonstrate the superior performance of our model. We hope that our work can advance the understanding of human movement in real-world applications, particularly in sports. Our datasets, code, and model weights are available here https://github.com/colorfulfuture/UniTraj-pytorch.
[ "Trajectory Modeling", "Trajectory Generation", "Trajectory Prediction", "Trajectory Imputation" ]
Accept (Poster)
https://openreview.net/pdf?id=9aTZf71uiD
https://openreview.net/forum?id=9aTZf71uiD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zR5E7pcJ5J", "w7ubtjvU5T", "v6FuHsJ69a", "s66ubGplVi", "rARIKrB1IG", "pqvdR1ZEIV", "mG91BGnaXS", "ldYyvcH7WR", "lcz1Sy4tTr", "javihAZZvj", "fX9eFVgYzf", "fOyhIdu13G", "e0uSJ6irwi", "agwuuK1u9l", "X2eeXVjRD9", "UkKHbqWE0T", "Tqc7anhAqC", "PStJUZlizs", "Jxq1MICjJz", "JD4pAeCGIh", "HYVDP26Hvh", "FBG44JoIiq", "CPJbrdFN5L", "Af9HbJzx97", "6zjO3QTud8", "5tdrj8fk4N", "2ftxASxHVh", "0gmyndUs9n" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732333025605, 1733186637376, 1729954664301, 1732435678502, 1732938138654, 1732391493506, 1732123074056, 1732648519393, 1732613010432, 1734677998461, 1732344687768, 1732463593032, 1737523652238, 1732390243208, 1729758883579, 1730608125481, 1732397575456, 1732343751559, 1732435476694, 1732844759823, 1732475965856, 1732343769967, 1732453542580, 1729842340998, 1732342892950, 1732397308071, 1732391517682, 1732476181770 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_ZPa2" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_HqF9" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_YWxk" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_uin8" ], [ "ICLR.cc/2025/Conference/Submission4628/Area_Chair_7sW3" 
], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_uin8" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_YWxk" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_YWxk" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_YWxk" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_HqF9" ], [ "ICLR.cc/2025/Conference/Submission4628/Reviewer_ZPa2" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ], [ "ICLR.cc/2025/Conference/Submission4628/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"Thank you so much. Your response has addressed my concerns.\"}", "{\"comment\": \"Dear Reviewer uin8,\\n\\nThank you once again for your valuable feedback. As the discussion deadline is approaching, if you have any further questions or concerns, we would be more than happy to address them.\\n\\nBest regards,\\n\\nAuthors of Submission 4628\"}", "{\"summary\": \"This paper proposed a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs, adaptable to diverse scenarios in the domain of sports games, integrating trajectory prediction, imputation, and spatial-temporal recovery into a single framework. 
Key contributions include the development of a Ghost Spatial Masking module for spatial feature extraction, a Bidirectional Temporal Mamba encoder for enhanced temporal modeling, and the curation of three new sports datasets for robust evaluation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"originality: This paper proposes a novel approach for handling multiple tasks, including trajectory prediction, imputation, and spatial-temporal recovery for multi-agent movement analysis. It introduces the innovative Ghost Spatial Masking module and extends the Mamba model with a new Bidirectional Temporal Scaled module to enhance the extraction of comprehensive spatial-temporal features from various incomplete trajectory inputs.\", \"quality\": \"The quality of this paper is generally good. The method is well explained.\", \"clarity\": \"The paper is well-written and organized logically. The use of figures, especially the architectural diagrams and flowcharts, effectively aids in understanding the model's components and their interactions. Each section of the paper builds upon the previous one, leading to a cohesive narrative from problem formulation to experimental validation.\", \"significance\": \"The significance of this research lies in its potential impact on sports analytics and related fields requiring accurate multi-agent trajectory analysis.\", \"weaknesses\": \"The research problem is not well explained. The paper does not explain why all-in-one methods for the three tasks, trajectory prediction, imputation, and spatial-temporal recovery, have not been proposed. What are the challenges? The proposed methods should be compared separately with relevant methods for each task. Although the paper tests the model on three sports datasets, sports differ significantly in their dynamics and player interactions. More varied tests are needed.\", \"questions\": \"1. 
Why have all-in-one methods for the three tasks, trajectory prediction, imputation, and spatial-temporal recovery, not been proposed.?\\n2. What are the challenges? \\n3. Comparisons with other methods should be given.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The new experiments on ETH/UCY and SDD are helpful to understand the performance of the proposed method on usual trajectory prediction benchmarks. I will take the new experiments into consideration when rethinking the review rating.\"}", "{\"comment\": \"Thank you for your feedback and recognizing our work with an improved score!\\n\\nYour review has been incredibly helpful in improving the quality of our paper. We will definitely include the additional experiments and discussion of related works in our final version.\\n\\nIf you have any further questions, we are more than happy to answer them.\"}", "{\"title\": \"Response to Reviewer YWxk (Part 2/3)\", \"comment\": \"**[Q2] Ball and Category**\\n\\n**[A2]** That\\u2019s an insightful point. Following your suggestion, we conducted three experiments: \\n(1) keeping the ball while removing the offensive/defensive player categories,\\n(2) removing the ball while keeping and concatenating the offensive/defensive player categories, \\n(3) removing both the ball and the offensive/defensive player categories. 
\\nThe results are shown in the following tables.\\n\\n| Basketball | Variant | | | | | | | | \\n|--------------------|:-----------:|:--------------:|:--------------:|:-------------:|:-----------:|:------:|:--------:|:---------:|\\nMethod | Ball | Category | minADE$_{20}$ | OOB | Step | Path-L | Path-D |\\n| UniTraj | $\\\\checkmark$ | $\\\\times$ | 4.80 | 6.53e-04 | 0.29 | 35.23 | 143.05 |\\n| UniTraj | $\\\\times$ | $\\\\checkmark$ | 4.67 | 4.51e-04 | 0.27 | 31.95 | 113.17 |\\n| UniTraj | $\\\\times$ | $\\\\times$ | 4.65 | 3.59e-04 | 0.26 | 32.56 | 131.67 |\\n| UniTraj (Ours) | $\\\\checkmark$ | $\\\\checkmark$ | 4.77 | 6.12e-04 | 0.27 | 34.25 | 240.83 |\\n\\n\\n| Football| Variant | | | | | | | | \\n|--------------------|:-----------:|:--------------:|:--------------:|:-------------:|:-----------:|:------:|:--------:|:---------:|\\nMethod | Ball | Category | minADE$_{20}$ | OOB | Step | Path-L | Path-D |\\n| UniTraj | $\\\\checkmark$ | $\\\\times$ | 3.60 | 1.56e-04 | 0.24 | 19.34 | 138.99 |\\n| UniTraj | $\\\\times$ | $\\\\checkmark$ | 3.38 | 1.30e-04 | 0.23 | 18.85 | 111.43 |\\n| UniTraj | $\\\\times$ | $\\\\times$ |3.41 | 8.74e-05 | 0.24 | 19.13 | 122.83 |\\n| UniTraj (Ours) | $\\\\checkmark$ | $\\\\checkmark$ | 3.55 | 1.12e-04 | 0.23 | 19.26 | 114.58 |\\n\\n\\n| Soccer| Variant | | | | | | | | \\n|--------------------|:-----------:|:--------------:|:--------------:|:-------------:|:-----------:|:------:|:--------:|:---------:|\\nMethod | Ball | Category | minADE$_{20}$ | OOB | Step | Path-L | Path-D |\\n| UniTraj | $\\\\checkmark$ | $\\\\times$ | 96.09 | 1.66e-06 | 4.44 | 336.14 | 9146.07 |\\n| UniTraj | $\\\\times$ | $\\\\checkmark$ | 92.69 | 3.43e-07 | 3.70 | 289.46 | 1343.07 |\\n| UniTraj | $\\\\times$ | $\\\\times$ | 89.37 | 6.85e-07 | 4.03 | 304.41 | 2189.38 |\\n| UniTraj (Ours) | $\\\\checkmark$ | $\\\\checkmark$ | 94.59 | 3.31e-06 | 4.52 | 349.73 | 2805.79 |\\n\\nWe observe that removing the category information leads to a performance drop, as this 
information plays an important role in sports. Interestingly, better performance is achieved when removing the ball. A potential reason is that the ball's trajectory is often unstable and influenced by external forces, introducing randomness and outliers that may disrupt the model's ability to learn overall movement patterns. The gap in dynamic characteristics between the ball and player trajectories impacts learning.\\n\\nOverall, our proposed method achieves the best performance when both the ball and category information are removed. We will include these results for a fair comparison and provide additional analysis in the final version.\"}", "{\"title\": \"Response to Reviewer ZPa2\", \"comment\": \"**Dear Reviewer ZPa2**,\\n\\n***We sincerely appreciate your recognition of our contributions and your constructive suggestions to improve our manuscript.***\\nBelow, we provide detailed responses to address your concerns.\\n\\n**[Q1] LSTM/CNN replace MLP**\\n\\n**[A1]** If we understand correctly, your suggestion is to replace the MLP decoder with LSTM or CNN, as mentioned in the limitation section. Following your suggestion, we implemented more powerful networks, including LSTM and CNN decoders, to replace the MLP decoder. 
The results are as follows:\\n|Basketball | | | | | |\\n|:----------|:-------------------:|:------------:|:----------:|:---------:|:----------:|\\n| **Method** | **minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| UniTraj | 4.77 | 6.12e-04 | 0.27 | 34.25 | 240.83 |\\n| w/ LSTM | 4.54 | 0 | 0.18 | 32.03 | 135.40 |\\n| w/ CNN | 4.62 | 5.57e-05 | 0.17 | 30.86 | 126.54 |\\n\\n|Football | | | | | |\\n|:----------|:-------------------:|:------------:|:----------:|:---------:|:----------:|\\n| **Method** | **minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| UniTraj | 3.55 | 1.12e-04 | 0.23 | 19.26 | 114.58 |\\n| w/ LSTM | 2.93 | 0 | 0.15 | 15.03 | 81.97 |\\n| w/ CNN | 3.32 | 8.87e-05 | 0.14 | 15.57 | 77.88 |\\n\\n|Soccer| | | | | |\\n|:----------|:-------------------:|:------------:|:----------:|:---------:|:----------:|\\n| **Method** | **minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| UniTraj | 94.59 | 3.31e-06 | 4.52 | 349.73 | 2805.79 |\\n| w/ LSTM | 77.90 | 0 | 2.77 | 223.54 | 1195.83 |\\n| w/ CNN | 92.13 | 3.91e-05 | 2.79 | 244.08 | 1065.09 |\\n\\nFor the w/ LSTM variant, we replace the MLP decoder with a single-layer LSTM with a hidden dimension of 128. For the w/ CNN variant, we replace the MLP decoder with a three-layer Conv2D network, where the first two layers use 64 filters with a kernel size of (3, 1), and the last layer uses 64 filters with a kernel size of (1, 1).\\n\\nOur results show that using more powerful decoders improves performance across all three datasets. Among the tested decoders, LSTM outperforms CNN, as it better captures temporal dependencies between different time steps. However, since the primary contribution of our work is introducing the new trajectory generation setting, we did not devote too much to optimizing the module designs. We hope that our work inspires the community to develop more advanced network architectures for this task. 
We'll add those discussions in the final version.\\n\\n**[Q2] Equations**\\n\\n**[A2]** Thank you for your careful review. We revise Eq. 7 as follows:\\n$$\\\\overset{\\\\leftrightarrow}{F}_{bts}= 1/\\\\exp(\\\\varphi_s(\\\\overset{\\\\leftrightarrow}{S};\\\\mathbf{W}_s))$$\\nWe will double-check and further polish the draft before the final version.\\n\\n**[Q3] Params and FLOPs**\\n\\n**[A3]** Thank you for your insightful comments. Below are the results of the number of parameters and GFLOPs on the Basketball dataset, compared with advanced baselines.\\n\\n| Method | #Params | GFLOPs |\\n|:----------|:-----------:|:--------:|\\n| MAT | 9.21M | 0.39 |\\n| Naomi | $\\\\underline{2.21}$M | **0.21** |\\n| INAM | 2.40M | 0.52 |\\n| SSSD | 48.37M | 0.99 |\\n| GC-VRNN | 2.76M | 0.48 |\\n| UniTraj | **1.77M** | $\\\\underline{0.33}$ |\\n\\nOur method has a total of 1.77M model parameters and 0.33 GFLOPs. Among all the advanced baselines, our model has the smallest number of parameters and the second lowest FLOPs, slightly higher than the baseline Naomi. This is because Naomi's backbone is an RNN, which requires fewer computations compared to our Transformer and Mamba structure. These results validate the efficiency of our method. We'll add those results in the final version.\\n\\n\\n***We sincerely appreciate your valuable suggestions and insightful comments. We hope our response effectively addresses your concerns.***\"}", "{\"title\": \"Details and Insights of Addressing the Issue of Erroneous Data\", \"comment\": \"Thank you for your encouraging feedback. We greatly appreciate your interest in our proposed directions and agree that providing more concrete details will strengthen the paper\\u2019s contribution. Below, we elaborate on our initial ideas:\\n\\n* **Data Representation and Simulation**\\n\\nIn our current work, we use a binary mask (0 or 1), which does not fully capture the complexity of erroneous data. 
To better mimic real-world scenarios, we propose incorporating pre-trained detection models with images/videos as input to generate detection results for the ball and players. These detection results, while realistic, may inherently introduce errors such as misidentifications (e.g., detecting referees as players or false ball detections). This approach would enable us to build a dataset that better reflects real-world conditions and serves as a foundation for developing robust trajectory generation models.\\n\\n* **Probabilistic Modeling and Uncertainty Quantification**\\n\\nBuilding on the enriched dataset, we propose extending our current framework by using a Gaussian Mixture Model (GMM) instead of a single Gaussian distribution to represent the trajectory distribution (as in Equation 9 of our work).\\n\\n1. GMM Parameters: The model would output both the parameters of each Gaussian component and the corresponding weights $p_k$\\u200b, representing the probability of each component. For example, we can set K=20 to align with the complexity of multi-modal trajectories.\\n\\n2. Training: We can retain the Winner-Take-All loss to minimize the distance between the best Gaussian component and the ground truth while adding a cross-entropy loss to maximize the probability of the selected component.\\n\\n3. Inference: During inference, this setup provides not only generated trajectories but also their associated confidence levels (uncertainties), offering deeper insights into trajectory reliability.\\n\\n* **Pre-Filtering Module**\\nAnother potential extension involves integrating a pre-filtering module, such as an anomaly detection approach. This module could classify detected agents and objects into normal and abnormal categories, thereby mitigating error propagation into the trajectory generation process.\\n\\nWhile these ideas extend beyond the scope of our current work, they represent promising pathways to address real-world challenges effectively. 
We will add a dedicated discussion section to outline these future directions, emphasizing their importance and potential impact on the domain in our final version.\\n\\nWe hope this additional detail clarifies our vision and demonstrates our commitment to advancing the applicability of our approach. Thank you again for your insightful comments.\"}", "{\"title\": \"Please provide more concrete details on addressing the issue of erroneous data in real-world scenarios.\", \"comment\": \"Thank you for sharing your initial thoughts on addressing the issue of erroneous data in real-world scenarios. The proposed directions, such as developing a probabilistic model for representing errors and incorporating uncertainty measures into trajectory generation, are intriguing and could significantly enhance the applicability of your work. Please provide more concrete details on these ideas, as they would strengthen the paper and demonstrate the potential of your approach to handle real-world challenges effectively.\"}", "{\"metareview\": \"The authors designed a novel approach for handling multiple tasks, which include trajectory prediction, imputation, and spatial-temporal recovery, for multi-agent motion analysis. All the four reviewers pointed out that the method is good and well designed, and thus all recommended acceptance. In the camera ready version, authors need to carefully improve the paper following reviewers' comments.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, authors provided additional experiments, and additional details and justifications. Reviewers are generally satisfied with the reply.\"}", "{\"title\": \"Further Response to Reviewer ZPa2\", \"comment\": \"Dear Reviewer ZPa2,\\n\\nThank you for helping us improve our paper so far. We are very glad that we've addressed your concern. May we know if you have further questions? If not, would you consider increasing the score as a recognition of our rebuttal effort so far? 
Thank you so much!\\n\\nBest regards,\\n\\nAuthors of Submission 4628\"}", "{\"title\": \"Further Response to Reviewer HqF9\", \"comment\": \"Dear Reviewer HqF9,\\n\\nThank you for your valuable comments, which have greatly helped us improve our paper. We are very glad to address your concerns and would be happy to answer any further questions you might have. If there are no additional questions, we kindly ask if you might consider increasing the score as recognition of our rebuttal efforts. Thank you so much!\\n\\nBest regards,\\n\\nAuthors of Submission 4628\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer YWxk (Part 1/3)\", \"comment\": \"Dear Reviewer YWxk,\\n\\n***Thank you so much for your valuable suggestions and detailed comments.***\\nWe provide the following detailed responses to address your concerns.\\n\\n**[Q1] Generalizability**\\n\\n**[A1]** Thank you for your constructive comment. To assess the generalizability of our method, we followed your suggestions and conducted experiments on the ETH/UCY and SDD datasets for the trajectory prediction task. 
The results of stochastic predictions with K=20 are shown below:\\n\\nETH-UCY|ADE/FDE (K=20)| | | | | |\\n|---------------|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|\\n| **Method** | **ETH** | **Hotel** | **Univ** | **Zara1** | **Zara2** | **Average** |\\n| FlowChain[3] | 0.55/0.99 | 0.20/0.35 | 0.29/0.54 | 0.22/0.40 | 0.20/0.34 | 0.29/0.52 |\\n| MemoNet[2] | 0.40/0.61 | 0.11/0.17 | 0.24/0.43 | 0.18/0.32 | 0.14/0.24 | 0.21/0.35 |\\n| EqMotion[1] | 0.40/0.61 | 0.12/0.18 | 0.23/0.43 | 0.18/0.32 | 0.13/0.23 | 0.21/0.35 |\\n| UniTraj | 0.43/0.62 | 0.13/0.19 | 0.25/0.43 | 0.20/0.33 | 0.16/0.24 | 0.23/0.36 |\\n\\nSDD || \\n|---------------|:-----------:|\\n| **Method** | **ADE/FDE (K=20)**|\\n| FlowChain[3] | 9.93/17.17 |\\n| MemoNet[2] | 8.56/12.66 |\\n| UniTraj | 8.68/12.78 |\\n\\nOur method demonstrates better performance than FlowChain[3] and achieves results comparable to, though slightly worse than, the state-of-the-art baselines MemoNet[2] and EqMotion[1].\\n\\nOne reason is that our modules are specifically designed for sports datasets, which feature more structured interactions among players, whereas pedestrian movement patterns tend to be more casual and random. Despite this difference, our results show that the proposed modules effectively capture spatial-temporal features from observed trajectories, further validating the generalizability of our approach. \\n\\nAdditionally, our method is applicable to other trajectory-relevant tasks. 
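For reference, the ADE/FDE (K=20) numbers in the tables above follow the standard best-of-K evaluation protocol; a minimal sketch of that protocol, where the shapes and names are illustrative assumptions:

```python
import numpy as np

def min_ade_fde(preds, gt):
    """Best-of-K evaluation. preds: (K, T, 2) sampled trajectories,
    gt: (T, 2). ADE averages the per-step displacement, FDE uses only
    the final step; the minimum over the K samples is reported."""
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) distances
    return dists.mean(axis=1).min(), dists[:, -1].min()

# Toy example with K = 3 samples over T = 10 steps.
gt = np.zeros((10, 2))
preds = np.stack([np.full((10, 2), 3.0),
                  np.tile([1.0, 0.0], (10, 1)),  # constant 1-unit offset
                  np.full((10, 2), 2.0)])
ade, fde = min_ade_fde(preds, gt)
```

Reporting the minimum over K samples rewards a model whose sample set covers the ground-truth mode, which is why the same K must be used across baselines for a fair comparison.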
In the final version, we will include more baseline comparisons for trajectory prediction to further emphasize the generalizability of our method.\"}", "{\"summary\": \"The contributions of the paper are as follows:\\n(1) The authors unified trajectory prediction, imputation, and spatial-temporal recovery into a single framework.\\n(2) They introduced the UniTraj model, capable of processing arbitrary trajectories as masked inputs, making it adaptable to diverse and incomplete datasets, particularly useful in sports scenarios.\\n(3) The paper shows how the Ghost Spatial Masking (GSM) module and the Bidirectional Temporal Scaled (BTS) module help achieve state-of-the-art performance by improving spatial feature extraction and preserving temporal relationships in trajectory data.\\n(4) The authors curated and benchmarked three practical sports datasets, namely Basketball-U, Football-U, and Soccer-U, which will serve as valuable resources for other researchers working in the field of sports analytics.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents an original idea by unifying three tasks\\u2014trajectory prediction, imputation, and spatial-temporal recovery\\u2014into a single framework. The selection of network components, such as the Ghost Spatial Masking (GSM) module and the Bidirectional Temporal Scaled (BTS) module, is well justified, and their effectiveness is thoroughly evaluated through extensive experiments. The model's performance is further validated using three distinct sports datasets, demonstrating its robustness and applicability across different scenarios.\", \"weaknesses\": \"In real-world scenarios, trajectory masking isn't limited to binary values (0 or 1); detection errors often lead to incorrect trajectories. For instance, a referee might be mistakenly tracked as a player, or an incorrect ball-like object could be detected as the ball instead of the actual one. 
These errors highlight the need for models to handle not only missing but also erroneous data in trajectory prediction and recovery tasks.\", \"questions\": \"(1) At line 474, should it read \\\"The 'w/o BTS' variant excludes the Bidirectional Temporal Scaled (BTS) module\\\" instead of \\\"w/o GSM\\\"?\\n\\n(2) At line 413, why is a total of K=20 trajectories generated? What is the variability among these 20 trajectories? I understand that generating multiple trajectories is necessary due to the model being generative, but I would like to know if selecting different values for K affects the experimental results (such as minADE). Additionally, were the baseline models evaluated under similar conditions, such as using K=20 trajectories?\\n\\n(3) In Table 2, the minADE for UniTraj is reported as 94.59 pixels. What is the size of the soccer field in pixels? I want to understand how significant this error is in real-world measurements. Also, will you provide visual examples to demonstrate the quality of the generated trajectories, particularly for average cases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors focus on trajectory prediction in sport scenes. They propose a trajectory generation model. They extend the Mamba model into a bidirectional temporal Mamba to enhance temporal dependencies, with a transformer encoder as the feature extractor. The proposed method aims to solve the trajectory prediction, imputation and spatial-temporal recovery tasks in a unified paradigm. The applied methods are designed specifically for sports scenarios, but some proposed modules are applicable to more general tasks. The authors construct several sports-focused datasets from existing datasets. 
On the proposed benchmarking platform, the proposed method achieves good experimental results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is well designed for sport scenes, such as soccer or basketball. By also tracking and forecasting the ball motion, the proposed method can be useful for related sport activities.\", \"The proposed ghost masking embedding, used to replace the usual head token, provides order-invariant properties to the agent features, which can be extended to many related applications without losing generalization.\", \"Built upon the usual Mamba blocks, the proposed Bidirectional Temporal Mamba provides an improved way of processing spatial-temporal features, with good adaptation to handle features at missing time steps.\", \"On the sports datasets (Basketball, Soccer and Football videos), the proposed method shows good quantitative results when compared to other related works.\", \"Overall, the paper is well written and the technical section can be easily followed.\"], \"weaknesses\": \"1. My main concern about the proposed method is its generalizability. To be precise, the proposed method is evaluated on the three datasets Basketball-U, Soccer-U and Football-U, which are built by the authors themselves, thus lacking well-established benchmarking. Therefore, it is hard to estimate the significance of the experimental performance from the provided benchmarking results. I would suggest the authors add experiments on existing benchmarks, such as SDD, ETH/UCY or HM3.6M, and include recently published trajectory prediction methods in the comparison for a more convincing and well-established benchmarking.\\n2. The proposed method models the ball trajectory and offensive/defensive player positions explicitly. It is not clear whether the other baselines also follow this convention. 
For many previous works on the NBA benchmark, on which the Basketball-U dataset is based, only the players' positions are considered. Such details of implementation alignment between the proposed method and the baseline methods included in the benchmark results are critical to conducting a fair experiment and providing reliable experimental evidence.\\n3. Many more recent related works, though mostly benchmarked on ETH/UCY and SDD, should have been included in the experimental comparison to provide an up-to-date evaluation of the quantitative significance of the proposed method. To name some: EqMotion[1], MemoNet[2], FlowChain[3].\", \"reference\": \"[1] \\u201c**EqMotion: Equivariant Multi-Agent Motion Prediction with Invariant Interaction Reasoning\\u201d, CVPR 2023**\\n\\n[2] \\u201c**Remember Intentions: Retrospective-Memory-based Trajectory Prediction\\u201d, CVPR 2022**\\n\\n[3] \\u201c**Fast Inference and Update of Probabilistic Density Estimation on Trajectory Prediction\\u201d, ICCV 2023**\", \"questions\": \"My questions to be answered and my concerns to be addressed have been discussed in the `weakness` section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer HqF9 (Part 2/2)\", \"comment\": \"**[Q3] Comparison**\\n\\n**[A3]** Thank you for your valuable suggestions. We conducted experiments to separately compare our method with relevant approaches for each task.\\n\\nFor the prediction task, we evaluated our method on the pedestrian datasets ETH-UCY and SDD. 
The results are shown as follows:\\n\\nETH-UCY|ADE/FDE (K=20)| | | | | |\\n|---------------|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|\\n| **Method** | **ETH** | **Hotel** | **Univ** | **Zara1** | **Zara2** | **Average** |\\n| FlowChain[5] | 0.55/0.99 | 0.20/0.35 | 0.29/0.54 | 0.22/0.40 | 0.20/0.34 | 0.29/0.52 |\\n| MemoNet[4] | 0.40/0.61 | 0.11/0.17 | 0.24/0.43 | 0.18/0.32 | 0.14/0.24 | 0.21/0.35 |\\n| EqMotion[3] | 0.40/0.61 | 0.12/0.18 | 0.23/0.43 | 0.18/0.32 | 0.13/0.23 | 0.21/0.35 |\\n| UniTraj | 0.43/0.62 | 0.13/0.19 | 0.25/0.43 | 0.20/0.33 | 0.16/0.24 | 0.23/0.36 |\\n\\nSDD || \\n|---------------|:-----------:|\\n| **Method** | **ADE/FDE (K=20)**|\\n| FlowChain[5] | 9.93/17.17 |\\n| MemoNet[4] | 8.56/12.66 |\\n| UniTraj | 8.68/12.78 |\\n\\nOur method outperforms FlowChain[5] and achieves results comparable to, though slightly worse than, the state-of-the-art baselines MemoNet[4] and EqMotion[3]. However, these methods are challenging to adapt to other tasks, such as trajectory imputation, because their designs are specifically tailored for prediction tasks and are not well-suited for broader applications. Additionally, our focus is on modeling players within the sports domain, which differs from the pedestrian scenarios addressed by these methods.\\n\\nFor the imputation and recovery tasks, we conducted experiments on the recently open-sourced time-series imputation dataset Traffic-Guangzhou, comparing our method with the latest imputation approaches, CSBI[6] and BayOTIDE[7]. The results are shown as follows:\\n\\nTraffic-GuangZhou|||\\n|---------------|:-----------:|:-----------:|\\n| **Method** | **RMSE**| **MAE**|\\n|CSBI[6] | 4.790 | 3.182|\\n| BayOTIDE[7] | 3.820 | 2.687|\\n| UniTraj | 3.942 | 2.784|\\n\\nOur method outperforms CSBI[6] in both RMSE and MAE and achieves results comparable to, though slightly worse than, BayOTIDE[7]. 
One reason for this is that time-series datasets, unlike multi-agent datasets, lack the dense and structured interactions present among sports players. Additionally, these baselines are challenging to adapt to the trajectory prediction task.\\n\\nIn our submission, the baselines we compared, such as INAM[8] and GC-VRNN[1], were originally proposed for joint imputation and prediction tasks, and the empirical results demonstrate the superiority of our method. We will include these experiments and provide additional discussions in our final version to further showcase the effectiveness and generalizability of our proposed approach.\\n\\n[1] Uncovering the Missing Pattern: Unified Framework Towards Trajectory Imputation and Prediction. CVPR 2023. \\n[2] Multiple-Level Point Embedding for Solving Human Trajectory Imputation with Prediction. TSAS 2023. \\n[3] EqMotion: Equivariant Multi-Agent Motion Prediction with Invariant Interaction Reasoning. CVPR 2023. \\n[4] Remember Intentions: Retrospective-Memory-based Trajectory Prediction. CVPR 2022. \\n[5] Fast Inference and Update of Probabilistic Density Estimation on Trajectory Prediction. ICCV 2023. \\n[6] Provably Convergent Schr\\u00f6dinger Bridge with Applications to Probabilistic Time Series Imputation. ICML 2023. \\n[7] BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition. ICML 2024. \\n[8] Imitative Non-Autoregressive Modeling for Trajectory Forecasting and Imputation. CVPR 2020. \\n\\n\\n***We sincerely appreciate your valuable suggestions and insightful comments.*** We hope our response effectively addresses your concerns.\"}", "{\"title\": \"Response to Reviewer uin8 (Part 2/3)\", \"comment\": \"**[Q2] Sampling K Trajectories**\\n\\n**[A2]** That\\u2019s a very insightful question. We follow pioneering trajectory prediction works[1][2] in setting K=20 for multiple trajectory generation to account for inherent multimodality, where multiple plausible paths can exist. 
Based on your suggestion, we evaluated our trained models with different values of K (K=10, K=20, K=30) and reported the mean and standard deviation (std) of all metrics across three datasets. **UniTraj*** indicates that the results are presented as mean\\u00b1std.\\n\\n|Basketball | | | | | | |\\n|------------|------------|:---------------------|:-------------------|:----------------|:-----------------------|:------------------------|\\n| **Method** | **K**|**minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| UniTraj |K=10 | 4.7667 | 6.11e-04 | 0.27 | 34.25 | 241.06 |\\n| **UniTraj*** |K=10 | 4.7671\\u00b12.09e-05 | 6.10e-04\\u00b11.39e-06 | 0.27\\u00b18.61e-06 | 34.25\\u00b13.62e-04 | 240.95\\u00b10.22 |\\n| UniTraj |K=20 | 4.7668 | 6.12e-04 | 0.27 | 34.25 | 240.83 |\\n| **UniTraj***| K=20 | 4.7671\\u00b1-1.74e-05 | 6.11e-04\\u00b12.17e-06 | 0.27\\u00b12.32e-06 | 34.25\\u00b13.50e-04 | 240.92\\u00b10.27 |\\n| UniTraj |K=30 | 4.7666 | 6.08e-04 | 0.27 | 34.25 | 240.84 |\\n| **UniTraj*** |K=30 | 4.7670\\u00b1-1.56e-05 | 6.10e-04\\u00b11.84e-06 | 0.27\\u00b12.24e-06 | 34.25\\u00b13.65e-04 | 240.99\\u00b10.25 |\\n\\n|Football| | | | | | |\\n|------------|------------|:---------------------|:-------------------|:----------------|:-----------------------|:------------------------|\\n| **Method** | **K**|**minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| UniTraj |K=10 | 3.5497 | 1.11e-04 | 0.23 | 19.27 | 115.74 |\\n| **UniTraj*** |K=10 | 3.5502 \\u00b1 5.84e-05 | 1.12e-04 \\u00b1 9.84e-07 | 0.23 \\u00b1 5.14e-05 | 19.27 \\u00b1 1.39e-03 | 116.39 \\u00b1 0.63 |\\n| UniTraj |K=20 | 3.5499 | 1.12e-04 | 0.23 | 19.26 | 114.58 |\\n| **UniTraj*** |K=20 | 3.5502 \\u00b1 5.90e-05 | 1.12e-04 \\u00b1 9.35e-07 | 0.23 \\u00b1 5.77e-05 | 19.27 \\u00b1 1.97e-03 | 115.88 \\u00b1 0.52 |\\n| UniTraj |K=30 | 3.5495 | 1.11e-04 | 0.23 | 19.27 | 116.30 |\\n| **UniTraj*** |K=30 | 3.5502 \\u00b1 6.99e-05 | 1.12e-04 \\u00b1 9.95e-07 | 0.23 \\u00b1 5.50e-05 | 
19.27 \\u00b1 2.13e-03 | 115.79 \\u00b1 0.60 |\\n\\n|Soccer| | | | | | |\\n|------------|------------|:---------------------|:-------------------|:----------------|:-----------------------|:------------------------|\\n| **Method** | **K**|**minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| UniTraj | K=10 |94.5840 | 3.65e-06 | 4.52 | 349.67 | 2778.11 |\\n| **UniTraj*** | K=10 |94.5994 \\u00b1 1.58e-03 | 3.48e-06 \\u00b1 1.66e-07 | 4.52 \\u00b1 1.20e-03 | 349.72 \\u00b1 0.08 | 2765.24 \\u00b1 24.59 |\\n| UniTraj | K=20 |94.5909 | 3.31e-06 | 4.52 | 349.73 | 2805.79 |\\n| **UniTraj*** | K=20 |94.5991 \\u00b1 1.53e-03 | 3.38e-06 \\u00b1 1.99e-07 | 4.52 \\u00b1 1.18e-03 | 349.68 \\u00b1 0.06 | 2753.72 \\u00b1 20.97 |\\n| UniTraj | K=30 |94.5778 | 3.65e-06 | 4.52 | 349.61 | 2783.75 |\\n| **UniTraj*** | K=30 |94.5994 \\u00b1 1.83e-03 | 3.41e-06 \\u00b1 2.12e-07 | 4.51 \\u00b1 1.32e-03 | 349.69 \\u00b1 0.07 | 2771.65 \\u00b1 18.72 |\\n\\nThe results for all five metrics remain very similar across the three datasets with different values of K. In addition, both minADE$_{K}$ and mean ADE are very close, with consistently low standard deviations.\\n\\nFor the baselines, we implemented MAT, Naomi, INAM, and SSSD for deterministic generation, and we implemented GC-VRNN with $K=20$ by sampling $Z$ from its prior distribution and reporting the minimum metrics.\\n\\nWe will include all these results and further discuss this point in the Experiment Section. Additionally, we will provide supplementary implementation details for the baselines in the final version.
The new experiments provide clearer and stronger evidence of the experimental advantage of the proposed method on the sports datasets.\"}", "{\"comment\": \"I appreciate the feedback, extra experiments, and discussion provided by the authors.\\n\\nIn the original review, my main concern was about generalizability, as the paper originally only reported experiments on the newly built datasets. However, the authors provided extra experiments on more canonical and standard benchmarks during the rebuttal. Though the performance there is not as significant, the extra experiments do relieve my concern about generalizability.\\n\\nAlso, the authors promised to add a discussion of recent related works, the lack of which was another part of my original concern.\\n\\nGiven the improvements above, I have adjusted my rating for the paper.\"}", "{\"title\": \"Further Evaluations on Removing Ball\", \"comment\": \"Dear Reviewer YWxk,\\n\\nThank you for your recognition of our efforts in conducting ablation experiments. To further explore and provide deeper insights into this interesting finding, we conducted additional evaluations by removing the ball.\\n\\nSpecifically, we evaluated our trained models by separately assessing the metrics for \\u201conly players\\u201d and \\u201conly ball\\u201d across three datasets. 
The corresponding ground truth (GT) values are also in the following tables for reference.\\n\\n| Basketball Evaluation| | | | | | | | \\n|--------------------|-----------|:--------------:|:-------------:|:-----------:|:------:|:--------:|:---------:|\\n|**Method** | **Variant** | **minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| GT | complete | 0 | 0 | 0.17 | 37.61 | 269.49 |\\n| UniTraj | complete | 4.77 | 6.12e-04 | 0.27 | 34.25 | 240.83 |\\n| GT | only players | 0 | 0 | 0.12 | 34.13 | 261.86 |\\n| UniTraj | only players | 4.57 | 6.53e-04 | 0.25 | 31.99 | 117.83 |\\n| GT | only ball | 0 | 0 | 0.68 | 72.36 | 261.86 |\\n| UniTraj | only ball| 6.48 | 1.61e-04 | 0.52 | 56.82 | 233.53 |\\n\\n| Football Evaluation| | | | | | | | \\n|--------------------|-----------|:--------------:|:-------------:|:-----------:|:------:|:--------:|:---------:|\\n|**Method** | **Variant** | **minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| GT | complete | 0 | 0 | 0.03 | 12.56 | 76.68 |\\n| UniTraj | complete | 3.55 | 1.12e-04 | 0.23 | 19.26 | 114.58 |\\n| GT | only players | 0 | 0 | 0.02 | 11.95 | 49.73 |\\n| UniTraj | only players | 3.47 | 5.94e-04 | 0.23 | 19.07 | 114.17 |\\n| GT | only ball | 0 | 0 | 0.14 | 26.03 | 76.68 |\\n| UniTraj | only ball| 4.93 | 5.94e-04 | 0.27 | 23.69 | 115.11 |\\n\\n| Soccer Evaluation| | | | | | | | \\n|--------------------|:-----------|:--------------:|:-------------:|:-----------:|:------:|:--------:|:---------:|\\n|**Method** | **Variant** | **minADE$_{20}$** | **OOB** | **Step** | **Path-L** | **Path-D** |\\n| GT | complete | 0 | 0 | 0.52 | 112.92 | 951.00 |\\n| UniTraj | complete | 94.59 | 3.31e-06 | 4.52 | 349.73 | 2805.79 |\\n| GT | only players | 0 | 0 | 0.52 | 105.82 | 951.00 |\\n| UniTraj | only players | 87.25 | 3.43e-06 | 4.40 | 339.99 | 2724.56 |\\n| GT | only ball | 0 | 0 | 0.40 | 269.00 | 922.33 |\\n| UniTraj | only ball| 218.71 | 0 | 7.17 | 557.76 | 2058.87 |\\n\\nWe can observe that the 
evaluation results for minADE of \\\"only ball\\\" are much worse than those of \\\"complete\\\" and \\\"only players\\\". \\n\\nLooking into the ground truth (GT) of these variants, we find that in the \\\"only ball\\\" variant, the metrics Step, Path-L, and Path-D differ significantly from those of the \\\"only players\\\" variant in all three datasets. Specifically, Step measures the average change in step size, Path-L measures the average trajectory length, and Path-D measures the maximum difference in trajectory lengths. These differences indicate that the ball's movement is more dynamic and unstable, with a huge gap compared to players' movements. The ball's motion is often influenced by external forces, making it more challenging to predict than the players' movements.\\n\\nAnother important aspect is the dataset itself: the number of ball trajectories is relatively smaller than that of player trajectories, adding an additional layer of difficulty. Therefore, developing a method that balances both the ball and players would be an interesting direction to explore. The ball\\u2019s movement, to some extent, reflects the players' actions or intentions during an offensive sequence, which is critical for real-world sports analysis and should not be overlooked.\\n\\nWe will include these experiments and discussions in our final version to provide deeper insights into the sports analysis domain. Thank you once again for your valuable comments, which have inspired us to delve deeper into this topic and uncover additional findings.\"}", "{\"title\": \"Response to Reviewer uin8 (Part 3/3)\", \"comment\": \"**[Q3] Soccer Metrics and Visualizations**\\n\\n**[A3]** According to the dataset[3], a soccer field measures 105 meters in length and 68 meters in width, mapped to a pixel resolution of 3840 x 2160. 
However, the image does not perfectly align with the soccer field dimensions, as noted in the dataset, and only a rough calculation can be made with some margin of error.\\n\\n\\u00b7 94.59 pixels converted to the horizontal direction (length): approximately 2.59 meters\\n\\n\\u00b7 94.59 pixels converted to the vertical direction (width): approximately 2.98 meters\\n\\nWe have included two visualization examples from the Basketball dataset, along with analysis, in the **Appendix** of our **updated submission**. The results show that the trajectories generated by our method are more accurate and smoother compared to the baselines, further validating the effectiveness of our proposed approach. We will include more qualitative results in the final version.\\n\\n[1] Social-Gan: Socially Acceptable Trajectories with Generative Adversarial Networks. CVPR 2018. \\n[2] Trajectron++: Dynamically-Feasible Trajectory Forecasting with Heterogeneous Data. ECCV 2020. \\n[3] SoccerTrack: A Dataset and Tracking Algorithm for Soccer with Fish-eye and Drone Videos. CVPR 2022.\\n\\n***We sincerely appreciate your valuable suggestions and insightful comments. We hope our response effectively addresses your concerns.***\"}", "{\"comment\": \"Thanks for your efforts to address my concerns.\"}", "{\"summary\": \"They focus on the domain of sports and address the problem of modeling multi-agent trajectories by considering various situations in real practice, emphasizing the need for a general approach. To accommodate diverse real-world scenarios, they introduce a unified trajectory generation task that simultaneously handles multiple input situations.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a new problem (a unified trajectory generation task) along with several new datasets, with relatively comprehensive experiments and a substantial amount of work. 
They propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs, adaptable to diverse scenarios in the domain of sports games. They further extend recent State Space Models (SSMs), known as the Mamba model, into a Bidirectional Temporal Mamba (BTM) to better capture temporal dependencies.\", \"weaknesses\": \"1. Could you please try LSTM/CNN or any other backbones instead of MLP to achieve better performance?\\n\\n2. Some equations are very similar, so you can combine them together (such as Eq.7). It's unnecessary to write them again.\\n\\n3. You can also compare the FLOPs and total parameters with state-of-the-art methods to demonstrate your models\\u2019 efficiency.\", \"questions\": \"See weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer uin8 (Part 1/3)\", \"comment\": \"**Dear Reviewer uin8**,\\n\\n***We sincerely appreciate your recognition of our contributions and your constructive suggestions to improve our manuscript.***\\nBelow, we provide detailed responses to address your concerns.\\n\\n**[Weakness] Real-world Scenarios**\\n\\n**[A0]** That's a really insightful point, and we agree that addressing problems with erroneous data is a highly valuable direction, especially in the sports domain. While our current work focuses on a relatively simpler problem, we believe it still holds significant value for sports analysis.\\n\\nWe have some initial thoughts on addressing the erroneous data scenario:\\n1. We could develop a probabilistic model to represent the erroneous data and incorporate approaches to quantify the uncertainty in the locations of the agents.\\n2. 
Beyond generating trajectories, we could also output the probabilistic distributions and uncertainty measures for these generated locations, which could provide deeper insights for sports analysis.\\n\\nWe will include these points in our final version to better highlight the potential and importance of this problem.\\n\\n**[Q1] Typo**\\n\\n**[A1]** Thank you for your careful review. That's a typo, it should be \\\"w/o BTS\\\" instead. We have corrected it in the updated submission on Line 474, highlighted in blue. We will thoroughly review and polish the entire draft twice before the final version.\"}", "{\"title\": \"Response to Reviewer HqF9 (Part 1/2)\", \"comment\": \"Dear Reviewer HqF9,\\n\\n***We sincerely appreciate your recognition of our contributions and your constructive suggestions to improve our manuscript.*** \\nWe provide the following detailed responses to address your concerns.\\n\\n**[Q1] Motivation**\\n\\n**[A1]** That\\u2019s a good question. The three tasks, trajectory prediction, imputation, and spatial-temporal recovery, are related but different, with differences in input formats, objectives, and methodologies. For example, trajectory prediction focuses on forecasting future values, imputation deals with filling in missing data, and spatial-temporal recovery simultaneously reconstructs both spatial and temporal patterns. In real-world applications such as sports analytics, these scenarios often occur together, creating a strong motivation for an all-in-one approach. A unified framework would seamlessly address these overlapping demands, reducing the need for separate pipelines and improving overall efficiency. While some related methods [1][2] attempt to tackle prediction and imputation simultaneously, their masking strategies fall short in handling diverse missing patterns. 
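At the input level, the difference between the three settings comes down to which entries of a visibility mask are zero; a hedged NumPy sketch of the three mask families (all shapes, ratios, and names are illustrative assumptions, not the paper's exact masking scheme):

```python
import numpy as np

T_obs, T_pred, N = 6, 4, 3          # observed steps, future steps, agents
T = T_obs + T_pred

# Prediction: the history is visible, the future is masked.
pred_mask = np.zeros((N, T))
pred_mask[:, :T_obs] = 1

# Imputation: random entries are missing throughout the sequence.
rng = np.random.default_rng(0)
imp_mask = (rng.random((N, T)) > 0.3).astype(float)

# Spatial-temporal recovery: an entire agent is missing (spatial)
# on top of the masked future steps (temporal).
rec_mask = pred_mask.copy()
rec_mask[0, :] = 0
```

A unified model must handle all three mask families with a single architecture, which is the crux of the motivation stated above.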
Empirical results further demonstrate the superiority of our proposed method.\\n\\nWe will enhance the introduction section in our final version to further strengthen and clarify the motivation.\\n\\n**[Q2] Challenges**\\n\\n**[A2]** The challenges of developing an all-in-one method include the following:\\n\\n1. Different missing data patterns: Trajectory prediction requires extrapolation of future values, imputation often deals with random missing data, and spatial-temporal recovery involves complex dependencies in both space and time.\\n\\n2. Balancing context and forecasting: A unified model must balance local context modeling for imputation and spatial-temporal recovery with long-term forecasting for trajectory prediction, which is inherently challenging.\\n\\n3. Dynamic interaction modeling: Player interactions in sports vary significantly across datasets, requiring the model to generalize across diverse scenarios.\\n\\nThese challenges highlight why such an all-in-one approach has not been extensively explored and underscore the contributions of our work in addressing these issues. We will incorporate these discussions into the final version to emphasize the challenges.\"}", "{\"title\": \"Response to Reviewer YWxk (Part 3/3)\", \"comment\": \"**[Q3] Prediction References**\\n\\n**[A3]** Thank you for providing those excellent papers relevant to our work. We have compared our results with them in our response **[A1]** and will also cite and discuss them in the related work section.\\n\\nSpecifically, in [1], a prediction model named EqMotion is proposed, which integrates equivariant geometric and invariant pattern feature learning with an invariant interaction reasoning module, achieving state-of-the-art performance across various tasks. EqMotion is lightweight and effective for diverse motion prediction scenarios.\\n\\nMemoNet[2] is a trajectory prediction framework inspired by retrospective memory in neuropsychology. 
It utilizes memory banks to store representative past-future pairs and a trainable addresser to recall relevant instances, enabling more accurate and interpretable predictions. MemoNet achieves state-of-the-art results on multiple datasets, significantly improving prediction accuracy and diversity.\\n\\nFlowChain[3] is a normalizing flow-based model designed for fast and accurate trajectory prediction and density estimation. By leveraging conditional continuously-indexed flows (CIFs), it can achieve promising performance.\\n\\nWe will include additional references on trajectory prediction in the related work section in our final version.\\n\\n\\n[1] EqMotion: Equivariant Multi-Agent Motion Prediction with Invariant Interaction Reasoning. CVPR 2023. \\n[2] Remember Intentions: Retrospective-Memory-based Trajectory Prediction. CVPR 2022. \\n[3] Fast Inference and Update of Probabilistic Density Estimation on Trajectory Prediction. ICCV 2023. \\n\\n\\n\\n***We sincerely appreciate your valuable comments and insightful suggestions. We hope our response effectively addresses your concerns.***\"}", "{\"title\": \"Further Response to Reviewer YWxk\", \"comment\": \"Dear Reviewer YWxk,\\n\\nThank you for your valuable comments and recognition of our trajectory prediction experiments. Your feedback has greatly helped us improve our paper, and we are glad to address your concerns. We would be happy to answer any further questions you might have. \\n\\nThank you so much for your thoughtful review!\\n\\n\\n\\nBest regards,\\n\\nAuthors of Submission 4628\"}" ] }
9aIlDR7hjq
Augmented Conditioning is Enough for Effective Training Image Generation
[ "Jiahui Chen", "Amy Zhang", "Adriana Romero-Soriano" ]
Image generation abilities of text-to-image diffusion models have significantly advanced, yielding highly photo-realistic images from descriptive text and increasing the viability of leveraging synthetic images to train computer vision models. To serve as effective training data, generated images must be highly realistic while also sufficiently diverse within the support of the target data distribution. Yet, state-of-the-art conditional image generation models have been primarily optimized for creative applications, prioritizing image realism and prompt adherence over conditional diversity. In this paper, we investigate how to improve the diversity of generated images with the goal of increasing their effectiveness to train downstream image classification models, without fine-tuning the image generation model. We find that conditioning the generation process on an augmented real image and text prompt produces generations that serve as effective synthetic datasets for downstream training. Conditioning on real training images contextualizes the generation process to produce images that are in-domain with the real image distribution, while data augmentations introduce visual diversity that improves the performance of the downstream classifier. We validate augmentation-conditioning on a total of five established long-tail and few-shot image classification benchmarks and show that leveraging augmentations to condition the generation process results in consistent improvements over the state-of-the-art on the long-tailed benchmark and remarkable gains in extreme few-shot regimes of the remaining four benchmarks. These results constitute an important step towards effectively leveraging synthetic data for downstream training.
[ "Synthetic Training Datasets", "Image Generation", "Generative Models", "Diffusion" ]
https://openreview.net/pdf?id=9aIlDR7hjq
https://openreview.net/forum?id=9aIlDR7hjq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ymJe0fSz5K", "WcWk0zbfGX", "Twm7ecTciu", "NzDiXlKltk", "DfIjOJ4YIc" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1729013161312, 1730700012036, 1732394616675, 1730612316268, 1730699406969 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7317/Reviewer_nirE" ], [ "ICLR.cc/2025/Conference/Submission7317/Reviewer_CfJe" ], [ "ICLR.cc/2025/Conference/Submission7317/Authors" ], [ "ICLR.cc/2025/Conference/Submission7317/Reviewer_ySzm" ], [ "ICLR.cc/2025/Conference/Submission7317/Reviewer_mZw1" ] ], "structured_content_str": [ "{\"summary\": \"While synthetic training images generated by diffusion models can be effective for training, diversity and fidelity are challenges. Fidelity can be addressed by image conditioning with few-shot images, but many previous works fine-tune the diffusion model, which can be expensive. The authors propose a frozen alternative to increase the diversity, which conditions the diffusion model not only on few-shot images (done previously) but augmentations (novelty).\\n\\nThey evaluate this across different standard data augmentation techniques, comparing downstream training accuracy and FID to find the most effective augmentation techniques. 
They also evaluate the effects of CFG scale, compare their method to previous SOTA in long-tailed distributions, and across 4 datasets to previous few-shot SOTA by number of shots.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"A) Writing is overall quite clear.\\n\\nB) Experiments are varied across areas (long-tail, few-shot).\\n\\nC) Method section shows many qualitative examples to supplement the quantitative results in the experimental section.\\n\\nD) Some of the results show improvement.\", \"weaknesses\": \"Points are ordered roughly according to my perceived scale, with more important points being listed first.\\n\\nA) The method itself is quite simplistic from a novelty perspective (simply adding augmentations to the conditioning). I would consider this a strength, if the results were consistent (B) and strong (B, D, E, F) with a clear storyline for effective use-cases (C). However, I do not see this as being the case (see following points for details, as indicated in the corresponding parentheses).\\n\\nB) The results are mixed. Examples: 1) In table 1, the random image baseline actually has the lowest FID. I do not see this discussed, with lines 323 and 339 pointing out that \\\"the best-performing augmentation-conditioning method has one of the lowest FID scores, supporting our claim...\\\", which is misleading. 2) In table 3, unintuitive bolding hides that your method underperforms in some categories (with LDM t&i, medium is worse and few is tied). 3) In Figure 6, by 16 shots, the novel method is already underperforming on 2 datasets. Once again, the writing does not properly address this.\\n\\nC) Given the mixed results, there should be an in-depth analysis / explanation to understand when this method is most useful, but this seems to be missing. 
I want to stress that if done well, this could potentially make up for weakness B.\\n\\nD) The comparisons with Fill-Up are not clear, as \\\"Ours\\\" does worse but uses less data--it would be better to compare against Fill-Up with the same amount of data as well, otherwise you have not shown that you are beating SOTA.\\n\\nE) There seem to be some baselines missing in the few-shot section that could strengthen the context. 1) as the ResNet50 is pre-trained, it would be helpful to know the starting accuracy. 2) Figure 6 is missing the random-image baseline included in Table 1.\\n\\nF) I do not find the CFG scale experiments as adding significant value, although they take up a page in total. They are consistent with previous work, which I find unsurprising. While they don't really 'hurt' anything in themselves, they overall weaken the experimental section by taking the space of what could have otherwise been more interesting / surprising / novel results, and in my opinion they water down the impact of the experimental section.\\n\\nG) In the introduction, it is claimed that methods that fine-tune the diffusion model (Azizi 2023, Trabucco 2023, Shin 2023) are too expensive. However, this is never supported with numbers--it would make the claim stronger to quantify method costs. Especially because these methods have vastly different costs (e.g. in line 134, you claim that Shin 2023 uses textual inversion--this should be less expensive than full fine-tuning, correct? And what about methods that use PEFT?)\\n\\nH) Table 3 is misleading. The way the sections are split, the entire lower section should be compared. However, Fill-Up is a separate class with more synthetic data, which you are not comparing against. This is not very clear from the visual, and it is confusing why the highest numbers are not the ones bolded (as Fill-Up is excluded). Sometimes the bolding seems entirely wrong--e.g. 
in medium, \\\"ours\\\" is bolded, but LDM (txt and image) is clearly better, and also at comparable synthetic data counts.\", \"questions\": \"A) In the description of Figure 4, you claim in line 236 that \\\"Augmentation-conditioned generations show more visual diversity in the coloration, pose, and angle of the hamster.\\\" Could you please elaborate on what you mean? Because I do not see this as being the case. To me, 1) coloration looks consistent across all images, 2) pose only shows clear differences for CutMix and CutMix-Dropout. All others are primarily face photos staring at the camera and 3) with the same exceptions as 2, angle is primarily straight-on.\\n\\nB) In table 1, it is shown that the improvements are the most significant on the few classes--this is potentially interesting. Is there some analysis, interpretation, explanation, etc. that could be added on this subject?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper focuses on improving the effectiveness of synthetic images generated by text-to-image diffusion models for training image classification models. By introducing \\u201caugmentation-conditioning\\u201d, the authors leverage real images with data augmentations as conditioning inputs, creating synthetic images that are not only realistic but also visually diverse. This approach enhances downstream classifier performance, particularly in long-tail and few-shot classification settings, without the need for fine-tuning the diffusion model itself. The method was validated across five benchmarks, showing consistent improvements over previous techniques.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces an approach of augmentation-conditioning, which leverages real images with data augmentations to create synthetic images that are both realistic and diverse. 
This method bridges the domain gap between synthetic and real data, enhancing downstream classification performance without requiring extensive fine-tuning of the diffusion model.\\n2. The method\\u2019s effectiveness is demonstrated across multiple challenging benchmarks, including long-tail and few-shot classification tasks.\", \"weaknesses\": \"1. The technical novelty of the proposed method seems limited, as it mainly combines existing data augmentations, like Mixup, before inputting images into an existing diffusion model. More discussion of method\\u2019s novelty is necessary. Besides, to better demonstrate the effectiveness of the proposed method, it would be beneficial to consider more recent tuning-free approaches for diffusion models, such as [1]. Additional discussion and experiments comparing the superiority of the proposed method with current works would strengthen the paper.\\n\\n2. While the proposed augmentation-conditioning method focuses only on tuning-free augmentation approaches, there are low-cost tuning methods, such as [2] using LoRA, that also deliver strong performance. Why prioritize tuning-free methods when low-cost tuning options might achieve better results with only a minor increase in computational cost?\\n\\n3. The application scope of the proposed method appears limited (i.e., primarily for long-tail or few-shot classification), and the experimental validation seems insufficient.\\n\\na) In Table 1, the performance gain is more pronounced in the few-shot setting (fewer than 20 images, 55.3 -> 63.5) than in the many-shot setting (100 or more images, 72.4 -> 72.0). This suggests the method may not be as effective in many-shot contexts, even with only 100 images. A clearer explanation of this phenomenon would enhance the method\\u2019s credibility.\\n\\nb) All experiments are conducted on the ImageNet-LT dataset, which alone cannot comprehensively verify the method\\u2019s performance. 
Testing on a wider range of datasets, including general datasets like Tiny-ImageNet-200, CIFAR-100, and full ImageNet as in [1], as well as domain-specific long-tail datasets like CUB-LT and Flower-LT as in [2], would provide more robust evidence.\\n\\n[1] DIFFUSEMIX: Label-Preserving Data Augmentation with Diffusion Models. CVPR, 2024.\\n\\n[2] Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model. CVPR, 2024.\", \"questions\": \"Please refer to Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The author explores a novel method for training image generation, where the proposed approach produces in-domain images with enhanced visual diversity by conditioning the generation process on augmented real training images. This strategy is validated across five established long-tail and few-shot image classification benchmarks, demonstrating consistent improvements over state-of-the-art results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problems and the proposed method are clearly presented and easy to understand.\", \"The proposed method is effective and straightforward in practice, with extensive ablation experiments conducted to support its design.\", \"The synthesized training data demonstrates strong performance in few-shot classification.\"], \"weaknesses\": [\"In Table 3, the Fill-Up method demonstrates higher accuracy than the proposed method. Although the positive correlation between accuracy and training dataset scale is discussed in line 376, it remains unclear whether the proposed method can outperform Fill-Up. 
Given the computing constraints, I suggest the authors:\", \"Illustrate the positive relationship between accuracy and dataset scale using data synthesized by the proposed method.\", \"Provide results for Fill-Up with different, smaller synthetic dataset scales that are more feasible.\"], \"questions\": \"See above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper demonstrates that conditioning the generation process on an augmented real image and a text prompt produces effective synthetic datasets. These synthetic datasets benefit downstream tasks, particularly for long-tailed (LT) classification and few-shot classification.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The approach appears valid and effective for LT and few-shot classification, as demonstrated in the experiments. The authors also tested various augmentations.\", \"Conditioning on both the augmented image and text prompt seems effective for improving performance on classification tasks.\", \"Experiments on different values of classifier-free guidance (CFG) are interesting, especially regarding how the optimal scale varies by task.\"], \"weaknesses\": [\"The technical novelty of this paper is unclear. The concept of combining both augmented images and text prompts seems useful for LT and few-shot classification but lacks novelty. If this approach is not technically original, the paper should at least show a broad variety of downstream tasks that benefit from it, which it did not.\", \"The contribution is not clearly articulated. Although it\\u2019s evident that the synthetic dataset is effective, it\\u2019s unclear for which specific tasks it is most useful. The focus is confined to LT and few-shot classification. Could this approach also aid in other areas, such as image generation? 
Expanding the application scenarios would improve the paper\\u2019s impact.\", \"LT classification works, particularly those focused on algorithmic improvements, were not compared in the evaluation. While the paper\\u2019s approach differs by aiming to improve classification via synthetic data, it\\u2019s worth questioning if this is truly beneficial. For example, the paper mentions that fine-tuning for classification is time-consuming. However, both fine-tuning and generating a synthetic dataset have costs - generating 1.16M images seems likely to take more time than fine-tuning.\"], \"questions\": [\"In conclusion, what do you suggest as the best approach among the trials (e.g., Mixup-Dropout, Embed-Mixup-Dropout, Embed-CutMix-Dropout)? In Table 3, the last method performs best, but in Figure 8, Embed-Mixup Dropout seems to lead in many cases. Does the optimal choice depend on the task and dataset? Are there any noticeable patterns?\", \"Why did you focus specifically on LT and few-shot classification? Couldn\\u2019t this approach and synthetic dataset also benefit other discriminative downstream tasks or image generation?\", \"How long did it take to generate approximately 1 million synthetic images (as in Table 3)? Did you use T=1000 for generation, or T=50? Does the choice of timestep T affect the downstream performance (e.g., classification accuracy)?\", \"How do you support the claim that your dataset has the highest diversity? In Table 1, the FID scores are provided, but FID does not solely calculate diversity. Also, FID of your approach and the baseline (random images) are nearly the same, with the baseline slightly lower. To substantiate the claim of high diversity, perhaps an additional metric, such as recall, would be helpful.\", \"In the bottom-right graph of Figure 5, why are the performances of the three methods almost identical? 
Is there any underlying interpretation for this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9Zq8fRF4am
Generating Model Parameters for Controlling: Parameter Diffusion for Controllable Multi-Task Recommendation
[ "Chenglei Shen", "Jiahao Zhao", "Xiao Zhang", "Weijie Yu", "Ming He", "Jianping Fan" ]
Commercial recommender systems face the challenge that task requirements from platforms or users often change dynamically (e.g., varying preferences for accuracy or diversity). Ideally, the model should be re-trained after resetting a new objective function, adapting to these changes in task requirements. However, in practice, the high computational costs associated with retraining make this process impractical for models already deployed to online environments. This raises a new challenging problem: how to efficiently adapt the learning model to different task requirements by controlling model parameters after deployment, without the need for retraining. To address this issue, we propose a novel controllable learning approach via Parameter Diffusion for controllable multi-task Recommendation (PaDiRec), which allows the customization and adaptation of recommendation model parameters to new task requirements without retraining. Specifically, we first obtain the optimized model parameters through adapter tuning based on the feasible task requirements. Then, we utilize the diffusion model as a parameter generator, employing classifier-free guidance in conditional training to learn the distribution of optimized model parameters under various task requirements. Finally, the diffusion model is applied to effectively generate model parameters in a test-time adaptation manner given task requirements. As a model-agnostic approach, PaDiRec can leverage existing recommendation models as backbones to enhance their controllability. Extensive experiments on public datasets and a dataset from a commercial app indicate that PaDiRec can effectively enhance controllability through efficient model parameter generation. The code is released at https://anonymous.4open.science/r/PaDiRec-DD13e.
[ "recommender systems", "generative model", "multi-task learning" ]
Reject
https://openreview.net/pdf?id=9Zq8fRF4am
https://openreview.net/forum?id=9Zq8fRF4am
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vs7p1naciR", "vMB0zWfuCO", "sglfwXSyRl", "rI847iQYof", "pybO92WLde", "owqcdyLIVM", "od3sQt58YD", "id6jBfDQGy", "gfeLEZqhbu", "gNb04QpoNy", "g1mMTk8psg", "fm9nywhZSe", "dW7P3JIgPN", "d3QHWkUUbd", "ZKeWswXxse", "Y7lDd6uPfX", "Y5tom3O13Z", "XkWbobcCVj", "WndVUdQZDy", "WZxqY6sptJ", "VI0K5wLvDs", "U8hNbz4uVp", "QpFJnuyjkv", "PlKEx99z6a", "POBSNH5lXf", "MGzaKaE2YJ", "JaUQ2YY8jo", "JAOfWB4S2L", "G7B6mnqqXi", "EePQkFma8x", "614a0nzChb", "2gX6t39bFk", "2DazoTwx9C", "0oJ9lgKUdj" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732038556328, 1732918356606, 1733194432493, 1732039359424, 1737523632635, 1735032761626, 1732039571297, 1732039284261, 1732039693356, 1732039178159, 1732687139619, 1733194932071, 1732693766452, 1732039405381, 1733194387379, 1733198054896, 1732694765810, 1732039121039, 1732938754537, 1730652772771, 1732039058211, 1730817980079, 1732629188184, 1732698927628, 1732540977440, 1730059772516, 1732462242675, 1730444483432, 1732651270640, 1732039535169, 1732676860052, 1732038996006, 1732614468719, 1732798535394 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_gFtT" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission4319/Area_Chair_1xec" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_oQdP" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_oQdP" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_gFtT" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_8b4P" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_kFtK" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_oQdP" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_kFtK" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_oQdP" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ], [ "ICLR.cc/2025/Conference/Submission4319/Reviewer_oQdP" ], [ "ICLR.cc/2025/Conference/Submission4319/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you very much for your recognition of our work! 
Here, we respond to each of your concerns regarding our study.\\n\\n# W1: Inference Complexity\\nAs shown in Table 2, the diffusion model significantly reduces the time overhead from receiving a new task instruction to completing the model construction, **compared to traditional retraining approaches**. \\n\\nTechnically, our algorithm design has taken the inference cost into account from two aspects. The first is structuring the recommendation model as a **backbone + adapter** architecture: the diffusion model is used solely to generate the adapter parameters, which reduces the training and generation overhead of diffusion. The second is the design of the denoising model of diffusion. We stack **only 4 attention layers** as the denoising model. The details of the diffusion model have been added to the PDF file and can be found in **Appendix \\u00a7 A.13, Details of Diffusion.** Additionally, we have **added** a comprehensive analysis of the computational cost, provided in **Appendix \\u00a7 A.14 Diffusion Transformer FLOPs Calculation.**\\n\\nThe conclusion of Appendix \\u00a7 A.14 shows that our diffusion model requires **only 0.9085 TFLOPs for the entire 500-step** sampling process. Using the RTX 3090 as an example, which delivers 35.58 TFLOPS, the inference process for the diffusion model takes approximately **0.026 seconds**. Including some data storage overhead, the total time remains **within the order of seconds** (as shown in Table 2, about 2.68s for SASRec). In real-world recommendation scenarios, it is generally **acceptable** for users to wait 2-3 seconds to customize a more personalized model.\\n\\n# W2: Joint Optimization\\nIn \\u00a72 Problem Formulation and Analysis, we define controllable multi-task recommendation (**CMTR**) and distinguish it from multi-task recommendation (MTR). Under the definition of CMTR, a single task is defined as **an optimization problem with a specific set of preference weights** for multiple objectives. 
Therefore, a \\\"task\\\" under CMTR inherently involves the joint optimization of multiple objectives.\"}", "{\"comment\": \"Thank you for addressing my comments and concerns in your rebuttal. I appreciate the effort you put into clarifying my doubts and putting user group fairness into your metrics, which in my mind have made the paper more robust.\\n\\nI want to emphasize that my initial score already reflected my positive view of the paper's quality and potential. While the rebuttal has resolved my concerns and improved the paper, there were no substantial changes that would warrant increasing the score further. As such, I have decided to maintain my original evaluation. Thank you for authors contributions to ICLR community.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe kindly inquire whether our responses have adequately addressed your concerns. If there are any remaining misunderstandings or uncertainties, we would greatly value the opportunity to discuss and clarify them further with you. \\n\\nBest regards, \\nThe Authors\"}", "{\"comment\": \"# W2: Embedding Size:\\nOur primary motivation is **controllability**. Simply improving the accuracy of recommendation models is not the primary goal of this work. Widely recognized frameworks like RecBole [1] set the default embedding size to 64. In fact, we maintain a consistent embedding size across all models to **ensure fairness** among baselines, as different embedding sizes would **alter the adapter dimensions**, potentially **impacting the diffusion model's ability** to learn the adapter parameters effectively. Nevertheless, based on your suggestion, we conducted additional experiments using SASRec as the backbone of the Movielens dataset with other embedding sizes (embedding size = 128, 256). The experimental results are presented below. \\n\\n| Emb. size | Acc. 
weight | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |\\n|-----------|-------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|\\n| | **Acc.(NDCG@10)** |\\n| 64 | | 0.0857 | 0.1327 | 0.1643 | 0.1917 | 0.2276 | 0.2717 | 0.3204 | 0.3656 | 0.3945 | 0.4083 | 0.4165 |\\n| 128 | | 0.0961 | 0.1137 | 0.1328 | 0.1596 | 0.1911 | 0.2373 | 0.3013 | 0.3541 | 0.3841 | 0.4066 | 0.4154 |\\n| 256 | | 0.0883 | 0.1108 | 0.1275 | 0.1593 | 0.1967 | 0.2482 | 0.3232 | 0.3694 | 0.3943 | 0.4158 | **0.4174** |\\n| | **Div.(a-NDCG@10)** |\\n| 64 | | 0.1184 | 0.1175 | 0.1169 | 0.1161 | 0.1152 | 0.1129 | 0.1087 | 0.1022 | 0.0940 | 0.0856 | 0.0787 |\\n| 128 | | 0.1186 | 0.1193 | 0.1192 | 0.1189 | 0.1182 | 0.1162 | 0.1117 | 0.1043 | 0.0960 | 0.0871 | 0.0779 |\\n| 256 | | **0.1221** | 0.1221 | 0.1222 | 0.1217 | 0.1206 | 0.1183 | 0.112 | 0.1036 | 0.0952 | 0.0858 | 0.0773 |\\n\\nWhen training focuses solely on accuracy (i.e., acc. weight = 1.0), NDCG@10 achieves its highest value when the embedding size is set to 256. Similarly, when training focuses solely on diversity (i.e., div. weight = 1.0), a-NDCG@10 reaches its peak at an embedding size of 256. These results suggest that the current setting of embedding size = 64 has room for improvement. Based on these embedding sizes, we conducted subsequent diffusion-based conditional training and generation. The results are shown below. 
The figure has been added in **Appendix \\u00a7 A.8 The Embedding Size Problem.**\\n\\n\\n| Backbone | Algorithm | Avg.HV | Pearson r-a | Pearson r-d |\\n|-----------------|-----------|--------|--------------|-------------|\\n| 128 emb_size SASRec | Retrain | 0.2160 | - | - |\\n| 128 emb_size SASRec | CMR | 0.2132 | 0.9676 | *0.9782 |\\n| 128 emb_size SASRec | Soup | *0.2228 | *0.9824 | 0.9626 |\\n| 128 emb_size SASRec | MMR | 0.1802 | 0.8958 | 0.9031 |\\n| 128 emb_size SASRec | PadiRec | **0.2334** | **0.9971** | **0.9970** |\\n| - | LLM | 0.0625 | -0.1028 | 0.0805 |\\n\\n\\n| Backbone | Algorithm | Avg.HV | Pearson r-a | Pearson r-d |\\n|----------------------|-----------|---------|-------------|-------------|\\n| 256 emb_size SASRec | Retrain | 0.2214 | - | - |\\n| 256 emb_size SASRec | CMR | 0.2120 | 0.9504 | 0.9610 |\\n| 256 emb_size SASRec | Soup | **0.2286** | *0.9833 | *0.9680 |\\n| 256 emb_size SASRec | MMR | 0.1793 | 0.9133 | 0.8457 |\\n| 256 emb_size SASRec | PadiRec | *0.2262 | **0.9926** | **0.9959** |\\n| - | LLM | 0.0625 | -0.0754 | 0.0754 |\\n\\n\\n\\n[1] Towards a More User-Friendly and Easy-to-Use Benchmark Library for Recommender System. SIGIR, 2023.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper introduces a new controllable learnign approach for multi-task recommendation, which allows the customization and adaptation of recommendation model parameters to new task requirements without retraining. The authors have performed experiments on three datasets to demonstrate the effectiveness of the proposed method. However, for this paper, the technical novelty of the proposed method seems limited. Some settings of the proposed method (e.g., predefined preference weights) also limits the application of the proposed method in real scenarios. Moreover, the experimental evaluation setting in this paper also raises some concerns. 
For example, the evaluation dataset Movielens-1M is relatively small for validating the effectiveness of the proposed method, and the Movielens dataset is also not suitable for evaluating sequential recommendation models.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal, the authors discussed 1) the suitability of applying the Movielens datasets for evaluating sequential recommendation models, 2) the scalability of the proposed method for handling more optimization objectives, 3) the inference cost of the proposed method, 4) the backbone models, 5) the technical novelty of the proposed method compared with existing works on diffusion models. The authors have provided additional experiments, and addressed some concerns of the reviewers. However, the concerns regarding the technical novelty and the experimental evaluation have not been addressed.\"}", "{\"comment\": \"# W4: Adding Fairness metrics\\nThank you for your valuable suggestions. We have added user group fairness as a controllable metric. Specifically, we use the NDCG GAP@10 between male and female groups as the metric (a smaller value indicates greater fairness in NDCG@10 between the two groups) to evaluate the impact of fairness weights on the metric under different settings. \\n\\nGiven the large number of possible weight combinations for the three objectives, we explored the impact of fairness through a controlled variable approach:\\n\\n1. Investigating the impact of fairness weight on other metrics (NDCG@10 and a-NDCG@10).\\n\\n2. Examining the effect of fairness weight on its metric (NDCG-GAP@10).\\n\\nThe experimental results are presented below. The table records the performance of three metrics\\u2014accuracy (NDCG@10), diversity (a-NDCG@10), and fairness (NDCG-GAP@10)\\u2014under two conditions: **unfair** (fairness weight = 0.1) and **fair** (fairness weight = 1), as accuracy weight varies (constrained diversity weight = 1 - accuracy weight).\\n\\n\\n| | Acc. 
weight | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |\\n|-------|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| Fair | NDCG@10 | 0.1062 | 0.1217 | 0.1288 | 0.1531 | 0.1760 | 0.2300 | 0.2959 | 0.3482 | 0.3814 | 0.4013 | 0.4072 |\\n| | a-NDCG@10 | 0.1147 | 0.1151 | 0.1158 | 0.1147 | 0.1150 | 0.1136 | 0.1100 | 0.1027 | 0.0959 | 0.0854 | 0.0798 |\\n| | NDCG_GAP@10 | 0.0234 | 0.0197 | 0.0161 | 0.0114 | 0.0036 | 0.0018 | 0.0119 | 0.0285 | 0.0197 | 0.0162 | 0.0245 |\\n| UnFair| NDCG@10 | 0.1048 | 0.1103 | 0.1185 | 0.1518 | 0.1800 | 0.2375 | 0.3034 | 0.3455 | 0.3807 | 0.4029 | 0.4054 |\\n| | a-NDCG@10 | 0.1170 | 0.1168 | 0.1162 | 0.1164 | 0.1158 | 0.1130 | 0.1085 | 0.1019 | 0.0934 | 0.0844 | 0.0765 |\\n| | NDCG_GAP@10 | 0.0236 | 0.0180 | 0.0127 | 0.0187 | 0.0045 | 0.0096 | 0.0267 | 0.0366 | 0.0313 | 0.0311 | 0.0293 |\\n\\nThe line charts for the above metrics are presented in the PDF file, **Appendix \\u00a7 A.9 More Objectives (Accuracy, Diversity, and Fairness), Figure 15**. The following conclusions can be drawn:\\n\\n1. Figures 15(a) and 15(b) show that the unfair and fair conditions have **minimal impact** on both **NDCG@10** and **a-NDCG@10** individually, as well as on the trade-off relationship between these two metrics.\\n\\n2. Figure 15(c) demonstrates that the unfair and fair conditions **have an impact** on **NDCG-GAP@10**. Across multiple settings of accuracy weights, the NDCG-GAP@10 under the fair condition **is consistently smaller** than that under the unfair condition. This indicates that the control under the **fair/unfair** condition is effective.\\n\\nTo explore the **fine-grained control** of fairness weight, we investigated the performance of PadiRec on the three objectives under fairness weights of 0.1, 0.4, 0.7, and 1.0 when accuracy weight is set to 0.6 and 0.7 (at these points, both NDCG@10 and alpha-NDCG@10 show reasonably good performance). 
The results are shown in the table below (we have added it to **Appendix \\u00a7 A.9 More Objectives (Accuracy, Diversity, and Fairness)**):\\n\\n\\n| Acc. weight | Fair. weight | NDCG@10 | a-NDCG@10 | NDCG_GAP@10 |\\n|-------------|--------------|---------|-----------|----------|\\n| 0.6 | 0.1 | 0.3034 | 0.1085 | 0.0267 |\\n| | 0.4 | 0.2910 | 0.1096 | 0.0253 |\\n| | 0.7 | 0.2945 | 0.1094 | 0.0175 |\\n| | 1.0 | 0.2959 | 0.1100 | 0.0119 |\\n|-------------|--------------|---------|-----------|----------|\\n| 0.7 | 0.1 | 0.3455 | 0.1019 | 0.0366 |\\n| | 0.4 | 0.3448 | 0.1019 | 0.0299 |\\n| | 0.7 | 0.3395 | 0.1045 | 0.0286 |\\n| | 1.0 | 0.3482 | 0.1027 | 0.0285 |\", \"conclusion\": \"As the **fairness weight increases**, accuracy (NDCG@10) and diversity (a-NDCG@10) show minimal fluctuation, while NDCG-GAP@10 **steadily decreases**, indicating improved fairness. This demonstrates that even under multiple objectives, PadiRec exhibits strong controllability.\"}", "{\"comment\": \"Thank you for taking the time to review our paper, especially given your busy schedule. Your suggestions are highly detailed, and your perspectives are insightful! However, some parts of your feedback reflect misunderstandings of our work. We have provided detailed explanations for each of your concerns and hope to clarify any misconceptions.\\n\\n\\n\\n# W1: Evaluation of Movielens \\nValuable suggestions! The paper you mentioned [1] by Sun et al. reveals an interesting phenomenon and highlights some **limitations of the MovieLens** dataset. However, it was **published on October 18, 2024**, **after the submission deadline for ICLR 2025 (October 1, 2024)**. Therefore, the insights from this work cannot be considered a reference for our ICLR submission.\\n\\nAdditionally, many classic sequential recommendation papers [2, 3, 4, 5] have included experiments based on the MovieLens dataset. 
Among them, the methods **[2, 3] serve as our backbones.** To **align with these studies**, we also adopt the MovieLens dataset. Besides, this paper also presents experiments on the **Amazon dataset and real-world industrial datasets,** further demonstrating the generalizability of PadiRec. However, we will take your feedback regarding the MovieLens evaluation into account in our future work. Thanks again for your **valuable** suggestions!\\n\\n[1] Our Model Achieves Excellent Performance on MovieLens: What Does It Mean? TOIS, 2024 \\n\\n[2] Self-Attentive Sequential Recommendation. ICDM, 2018 \\n\\n[3] TiSASRec: Time Interval Aware Self-Attention for Sequential Recommendation. WSDM, 2020 \\n\\n[4] BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. CIKM, 2019 \\n\\n[5] Generative Sequential Recommendation with GPTRec. SIGIR, 2023\"}", "{\"comment\": \"# Q1: Pearson r-d\\nIn this paper, \\\"Pearson r-d\\\" refers to the Pearson correlation coefficient between the retraining method and other methods with respect to the variable \\\"diversity\\\". The Pearson correlation coefficient is commonly used to **measure the correlation between two variables** and has been widely applied in previous works [1]. Given that Controllable Multi-Task Recommendation (CMTR) tasks **focus on controllability on multiple objectives**, and there is a significant lack of research in this area, we borrowed the concept of Hypervolume from Multi-Task Recommendation (MTR) tasks to measure the overall performance across multiple objectives. As for **controllability**, we expect the algorithm's performance under given preference weights to closely resemble that of the optimal (i.e., retraining on those preference weights) approach. Therefore, we use the Pearson correlation coefficient as a metric to assess controllability. 
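To make this controllability measure concrete, here is a minimal hand-rolled sketch of computing Pearson r-d; the metric values below are hypothetical placeholders, not numbers from the paper:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two metric sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Diversity (e.g., a-NDCG@10) of the retrained "oracle" models versus a
# candidate method, evaluated at the same set of preference weights.
# Placeholder values for illustration only.
retrain_diversity = [0.115, 0.113, 0.108, 0.096, 0.080]
method_diversity  = [0.117, 0.112, 0.110, 0.094, 0.078]

print(round(pearson_r(retrain_diversity, method_diversity), 4))
```

An r close to 1 means the method's diversity tracks the retraining trend across preference weights, which is what the rebuttal treats as the controllability signal.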
This is an area that still requires further improvement, and we look forward to the development of more effective and refined metrics in the future.\\n\\n[1] Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression. CVPR, 2023.\\n\\n[2] Analyzing and Evaluating Correlation Measures in NLG Meta-Evaluation. arXiv, 2024.\\n\\n# Q2: Details of Diffusion Training\\nThis is a key point. In calculating the reported efficiency, the diffusion training time is not included in the results. This is because the time reported here refers to the time **from receiving a new requirement** (i.e., new preference weights) **to** the completion of the **model construction**. The diffusion model in **Padirec** requires only a single training session before deployment, with **no additional training** needed during the inference process. After deployment, Padirec can **directly generate** a customized model based on the preference weights. In contrast, traditional optimization methods usually require **retraining** the model from scratch according to the new preference weights, which is quite time- and computation-consuming.\"}", "{\"comment\": \"# W2: Inference Cost of Diffusion\\nAs shown in Table 2, the diffusion model significantly reduces the time overhead, from receiving a new task instruction to completing the model construction, compared to traditional retraining approaches. \\n\\nAs for the inference cost of diffusion, our algorithm design has taken this into account from two aspects. One is structuring the recommendation model as a **backbone + adapter architecture**. The diffusion model is used solely to generate the adapter parameters, with the goal of reducing the training and generation overhead of diffusion. The other is the design of the denoising model of diffusion. We **only stack 4 attention layers** to act as the denoising model. 
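A minimal, framework-free sketch of the backbone + adapter split just described; all names and sizes here are illustrative assumptions, not the paper's actual configuration:

```python
# Sketch of the "backbone + adapter" design discussed above: the large
# backbone is trained once and left untouched, while only the small adapter
# parameter vector is produced (and swapped) per preference weight.
class AdapterizedRecModel:
    def __init__(self, backbone_params, adapter_params):
        self.backbone_params = backbone_params  # frozen, trained once
        self.adapter_params = adapter_params    # regenerated on demand

    def swap_adapter(self, new_adapter_params):
        """Test-time customization: only the adapter is replaced."""
        self.adapter_params = new_adapter_params

backbone = [0.0] * 1_000_000   # stands in for the large backbone (hypothetical size)
adapter = [0.0] * 4_096        # stands in for the small adapter (hypothetical size)
model = AdapterizedRecModel(backbone, adapter)
model.swap_adapter([0.1] * 4_096)
print(len(model.backbone_params), len(model.adapter_params))
```

Because only the small adapter vector is regenerated per preference weight, the diffusion model's output dimension stays small regardless of backbone size, which is the cost argument the rebuttal makes.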
The details of the diffusion model have been added to the PDF file and can be found in **Appendix \\u00a7 A.12, Details of Diffusion**. Additionally, we have included a comprehensive analysis of the computational cost, provided in **Appendix \\u00a7 A.13, Diffusion Transformer FLOPs Calculation.**\\n\\n**Appendix \\u00a7 A.13** shows that our diffusion model requires **only 0.9085 TFLOPs for the entire 500-step** sampling process. Using the RTX 3090 as an example, which achieves 35.58 TFLOPS, the inference process for the diffusion model takes approximately **0.026 seconds**. Including some data storage overhead, the total time remains **within the order of seconds** (as shown in Table 2, about 2.68 s for SASRec). In real-world recommendation scenarios, a single recommendation typically occurs within milliseconds. However, waiting 2-3 seconds to customize a more personalized model is generally **acceptable** for users.\\n\\n# Q1: Conditioning Strategies\\nIn fact, our experiments (Figure 5) show that while there are differences in performance across different strategies, these differences are not significant. When applied to other recommendation scenarios, there may be no need to differentiate between strategies, or they could be treated as hyperparameters to be selected through experimentation.\"}", "{\"title\": \"Regarding the ICLR topics and offline evaluation\", \"comment\": \"Thanks for your response.\\n\\nThe ICLR community allows, encourages, and supports applied research papers. For instance, ICLR 2025 explicitly lists topics including, but not limited to:\\n\\n> + Applications in audio, speech, robotics, neuroscience, biology, or any other field.\\n\\nExamples of applied works include those focusing on recommendation systems and retrieval [1, 2, 3, 4, 5], as well as studies effectively integrating diffusion models with various subfields based on their unique characteristics [6, 7, 8, 9]. 
Additionally, other trending applied research directions, such as LLM agents [10, 11, 12], are also actively encouraged by ICLR.\\n\\nIt is crucial to emphasize that our paper focuses on **controllable multi-task recommendation**, specifically addressing **instant responsiveness and control** based on **dynamic changes in recommendation objectives.** Parameter diffusion aligns perfectly with this goal, enabling **test-time model customization** via the \\\"one instruction, one model\\\" paradigm. Furthermore, **conducting only offline evaluations** on widely recognized datasets for recommendation algorithms is a standard practice [1, 2, 4, 5]. Moreover, the third dataset used in this study\\u2014industrial data\\u2014is an **online dataset** collected from a real-world production environment, further validating the practical value of our method.\\n\\nTherefore, both in terms of **topic** and **evaluation**, this paper is **well-aligned with ICLR's scope** and standards. If there are additional technical concerns, we will do our best to address them.\\n\\n\\n\\n[1] Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond. ICLR, 2024\\n\\n[2] Federated Recommendation with Additive Personalization. ICLR, 2024\\n\\n[3] Sentence-level Prompts Benefit Composed Image Retrieval. ICLR, 2024\\n\\n[4] LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation. ICLR, 2023\\n\\n[5] StableDR: Stabilized Doubly Robust Learning for Recommendation on Data Missing Not at Random. ICLR, 2023\\n\\n[6] Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation. ICLR, 2024\\n\\n[7] DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation. ICLR, 2024\\n\\n[8] Multi-Source Diffusion Models for Simultaneous Music Generation and Separation. ICLR, 2024\\n\\n[9] Training-free Multi-objective Diffusion Model for 3D Molecule Generation. 
ICLR, 2024\\n\\n[10] AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation. ICLR, 2024\\n\\n[11] ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate. ICLR, 2024\\n\\n[12] Plug-and-Play Policy Planner for Large Language Model Powered Dialogue Agents. ICLR, 2024\"}", "{\"title\": \"I keep the original score\", \"comment\": \"I appreciate the authors' response, but I do not think my key concerns have been solved. Overall, I evaluated this work based on how it advances the field.\"}", "{\"title\": \"Regarding dataset and our focus\", \"comment\": \"Thank you for your feedback.\\n\\nWe appreciate your concerns regarding the collection methods of public datasets. However, we believe this discussion somewhat **exceeds the scope of our work**. We would like to reiterate that this study primarily focuses on controllable multi-task recommendation, rather than sequential recommendation. The user behavior sequences are included in experiments solely as user features, and our **focus lies on the adaptive adjustment and control of model parameters** in response to **dynamic changes in recommendation objectives** based on parameter diffusion. The exploration of user sequence behavior patterns is not within the scope of this study, and the user behavior sequences could be replaced by other user features (e.g., collaborative filtering features).\\n\\nAdditionally, regarding offline evaluation, we **do not merely emphasize \\\"improving offline accuracy metrics\\\"** as the reviewer noted. Instead, we argue that at test time, improving accuracy alone is insufficient. It is crucial to enable the model to **adapt to dynamic changes in platform or user requirements across multiple metrics,** which is the **core topic of this work**. Offline testing is an essential step for any model being deployed in practice, as it is widely recognized to correlate positively with online performance. 
Furthermore, as the reviewer suggested, the **third dataset used in this study**\\u2014industrial data\\u2014is an **online dataset** collected from a real-world production environment, further validating the practical value of our proposed method.\\n\\nFinally, classic sequential recommendation works [2, 3, 4, 5] utilized MovieLens. Some submissions to ICLR 2025 [1] have used sequence models on MovieLens and received high scores. **Moreover, [6] points out the limitations of MovieLens but still advocates evaluating on more datasets, including MovieLens.** Therefore, using MovieLens is not a valid reason for rejection. To reiterate, we employed **THREE(!) datasets** in our experiments. Our focus is not on improving accuracy but on **CONTROLLABILITY(!)**. We look forward to engaging in further technical discussions.\\n\\nOnce again, thank you for your feedback. We welcome further academic discussions relevant to the theme of this study.\\n\\n\\n\\n[1] Preference Diffusion for Recommendation. Under review at ICLR 2025\\n\\n[2] Self-Attentive Sequential Recommendation. ICDM, 2018 \\n\\n[3] TiSASRec: Time Interval Aware Self-Attention for Sequential Recommendation. WSDM, 2020 \\n\\n[4] BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. CIKM, 2019 \\n\\n[5] Generative Sequential Recommendation with GPTRec. SIGIR, 2023\\n\\n[6] Our Model Achieves Excellent Performance on MovieLens: What Does It Mean? TOIS, 2024\"}", "{\"comment\": \"# W3: Sequential Recommendation Task is Crucial for Multi-task Learning.\\nRecall is the foundation of ranking tasks. For example, **neglecting diversity (and other factors) during recall** can result in a more homogeneous set of candidates, which inherently **limits the upper bound** of diversity achievable in ranking.\\n\\nOn the other hand, sequential recommendation models can indeed serve both recall and ranking. 
Many **research works** distinguish the two primarily based on whether the candidates in the experimental setting are a **universal set** or a **subset**. Notably, numerous studies adopt this setting and utilize sequential recommendation models for ranking tasks [1, 2, 3].\\n\\nWhat's more, Padirec is a framework designed to enhance the controllability of downstream models. Our focus is on controlling the downstream models rather than on a specific recommendation model itself.\\n\\n[1] gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling. RecSys, 2023\\n\\n[2] Self-Attentive Sequential Recommendation. ICDM, 2018\\n\\n[3] GRU4Rec: Session-based Recommendations with Recurrent Neural Networks. ICLR, 2016\\n\\n[4] Large Language Models are Zero-Shot Rankers for Recommender Systems. ECIR, 2024\\n\\n# W4: We Make the Recommendation Model Not More Powerful But Controllable\\nWe quite agree that having a more powerful sequential recommendation model may not be that important. In fact, our focus is **not on endlessly improving the recommendation accuracy**, but rather on **expanding the controllability** of the downstream recommendation model. That is, enabling the platform to avoid repeatedly retraining recommendation models based on frequently changing business metrics, and instead shifting towards a task where models can be quickly generated with the required capabilities based on specific instructions. This paradigm offers several advantages, such as eliminating training costs and accelerating the response time to new instructions. This is a bold attempt on our part, and we are fortunate that it aligns with your thoughts.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe kindly inquire whether our responses have adequately addressed your concerns. If there are any remaining misunderstandings or uncertainties, we would greatly value the opportunity to discuss and clarify them further with you. 
\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"Summary\", \"comment\": \"Dear Reviewer,\\n\\nIt seems there is still a misunderstanding, and we would like to summarize our points here in the hope of providing a clearer understanding.\\n\\n1. **The reviewer believes that the MovieLens dataset is unsuitable for sequential recommendation.**\\n\\n - **Our opinion:**\\n - In fact, we have validated the effectiveness of our algorithm **across multiple datasets**, not just MovieLens.\\n - The paper discussing the limitations of MovieLens was published after our ICLR submission.\\n - As the reviewer mentioned, this issue pertains to the broader field of Recommender Systems. The MovieLens dataset has been widely used in many classic works on sequential recommendation. We are simply following the settings widely adopted in the field to provide **a fair and consistent reference**.\\n\\n2. **The reviewer believes that improving offline accuracy metrics has diminishing returns.**\\n\\n - **Our opinion:**\\n - Our primary contribution is not merely improving accuracy but rather enhancing **controllability**. \\n We argue that improving accuracy alone at test time is insufficient. Instead, it is critical to enable the model to adapt dynamically to changes in platform or user requirements across multiple metrics (e.g., **diversity, accuracy, fairness**). This adaptability is the core focus of our work.\\n\\nIn summary, we believe that the reviewer\\u2019s concerns about the dataset and the evaluation of the entire field may overshadow our primary contribution to advancing controllability in the recommendation community.\\n\\nBest regards, \\nThe Authors\"}", "{\"title\": \"see below\", \"comment\": \"\\u201cFinally, classic sequential recommendation works [2, 3, 4, 5] utilized MovieLens. \\u201d\\n\\nIn the field of Recsys, there are a lot of inappropriate evaluations even if we only consider the offline settings (see Recsys or SIGIR's reproducibility track). 
Using MovieLens for sequential recommendation is a VERY big problem. I have explained the reasons above.\\n\\n \\\"the third dataset used in this study\\u2014industrial data\\u2014is an online dataset collected from a real-world product environment, further validating the practical value of our proposed method.\\\"\\n\\nI have not seen any description of an online setting such as an A/B test. It seems that the dataset is still an offline dataset, but from an industry system. Your Amazon data is also an industrial dataset. Using an industrial dataset does not mean you evaluate your model in the online setting.\", \"btw\": \"I did not find details about how you split the data into training and testing sets. Please clarify your data partitioning strategy.\"}", "{\"comment\": \"Thank you for your time, effort, and valuable suggestions! We have provided detailed explanations and supplemented the experiments based on your suggestions. We hope this is helpful to you, and once again, we sincerely appreciate your effort in reviewing our paper despite your busy schedule.\\n\\n# W1: More Utilities:\\nThank you for your valuable suggestions. We have added user group fairness as a controllable metric. Specifically, we use the NDCG GAP@10 between male and female groups as the metric (a smaller value indicates greater fairness in NDCG@10 between the two groups) to evaluate the impact of fairness weights on the metric under different settings. \\n\\nGiven the large number of possible weight combinations for the three objectives, we explored the impact of fairness through a controlled variable approach:\\n\\n1. Investigating the impact of fairness weight on other metrics (NDCG@10 and a-NDCG@10).\\n\\n2. Examining the effect of fairness weight on its metric (NDCG-GAP@10).\\n\\nThe experimental results are presented below. 
The table records the performance of three metrics\\u2014accuracy (NDCG@10), diversity (a-NDCG@10), and fairness (NDCG-GAP@10)\\u2014under two conditions: **unfair** (fairness weight = 0.1) and **fair** (fairness weight = 1), as accuracy weight varies (constrained diversity weight = 1 - accuracy weight).\\n\\n\\n| | Acc. weight | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |\\n|-------|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| Fair | NDCG@10 | 0.1062 | 0.1217 | 0.1288 | 0.1531 | 0.1760 | 0.2300 | 0.2959 | 0.3482 | 0.3814 | 0.4013 | 0.4072 |\\n| | a-NDCG@10 | 0.1147 | 0.1151 | 0.1158 | 0.1147 | 0.1150 | 0.1136 | 0.1100 | 0.1027 | 0.0959 | 0.0854 | 0.0798 |\\n| | NDCG_GAP@10 | 0.0234 | 0.0197 | 0.0161 | 0.0114 | 0.0036 | 0.0018 | 0.0119 | 0.0285 | 0.0197 | 0.0162 | 0.0245 |\\n| UnFair| NDCG@10 | 0.1048 | 0.1103 | 0.1185 | 0.1518 | 0.1800 | 0.2375 | 0.3034 | 0.3455 | 0.3807 | 0.4029 | 0.4054 |\\n| | a-NDCG@10 | 0.1170 | 0.1168 | 0.1162 | 0.1164 | 0.1158 | 0.1130 | 0.1085 | 0.1019 | 0.0934 | 0.0844 | 0.0765 |\\n| | NDCG_GAP@10 | 0.0236 | 0.0180 | 0.0127 | 0.0187 | 0.0045 | 0.0096 | 0.0267 | 0.0366 | 0.0313 | 0.0311 | 0.0293 |\\n\\n\\nThe line charts for the above metrics are presented in the PDF file, **Appendix \\u00a7 A.9 More Objectives (Accuracy, Diversity, and Fairness), Figure 15**. The following conclusions can be drawn:\\n\\n1. Figures 15(a) and 15(b) show that the unfair and fair conditions have **minimal impact** on both **NDCG@10** and **a-NDCG@10** individually, as well as on the trade-off relationship between these two metrics.\\n\\n2. Figure 15(c) demonstrates that the unfair and fair conditions **have an impact** on **NDCG-GAP@10**. Across multiple settings of accuracy weights, the NDCG-GAP@10 under the fair condition **is consistently smaller** than that under the unfair condition. 
This indicates that the control under the **fair/unfair** condition is effective.\\n\\nTo explore the **fine-grained control** of fairness weight, we investigated the performance of PadiRec on the three objectives under fairness weights of 0.1, 0.4, 0.7, and 1.0 when accuracy weight is set to 0.6 and 0.7 (at these points, both NDCG@10 and alpha-NDCG@10 show reasonably good performance). The results are shown in the table below (we have added it to **Appendix \\u00a7 A.9 More Objectives (Accuracy, Diversity, and Fairness)**):\\n\\n\\n\\n| Acc. weight | Fair. weight | NDCG@10 | a-NDCG@10 | NDCG_GAP@10 |\\n|-------------|--------------|---------|-----------|----------|\\n| 0.6 | 0.1 | 0.3034 | 0.1085 | 0.0267 |\\n| | 0.4 | 0.2910 | 0.1096 | 0.0253 |\\n| | 0.7 | 0.2945 | 0.1094 | 0.0175 |\\n| | 1.0 | 0.2959 | 0.1100 | 0.0119 |\\n|-------------|--------------|---------|-----------|----------|\\n| 0.7 | 0.1 | 0.3455 | 0.1019 | 0.0366 |\\n| | 0.4 | 0.3448 | 0.1019 | 0.0299 |\\n| | 0.7 | 0.3395 | 0.1045 | 0.0286 |\\n| | 1.0 | 0.3482 | 0.1027 | 0.0285 |\", \"conclusion\": \"As the **fairness weight increases**, accuracy (NDCG@10) and diversity (a-NDCG@10) show minimal fluctuation, while NDCG-GAP@10 **steadily decreases**, indicating improved fairness. This demonstrates that even under multiple objectives, PadiRec exhibits strong controllability.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for your thoughtful feedback and for recognizing our efforts in addressing your comments and concerns. We appreciate your positive view of our work, and we are glad that our responses clarified your doubts and made the paper more robust.\\n\\nWe would like to highlight that we carefully addressed each of your concerns, and we did not find specific requests for \\\"substantial changes\\\" in your initial review. 
Nonetheless, we remain committed to making further improvements if needed, and we look forward to any further discussion or suggestions you may have.\\n\\nThank you again for your time and contributions to the ICLR community.\\n\\nBest regards\"}", "{\"summary\": \"The paper introduces PaDiRec, Parameter Diffusion for Controllable Multi-Task Recommendation, a framework designed to adapt recommender systems to changing task requirements without retraining. Traditional recommendation models often struggle to adjust dynamically. PaDiRec addresses this issue using a parameter diffusion model that can modify model parameters at test time, making it a model-agnostic solution adaptable to various backbone structures. Experiments on public datasets and an industrial dataset show that PaDiRec achieves high controllability and performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"PaDiRec enables real-time, on-demand changes in task preferences without requiring expensive retraining, making it ideal for industry. The use of diffusion models as parameter generators is also novel in the context of recommender systems, offering robust parameter generation that captures task-specific nuances.\\n\\nThe paper provides well-designed experiments and several datasets. According to the results, PaDiRec integrates well with diverse backbone models, proving its applicability across different recommendation systems and settings as well as the capability to achieve near real-time responses with significantly reduced latency compared to traditional retraining.\", \"weaknesses\": \"The approach relies heavily on predefined utilities which may not fully generalize to tasks with complex, non-linear objectives. 
This dependency could limit its flexibility in handling multi-faceted objectives beyond accuracy and diversity.\\n\\nWhile PaDiRec performs faster than retraining, the diffusion process may still be computationally intensive, especially in environments with limited processing power. More details on efficiency optimization could enhance its feasibility for larger-scale systems.\", \"questions\": \"How should practitioners choose between the various conditioning strategies (e.g., Pre&Post, Ada-Norm) when applying PaDiRec to new recommendation environments? Are there guidelines or heuristics to assist in selecting the optimal strategy?\\n\\nHow would PaDiRec handle non-standard or non-linear preference metrics? Would additional diffusion model tuning be required?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# W3-2: Preference Weights Limitation\\nIn \\u00a72, we have limited the scope of the scenario to various combinations of preference weights. However, the model can be generalized to customize for **any combination of weights** within a continuous space during inference. Regarding more open-ended model customization (such as customizing the model based on user-provided natural language), we leave this as a direction for future work.\\n\\n# Q1: Guiding Principles\\nPadiRec provides greater subjective control over the model for both users and companies, allowing preference weights to be specified arbitrarily. However, excessive autonomy may lead to more confusion. To address this, we propose a practical approach by transforming continuous value controls into discrete click-based controls. 
Specifically, we could categorize preference weights into broad categories (e.g., high precision, low diversity) to make it easier for users to understand and interact.\\n\\n\\n\\n# Q2: Non-standard or Non-linear Preference Metrics.\\nFirstly, in the **\\\"Adapter Tuning\\\"** step of PadiRec, the multi-objective loss function combines multiple objective functions through a linear weighted sum of the preference weights. However, in practice, we allow the use of various optimization methods (not limited to linear combinations) because our primary goal is to obtain adapters that **align with the given preference weights.** These adapters are then used to train the conditional diffusion model with adapter - preference weight pairs.\\n\\nSecondly, for non-standard or non-linear preferences, we can train an additional encoder to **map these preferences into the preference weight space**. This approach avoids the need for additional training of the diffusion model.\"}", "{\"summary\": \"This paper proposes a new learning method called \\\"parameter diffusion\\\" for controllable multi-task recommendation, which allows customization and adaptation of recommendation model parameters to new task requirements without retraining. The proposed method uses existing optimized model parameters to generate new ones that are adapted to different task requirements through parameter diffusion. 
The main contribution of this paper is to provide a new learning method that enhances the controllability and flexibility of multi-task recommendation systems.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"- Tackles the practical issue of dynamic task requirements\\n- Allows customization and adaptation of recommendation model parameters to new task requirements without retraining\\n- Novel combination of diffusion models with adapter tuning\", \"weaknesses\": \"- High inference complexity of diffusion models may not meet real-time recommendation requirements. No detailed analysis of computational overhead during inference\\n\\n- Parameter Optimization Strategy: \\n  - Relies solely on single-task optimized parameters, ignores potential benefits of multi-task joint optimization\\n\\n- Limited Generalization and Flexibility: \\n  - The method is primarily tested on only two specific tasks (accuracy and diversity); unclear scalability to more utilities/tasks.\\n  - Predefined preference weights limit the model's adaptability to unexpected scenarios\", \"questions\": \"Are there any standard or guiding principles that can help us choose appropriate task requirements (preference weights)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Concerns regarding the dataset and the topic.\", \"comment\": \"Thank you for your response. We believe there are still some misunderstandings about our paper.\\n\\n# Regarding the dataset:\\n\\n**First,** we must reiterate that the article discussing the limitations of MovieLens [1] was **published after our submission.** \\n\\n**Second,** nonetheless, we have carefully reviewed the article [1]. In fact, our approach **aligns with** the final suggestions of this paper [1]. 
The original text from **\\u00a7 5.4 Is It a Good Idea to Evaluate RecSys Models on MovieLens** is as follows:\\n\\n> On the other hand, MovieLens stands out as one of **the most popular datasets** in the field of recommender systems [2, 3, 4, 5]. Results derived from MovieLens serve as valuable references for researchers to validate their implementations... In short, while **providing results on MovieLens for reference purposes is beneficial**, it should not serve as a strong justification for the effectiveness of a proposed model. Models should be evaluated on a variety of datasets, **not relying solely on the MovieLens dataset.**\\n\\nOur paper includes experiments **conducted on THREE datasets**, **not solely on MovieLens**, which **aligns with** the suggestions in the paper [1]. It **cannot be overlooked** that we conducted experiments on **THREE datasets**. If you believe there are additional datasets beyond the **THREE datasets**, please let us know, and we will do our best to address your concerns. \\n\\nFinally, the discussion and statement regarding the MovieLens evaluation **have been added to the PDF file in \\u00a7 Appendix A.15 Discussion on used Movielens Evaluation.** However, it is **important to note** again, **our focus** is **not on the patterns of sequential data** but rather on **providing controllability** to the backbone models.\\n\\n# Regarding the topic of the paper:\\n\\nThe sequential recommendation model serves only as a backbone in our work. The goal of our algorithm is to enhance the **controllability** of **any given backbone**. More specifically, our paper primarily **addresses** the challenge of **inefficient retraining of deployed models at test time,** thereby improving the models' ability to **adapt dynamically to changes in task requirements**.\\n\\n\\n\\n[1] Our Model Achieves Excellent Performance on MovieLens: What Does It Mean? TOIS, 2024 \\n\\n[2] Self-Attentive Sequential Recommendation. 
ICDM, 2018 \n\n[3] TiSASRec: Time Interval Aware Self-Attention for Sequential Recommendation. WSDM, 2020 \n\n[4] BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. CIKM, 2019 \n\n[5] Generative Sequential Recommendation with GPTRec. SIGIR, 2023\"}", "{\"title\": \"Our focus and dataset setting\", \"comment\": \"Thank you for your response. Our algorithm is compatible with any recommendation model and aims to **provide controllability for downstream models**. Specifically, given a requirement, it can quickly respond and **customize a recommendation model** to **meet that requirement**. The user behavior sequences are included in the experiments solely as user features and are not our focus. While we acknowledge the limitations of MovieLens, they do not affect the core goal of our algorithm: **controlling downstream models**. Furthermore, the other two datasets consist of **real click behavior data** (unlike the rating data in MovieLens), which effectively demonstrate the applicability of our method.\n\nFor data processing, we used the ReChorus2.0 framework (https://github.com/THUwangcy/ReChorus) to standardize the data. The data was split using the Leave-One-Out approach, and detailed information has been updated in the **PDF file** **Appendix \u00a7 A 3.2 Dataset Settings.**\"}", "{\"title\": \"Rebuttal deadline is approaching.\", \"comment\": \"Dear Reviewers,\n\nThank you for your hard work in reviewing our paper and providing valuable suggestions. As the discussion deadline approaches, with less than two days remaining (**Nov. 26**), we **have not received any responses** to our rebuttal yet. We completely understand the demands of your busy schedules, but as authors, we find ourselves anxiously awaiting your feedback.\n\nWe would greatly appreciate knowing whether our responses have adequately addressed your concerns. 
If you have any further questions or require clarification before the deadline, **please do not hesitate to reach out**. Your dedicated time and effort in reviewing our work are sincerely appreciated.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"summary\": \"This paper introduces PaDiRec, a neural network diffusion-based approach for generating adaptable model parameters for controllable multi-task recommendation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents an intriguing application of diffusion models for multi-task recommendation.\\n\\n2. The paper is well-written and organized.\\n\\n3. Code availability enhances the paper\\u2019s reproducibility.\", \"weaknesses\": \"Here are the polished points based on the content of the paper:\\n\\n1. This paper primarily represents an application of the paper of \\\"Neural Network Diffusion\\\" [1] in recommender systems, specifically building on the framework from the referenced neural network diffusion work. While the authors cite this paper, they do not provide a thorough enough introduction to or comparison with this prior work in the related work section, which does not fully credit the original contribution. It is suggested that this work be submitted to an application-focused conference.\\n\\n2. This paper emphasizes practical applications rather than algorithmic development, making online testing essential. However, all experiments are conducted on offline datasets, raising concerns about the real-world usability of the proposed approach.\\n\\n3. The backbone models used, such as SASRec, are not state-of-the-art in recommendation systems, which could limit the effectiveness and generalizability of the proposed approach when compared with modern backbone models in recent two years such as [2].\\n\\n4. 
The paper\\u2019s focus is limited to accuracy and diversity, yet recommendation systems are often evaluated by a broader set of metrics, including serendipity, fairness, and more. Expanding the testing metrics could more robustly validate the method's effectiveness across a range of evaluation criteria.\\n\\n[1] Wang, Kai, et al. \\\"Neural network diffusion.\\\" *arXiv preprint arXiv:2402.13144* (2024).\\n\\n[2] Yue, Zhenrui, et al. \\\"Linear recurrent units for sequential recommendation.\\\" *Proceedings of the 17th ACM International Conference on Web Search and Data Mining*. 2024.\", \"questions\": \"1. Are there references supporting the use of Pearson r-d, or is this application new in this paper? If it\\u2019s new, could other established methods be used for evaluation instead?\\n\\n2. In calculating the reported efficiency, is the diffusion training time included into the results? Additionally, is the diffusion model designed to require only a single training session that allows for permanent deployment, or does it need periodic retraining?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revised pdf and summary.\", \"comment\": \"Dear reviewers:\\n\\nThanks for your hard work, your suggestions really help us to improve our paper. 
We revised our paper according to your suggestions (**revised parts are marked as blue**) and **re-uploaded our modified pdf**.\", \"we_will_summarize_our_changes_as_follows\": \"- We conducted experiments on the **SOTA backbone**, detailed in Appendix A.7.\n- We supplemented the paper with an analysis of **embedding size** in SASRec, presented in Appendix A.8.\n- We conducted experiments on **additional controllable objectives** in Appendix A.9 (expanding from accuracy and diversity to include fairness).\n- We provided a more **detailed structure** of the diffusion model in Appendix A.13 and derived the **computation cost** for the diffusion inference process in Appendix A.14.\n\nFinally, we emphasize that our paper primarily addresses the significant yet often overlooked challenge of inefficient retraining of deployed models at test time, thereby enhancing the models' ability to adapt dynamically to changes in task requirements. We have rigorously formulated this problem as Controllable Multi-Task Recommendation (CMTR) and, for typical recommendation systems, proposed PadiRec\u2014a well-generalized and efficient algorithm that achieves \"one instruction, one model\" instead of relying on costly retraining. We kindly ask you to consider our contributions to the recommender system community, particularly in advancing the critical aspect of **controllable recommendation**.\n\nIf you have any further questions, please feel free to ask before the deadline (**Nov. 26**), and we will respond as soon as possible.\n\nBest,\n\nAuthors\"}
Therefore, they proposed a parameter generation method for controllable multi-task recommendation, which effectively generates task-specific model parameters using a generative model. Specifically, they proposed PaDiRec by formulating an objective function consistent with task-specific preference weights and then using adapter tuning to fine-tune model parameters. They trained a diffusion model to learn the conditional distribution of these optimized adapter parameters. Finally, they evaluated PaDiRec using different sequential recommendation backbones to produce recommendations.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"(1) This paper studies an interesting research problem that is common in the field of recommender systems.\n(2) The authors proposed a series of methods, including adapter modules, accuracy + diversity objective functions, adapter tuning, etc.\n(3) The authors proved their point through a large number of experiments.\", \"weaknesses\": \"(1) The evaluation presented in this paper raises some concerns. First, the authors assessed the sequential recommendation task using the MovieLens dataset. I would strongly advise caution, as the user behaviors reflected in the MovieLens data are not genuine viewing behaviors but rather rating behaviors. Many users rate movies they have not actually watched, often based on prior knowledge. A closer examination of the dataset reveals that some users can rate five or even ten movies within just a few minutes. The rating time in MovieLens is not the watching time! Also see [1] by Sun et al.\n \n(2) The hyperparameters may not have been fine-tuned carefully. While the MovieLens dataset does exhibit high sequential patterns, it is important to note that, as mentioned, the user behaviors are not reflective of actual viewing. The sequential pattern of recommendation systems employed by MovieLens can be easily detected using models like SASRec. 
I observed that the authors used an embedding size of 64; to my knowledge, SASRec's optimal embedding size is significantly larger than this in MovieLens. Utilizing a non-optimal embedding size for baseline models undermines the claim that the proposed model is superior. It\\u2019s essential to recognize that in deep learning, simply comparing models with the same embedding size is not a fair approach. Some deep learning models can effectively utilize larger embedding sizes, while others may reach saturation with smaller embedding sizes.\\n\\n(3) Multi-task learning is often applied in the ranking stage, while sequential recommendation typically functions as a recall algorithm. I do not think it makes much sense to achieve better performance in the sequential recommendation task.\\n\\n(4) In fact, having a more powerful sequential recommendation model may not be that important, as such improvements generally do not bring any benefits in the ranking stage or in online systems. Many sequential patterns learned by deep Transformer models are more closely related to recommendation exposure patterns than to actual user behavior patterns. Therefore, I do not think that the solution proposed in this paper has a particular impact on the current recommender system landscape.\\n\\n[1] Our Model Achieves Excellent Performance on MovieLens: What Does It Mean? TOIS2024\", \"questions\": \"no\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you to the authors for their diligent response, which addressed some of my concerns. However, I still have reservations regarding the novelty of the applied work in the ICLR community and the feasibility of the offline evaluation. Consequently, I will maintain my original ratings.\"}", "{\"comment\": \"Thank you for your time and effort. Some of your suggestions are highly valuable and worth adopting. 
However, there are a few misunderstandings regarding the contributions of our paper and its relevance to the conference topic. We have provided detailed explanations and conducted additional experiments to address each of your concerns, and we hope this helps to clarify any confusion!\\n\\n# W1&W2: Contribution\\nThe work [1] provided us with significant inspiration, as it demonstrated the critical role of diffusion models in high-performance parameter reconstruction. However, in our work, the diffusion model serves as just one tool for parameter generation. It is necessary for us to restate our **key contributions** and innovations: (1) the **definition** of Controllable Multi-Task Recommendation (**CMTR**), (2) the \\\"backbone + task-specific adapter\\\" **structure** in recommendation models, and (3) the use of conditional training to achieve the **\\\"one instruction, one model\\\" paradigm**. We acknowledge that our paper does not include A/B testing results, due to the company's complex processes and confidentiality requirements. Nevertheless, we conducted tests using **real clickstream data** collected from July 24, 2024, to August 24, 2024, as presented in the paper (**Table 1**), which reflects a certain level of consistency with the online tests.\\n\\nAdditionally, similar methodologies have already been adopted in transfer learning, as shown in work [2] published at ICLR 2024. This supports the relevance of adapting new technologies to different domains, **aligning with the conference topic**. Furthermore, our motivation is strongly rooted in industrial scenarios where business objectives frequently change. Compared to retraining models upon receiving new business goals, PadiRec **eliminates retraining costs** and **shortens the response time** from receiving a new objective **to conducting a new model**, as shown in Table 2 of our paper. \\n\\n[1] Neural network diffusion. 
_arXiv preprint arXiv:2402.13144_ (2024).\n\n[2] Spatio-temporal few-shot learning via diffusive neural network generation. ICLR, 2024\n\n# W3: SOTA Backbone\n\n\nThanks for reminding us about the optimal backbone. It's important to clarify that PadiRec does **not focus on improving the accuracy** of sequential recommendation models. Instead, it provides a framework that can adapt to any downstream recommendation model, **with the goal of enabling customized recommendation models based on requirements without the need for retraining.** Nevertheless, we have supplemented our experiments with the SOTA backbone [1] and will include the relevant references and results in the main text. The experimental results are shown below. The trend figure with the backbone LRURec has been added in **Appendix \u00a7 A.7 SOTA Backbone (LRURec).**\n\n\n| Backbone | Algorithm | Avg.HV | Pearson r-a | Pearson r-d |\n|----------|-----------|--------|--------------|-------------|\n| LRURec | Retrain | **0.2205** | - | - |\n| LRURec | CMR | 0.1887 | *0.9416 | *0.9551 |\n| LRURec | Soup | 0.1459 | 0.7902 | 0.8616 |\n| LRURec | MMR | 0.1911 | 0.8690 | 0.8043 |\n| LRURec | PadiRec | *0.2126 | **0.9977** | **0.9983** |\n| - | LLM | 0.0625 | -0.0922 | 0.09499 |\n\n\n[1] Linear recurrent units for sequential recommendation. WSDM, 2024.\"}", "{\"title\": \"Thanks\", \"comment\": \"\\\"First, we must reiterate that the article discussing the limitations of MovieLens [1] was published after our submission.\\\"\n\nEven without this paper, I still believe that researchers in this field should know what kind of tasks MovieLens can do. It is a rating prediction dataset, where users were paid to rate movies. Many users can rate over 10 movies in 1-2 minutes. Rating sequences should not be considered as users' real watching behavior sequences. 
The sequential patterns you captured come from the original MovieLens exposure strategy.\n\nThe following comments only represent personal opinions, and you can ignore them if you disagree.\n\n1\uff09Sequential recommendation on offline datasets has inherent limitations. The patterns we observe are largely artifacts of the original recommendation algorithms rather than true user behavior, which rarely follows strict sequences. This task was usually called session-based recommendation in the past, which acknowledges that user preferences tend to remain stable within short time windows.\n\n2\uff09Further improvements in offline accuracy metrics offer diminishing returns for these sequential tasks. While I understand academic researchers' limited access to online datasets, the field has reached a plateau in terms of offline performance metrics.\n\nOverall, I don\u2019t think this paper is important to the Recsys community.\"}", "{\"comment\": \"# W3-1: Scalability to More Objectives\nThank you for your valuable suggestions. We have added user group fairness as a controllable metric. Specifically, we use the NDCG GAP@10 between male and female groups as the metric (a smaller value indicates greater fairness in NDCG@10 between the two groups) to evaluate the impact of fairness weights on the metric under different settings. \n\nGiven the large number of possible weight combinations for the three objectives, we explored the impact of fairness through a controlled variable approach:\n\n1. Investigating the impact of fairness weight on other metrics (NDCG@10 and a-NDCG@10).\n\n2. Examining the effect of fairness weight on its own metric (NDCG-GAP@10).\n\nThe experimental results are presented below. 
The table records the performance of three metrics\\u2014accuracy (NDCG@10), diversity (a-NDCG@10), and fairness (NDCG-GAP@10)\\u2014under two conditions: **unfair** (fairness weight = 0.1) and **fair** (fairness weight = 1), as accuracy weight varies (constrained diversity weight = 1 - accuracy weight).\\n\\n\\n| | Acc. weight | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |\\n|-------|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| Fair | NDCG@10 | 0.1062 | 0.1217 | 0.1288 | 0.1531 | 0.1760 | 0.2300 | 0.2959 | 0.3482 | 0.3814 | 0.4013 | 0.4072 |\\n| | a-NDCG@10 | 0.1147 | 0.1151 | 0.1158 | 0.1147 | 0.1150 | 0.1136 | 0.1100 | 0.1027 | 0.0959 | 0.0854 | 0.0798 |\\n| | NDCG_GAP | 0.0234 | 0.0197 | 0.0161 | 0.0114 | 0.0036 | 0.0018 | 0.0119 | 0.0285 | 0.0197 | 0.0162 | 0.0245 |\\n| UnFair| NDCG@10 | 0.1048 | 0.1103 | 0.1185 | 0.1518 | 0.1800 | 0.2375 | 0.3034 | 0.3455 | 0.3807 | 0.4029 | 0.4054 |\\n| | a-NDCG@10 | 0.1170 | 0.1168 | 0.1162 | 0.1164 | 0.1158 | 0.1130 | 0.1085 | 0.1019 | 0.0934 | 0.0844 | 0.0765 |\\n| | NDCG_GAP@10 | 0.0236 | 0.0180 | 0.0127 | 0.0187 | 0.0045 | 0.0096 | 0.0267 | 0.0366 | 0.0313 | 0.0311 | 0.0293 |\\n\\n\\nThe line charts for the above metrics are presented in the PDF file, **Appendix \\u00a7 A.9 More Objectives (Fairness), Figure 15**. The following conclusions can be drawn:\\n\\n1. Figures 15(a) and 15(b) show that the unfair and fair conditions have **negligible impact** on both **NDCG@10** and **a-NDCG@10** individually, as well as on the trade-off relationship between these two metrics.\\n\\n2. Figure 15(c) demonstrates that the unfair and fair conditions **have an impact** on **NDCG_GAP@10**. Across multiple settings of accuracy weights, the NDCG-GAP@10 under the fair condition **is consistently smaller** than that under the unfair condition. 
This indicates that the control under the **fair/unfair** condition is effective.\n\nTo explore the **fine-grained control** of fairness weight, we investigated the performance of PadiRec on the three objectives under fairness weights of 0.1, 0.4, 0.7, and 1.0 when accuracy weight is set to 0.6 and 0.7 (at these points, both NDCG@10 and alpha-NDCG@10 demonstrate reasonably good performance). The results are shown in the table below (we have added it to **Appendix \u00a7 A.9 More Objectives (Accuracy, Diversity, and Fairness)**):\n\n\n| Acc. weight | Fair. weight | NDCG@10 | a-NDCG@10 | NDCG_GAP@10 |\n|-------------|--------------|---------|-----------|----------|\n| 0.6 | 0.1 | 0.3034 | 0.1085 | 0.0267 |\n| | 0.4 | 0.2910 | 0.1096 | 0.0253 |\n| | 0.7 | 0.2945 | 0.1094 | 0.0175 |\n| | 1.0 | 0.2959 | 0.1100 | 0.0119 |\n|-------------|--------------|---------|-----------|----------|\n| 0.7 | 0.1 | 0.3455 | 0.1019 | 0.0366 |\n| | 0.4 | 0.3448 | 0.1019 | 0.0299 |\n| | 0.7 | 0.3395 | 0.1045 | 0.0286 |\n| | 1.0 | 0.3482 | 0.1027 | 0.0285 |\", \"conclusion\": \"As the **fairness weight increases**, accuracy (NDCG@10) and diversity (a-NDCG@10) remain almost unchanged, while NDCG-GAP@10 **steadily decreases**, indicating improved fairness. This demonstrates that even under multiple objectives, PadiRec exhibits strong controllability.\"}", "{\"title\": \"Thanks\", \"comment\": \"I have reviewed the authors' response, but I still have concerns. Problems remain, like using MovieLens for sequential recommendation - I strongly recommend consulting the creators of MovieLens to understand why it is not suitable for sequential recommendation tasks. I am not convinced by appeals to the existing literature; Recsys models were not properly evaluated in much of the literature.\nMore broadly, I do not see how this work significantly advances the field given its overall contribution. 
While I respect the editor's final decision, I stand by my original score because I do not think the paper's contribution is sufficient to be accepted in ICLR.\"}", "{\"title\": \"Follow-up on Concerns\", \"comment\": \"Dear Reviewer,\\n\\nWe kindly ask if our responses have addressed your concerns. If there are still any misunderstandings or uncertainties, we would greatly appreciate the opportunity to further discuss and clarify them with you.\\n\\nBest,\\nAuthors\"}" ] }
9YhocG0o2l
TOMVALLEY: EVALUATING THE THEORY OF MIND REASONING OF LLMS IN REALISTIC SOCIAL CONTEXT
[ "Yang Xiao", "Jiashuo WANG", "Qiancheng Xu", "Changhe Song", "Chunpu Xu", "Yi Cheng", "Wenjie Li", "Pengfei Liu" ]
As large language models (LLMs) are increasingly involved in human society, some studies try to evaluate LLMs' capability of theory of mind (ToM), which is about the understanding and reasoning of others' mental states and possible actions. However, these previous works simplify the ToM capability required in real social contexts during their evaluations. This can be reflected in three aspects: (1) most evaluations focus on a **static mental state** after several social scenarios while ignoring the changes of mental states across different scenarios; (2) they mainly consider **independent mental states**, however different kinds of mental states (beliefs, intentions, and emotions) and actions can influence one another in our real life; (3) there is an **absence of social settings and character profiles** in their evaluation, even though humans can effortlessly obtain and utilize this information in ToM reasoning processes. This lack can underestimate the abilities of LLMs. This paper aims to evaluate LLMs' ToM capability in closer alignment with a realistic social context. Correspondingly, we propose a new benchmark, named **ToMValley**, which alleviates the limitations mentioned above of previous works. Specifically, the benchmark is constructed using a framework that includes four steps: social background determination, mental state sketch, social scenario design, and rule-based question generation. Overall, there are 1100 social contexts and 78100 questions about characters' mental states. The quality of the benchmark is manually verified. Additionally, we evaluate ten popular LLMs on **ToMValley**. Experimental results suggest that LLMs' performances are significantly inferior to human levels by 11\%. Subsequent investigation indicates that LLMs are ineffective at interpreting alterations in mental states across social scenarios. 
Furthermore, we observe that LLMs are incapable of addressing compositional questions that necessitate multi-hop reasoning within the social context.
[ "Theory of Mind", "Benchmark", "Social Reasoning", "Large Language Models", "Reasoning" ]
https://openreview.net/pdf?id=9YhocG0o2l
https://openreview.net/forum?id=9YhocG0o2l
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sTU4WqEoBa", "Ttjxe30F2B", "TgF7wXTryc", "T0JfTfBdy5", "NOkrBhFtBY", "Erc9SyTl7g", "7oxWtCgVhj", "3dz3vC7mGg" ], "note_type": [ "official_review", "comment", "official_comment", "official_review", "official_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1730651369915, 1732696086746, 1732688131626, 1730708164887, 1730578941527, 1730043702495, 1729202776720, 1731822767637 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12447/Reviewer_hdht" ], [ "ICLR.cc/2025/Conference/Submission12447/Authors" ], [ "ICLR.cc/2025/Conference/Submission12447/Reviewer_hdht" ], [ "ICLR.cc/2025/Conference/Submission12447/Reviewer_JF8n" ], [ "ICLR.cc/2025/Conference/Submission12447/Reviewer_sGhK" ], [ "ICLR.cc/2025/Conference/Submission12447/Reviewer_TrER" ], [ "ICLR.cc/2025/Conference/Submission12447/Reviewer_UvAb" ], [ "ICLR.cc/2025/Conference/Submission12447/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a novel benchmark for evaluating the Theory of Mind (ToM) capabilities of large language models (LLMs). It conducts a comprehensive evaluation of ten popular LLMs. The experimental section provides an in-depth analysis and discussion on various aspects, including the compositional reasoning abilities of LLMs in ToM and comparisons with human performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents a novel benchmark for evaluating the ToM capabilities of LLMs, which takes into account the mental states of characters within real social contexts and the dependencies among fine-grained mental states.\\n\\nThe types of questions are intriguing, covering understanding, influence, and transformation.\\n\\nThe experiments include an in-depth analysis of combinatorial reasoning involved in ToM and middle scenarios.\", \"weaknesses\": \"The design of character profiles has been mentioned in the OpenToM literature [1]. 
The five constructed scenarios, when connected, are quite similar to a long conversation, as in FanToM [2], where ToM capabilities of LLMs are also assessed in a dialog format, and the characters' belief states evolve as the conversation progresses. (I acknowledge that this paper's settings are more grounded in real social contexts, distinguishing it from FanToM.)\\n\\nFor influence-type questions, such as Scenario1->Scenario2, would the model's performance significantly improve if only these two scenarios were provided? Does context length greatly impact the model's performance?\\n\\nIn scenarios resembling real social contexts, interactive evaluation may be a better approach than directly feeding conversations into LLMs.\\n\\n[1] Xu H, Zhao R, Zhu L, et al. OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models[J]. arXiv preprint arXiv:2402.06044, 2024.\\n\\n[2] Kim H, Sclar M, Zhou X, et al. FANToM: A benchmark for stress-testing machine theory of mind in interactions[J]. EMNLP 2023.\", \"questions\": \"How are the options specifically designed, and how is the model's performance calculated?\\n\\nIn Table 3, belief, emotion, and intention all include influence-type questions. Figure 1 provides an example of how belief can influence emotion. So, what kind of relationships are relied upon in the design of influence questions for emotion and intention?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Thanks for your suggestions and questions\\uff01\"}", "{\"title\": \"reply to author\", \"comment\": \"Thanks to the author for the reply.\\nMost of my questions are already relatively clear. 
Considering the innovation, I will maintain my current score.\"}", "{\"summary\": \"The paper introduces TOMVALLEY, a benchmark designed to evaluate LLMs' Theory of Mind (ToM) reasoning. Authors claim that TOMVALLEY addresses limitations of existing benchmarks by incorporating dynamic and diverse mental states across multiple scenarios and detailed character profiles within specific social contexts. This benchmark includes 1100 social contexts with 78,100 questions about mental states and evaluates LLMs on their ability to reason about beliefs, emotions, intentions, and actions over time. They run evaluation by feeding 71 questions within a single prompt and measure their performance by parsing out the 71 answers from the output response. They use this procedure even in the CoT setup. Results show that models are lagging behind human performance by 11% and CoT does not help.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Incorporating dynamic mental states and multiple dimensions is essential for measuring holistic ToM reasoning.\", \"weaknesses\": [\"The primary weakness of this paper is that all evaluations are conducted by presenting 71 questions within a single prompt (Figure 20). This setup, which is also used for CoT evaluation with claims that CoT does not enhance performance, introduces numerous confounding variables, making it unsuitable for benchmarking performance accurately. For example, results from the questions in the beginning will definitely impact later responses. Moreover, CoT is not suited for coming up with a batch of answers. This is probably why the CoT does not show performance improvement. Since all of the results and analyses are based on this setup, I find it difficult to correctly interpret them. 
I strongly encourage the authors to rerun their experiments on each question independently.\", \"The benchmark is entirely generated by an LLM, but human validation was conducted on only 5% of the dataset\\u2014a very limited proportion. Since LLMs are known to struggle with theory of mind tasks, human validation is strongly recommended, especially given the complexity of the benchmark design, which involves multiple steps in the generation pipeline. I would suggest running at least 33% of the data. Ideally, I would encourage 100% of the data.\"], \"questions\": [\"Some of the large figures could have been better if they were tables with a few rows. Why is Figure 3 a figure?\", \"\\u201cMeanwhile, ToM reasoning in dialogues has seldom been investigated in previous works.\\u201d \\u2192 There are many existing works, such as\", \"https://aclanthology.org/2023.emnlp-main.890/\", \"https://aclanthology.org/2024.sigdial-1.63/\", \"https://aclanthology.org/2024.acl-short.26/\", \"I encourage the authors to compare their benchmark with these as well.\", \"I spotted many grammar issues, please fix them in the updated draft.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new theory of mind benchmark for LLMs called ToM Valley. The benchmark aims to address the gap of ToM being tested in limited, constrained scenarios by procedurally generating diverse scenarios, personas and conversations. The authors query for 4 types of inferences, belief, emotion, intention, and action with 3 types of questions. The authors validate their stimuli by asking grad students to rate them. The authors test 9 LLMs with 0-shot and 0-shot cot prompting. They find that the leading LLM trails human performance on the task. 
Through various ablations, the authors point to LLMs being bad at paying attention to information in the middle of the context and struggling at multi-hop compositional reasoning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The paper is well motivated and addresses a gap between abstract tests of ToM and realistic contexts in which LLMs are used.\", \"I liked the diversity of scenarios present in the evaluation\", \"The authors have a diverse set of LLMs that they test.\", \"I liked the analysis of including and excluding personas, and also the failing in the middle analysis!\"], \"weaknesses\": [\"Abstract:\", \"Can be more specific about what exactly the authors are testing with the benchmark.\", \"In the introduction, it is not clear what exactly the questions are testing; this could be made much clearer. The Introduction and the abstract are very vague. They could be much more specific: 1) What are you testing 2) How exactly is the data generated 3) How do you validate the data 4) How do LLMs perform? Maybe using figure 1 as a reference would be useful\", \"Section 3: I am not convinced that the authors address the problem of circularity. The evaluation of a conversation being coherent to the mental states present and the dynamics of emotions in a social scenario is quite involved. The model generating these scenarios should then have a rich model of human mental states (this has not been validated in previous work).\", \"Step 1: The way the profiles and names are collected could be moved from the appendix to the main text. Why are occupations pooled by gender? These could reinforce stereotypes.\", \"Step 2: How are initial and final mental states decided upon?\", \"Step 3: 1) How do you ensure that the dialogues generated by an LLM are faithful to the mental states? 
2) How do you ensure that the conversation obeys the dynamics of the mental states (decided in step 2) and that these correspond to true dynamics of human mental states? While the authors collect human judgements for these, are coherence, authenticity and dynamism mostly dependent on an LLM being good enough at understanding social contexts?\", \"How are the answer options for the questions generated? How is the correct answer picked? How do you ensure that all the other options are incorrect? For example, \\u201chow does \\u2026 change across all social scenarios?\\u201d seems like a broad open-ended question.\", \"In general, how do you ensure that there is no problem of circularity? That the capability being evaluated is not being used while generating the data?\", \"Confidence intervals are missing throughout the paper\", \"Why are Myers-Briggs personality traits used? These lack scientific validity and soundness. Why aren't alternatives like the Big 5 / OCEAN used?\", \"An actual example like in Figure 16 would make the paper much clearer.\", \"It is not clear what precise capabilities the benchmark is testing and why the construct is valid.\", \"The limitations are buried in the appendix, and should be addressed in the main draft.\", \"How many annotations per question do you collect? For both human experiments?\"], \"minor\": \"It would be nice to see other closed source models like Gemini, Claude! 
But this is not necessary.\\n\\nIn the title \\u201crealistic social context\\u201d \\u2192 \\u201crealistic social contexts\\u201d\", \"line_250\": \"toM \\u2192 ToM\", \"line_301\": \"five 5-Likert \\u2192 the five questions on a 5 point Likert scale\\n\\nLine 789 qomplexity \\u2192 complexity\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Overall, TOMVALLEY builds upon previous studies, notably synthesizing elements from OpenToM and BigToM. The dataset combines features such as plot, character profiles, relationships (inspired by OpenToM), and elements of social location and dynamic mental states (from BigToM). Like these works, TOMVALLEY also uses LLMs to generate richer scenes from provided information, using these scenes as labeled synthetic data for testing. While TOMVALLEY integrates these aspects, the approach and methodology align closely with earlier work, raising questions about its degree of novelty.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors have developed detailed character profiles that serve as the basis for generating dialogue, which adds depth and context to the interactions. Based on the appendix, TOMVALLEY covers a broader range of scenarios and settings than previous ToM datasets, making it a more comprehensive resource. The dialogue generation also appears carefully constructed, with prompts that result in good-quality conversations.\\n\\n2. Unlike previous datasets that often include changes in objective factors, TOMVALLEY is fully focused on capturing and evaluating characters' mental states, with a stronger emphasis on emotions and psychological aspects. While OpenToM also explores character profiles, it primarily focuses on how these profiles influence characters' actions. 
In contrast, TOMVALLEY concentrates more deeply on mental states, making it a distinct contribution in this area compared to earlier works.\", \"weaknesses\": \"1. While ToM is an interesting and promising field, TOMVALLEY largely builds on prior work, offering incremental improvements rather than a completely new approach. The overall structure and evaluation strategy closely resemble past studies, making this contribution feel less innovative.\\nAs I mentioned, the dataset combines features such as plot, character profiles, relationships (like OpenToM), and elements of social location and dynamic mental states (like BigToM). Similar to these works, TOMVALLEY also uses LLMs to generate richer scenes from provided structured information, using these scenes as labeled synthetic data for testing (where the generated scene serves as the input, and the structured information as the label). The evaluation method is also very similar to BigToM.\\nWhile following the choices of previous works is reasonable, the authors overstate their contribution by claiming: \\u201cprevious works (1) most evaluations focus on a static mental state after several social scenarios, ignoring the changes in mental states across different scenarios; (2) they mainly consider independent mental states, whereas different kinds of mental states (beliefs, intentions, and emotions) and actions can influence one another in real life; (3) social settings and character profiles are absent in their evaluations, even though humans can effortlessly obtain and utilize this information in ToM reasoning processes.\\\"\\n\\n2. Though TOMVALLEY has created a large dataset, its reliance on synthetic data introduces inevitable noise. The authors evaluated answers across five criteria, with around 90% of responses meeting each criterion individually. However, it would be helpful to know how many answers meet all five criteria simultaneously, as this would provide a more holistic view of response quality. 
(See questions below)\\n\\n3. Additionally, the design of certain questions can be ambiguous. For instance, in a dialogue context, a question like \\u201cWhat is Angela Hwang's belief?\\u201d could be unclear without explicit contextual information, making it difficult for users to discern the exact belief being asked about. (See questions below)\\n\\n4. The analysis of model limitations also feels somewhat surface-level. It suggests that models struggle due to issues like \\u201clost in the middle\\u201d phenomena and a lack of necessary ToM reasoning abilities. While these are valid points, they echo discussions already present in prior ToM research referenced by TOMVALLEY. Adding fresh insights or diving deeper into unique findings from this specific dataset could make the analysis more impactful and distinctive.\", \"questions\": \"1. While human annotators assessed the quality of generated social scenarios and sampled questions, there doesn\\u2019t seem to be an evaluation criterion to check if the answers align with the generated scenarios. Could the authors elaborate on why this alignment check wasn\\u2019t included?\\n\\n2. The paper provides evaluations for each of the five criteria individually. However, it would be valuable to understand how many scenarios and question-answer pairs meet all five criteria simultaneously. This holistic assessment could give a more comprehensive measure of quality. Do the authors have data on the percentage of scenes and Q&A pairs that satisfy all dimensions?\\n\\n3. Some questions, like \\u201cWhat is Angela Hwang\\u2019s belief?\\u201d, seem ambiguous without clear context. This could impact the clarity and effectiveness of the dataset, but changing those questions at this stage would be expensive. 
How do the authors plan to address this issue?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces TOMVALLEY, a new benchmark designed to evaluate LLM's theory of mind (ToM) capabilities in realistic social contexts. TOMVALLEY provides 1,100 diverse social contexts and 78,100 questions about characters' mental states, constructed through a systematic framework that includes social background determination, mental state sketching, social scenario design, and rule-based question generation. The authors manually verified the quality of the benchmark to ensure its reliability. They evaluated ten popular LLMs using TOMVALLEY and found that their best performance lagged behind human levels by 11%. The results indicate that LLMs struggle to interpret changes in mental states across scenarios, providing insights into the current limitations of LLMs in understanding complex social interactions.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The authors present TOMVALLEY, a new benchmark that realistically mirrors social contexts. It includes a wide range of social settings, character profiles, and relationships between characters. This diversity is achieved through a systematic construction framework, making the benchmark both comprehensive and relevant to real-world scenarios.\\n2. The authors have manually verified the design of the benchmark to ensure its quality and reliability. They not only provide a human performance baseline for comparison but also conduct a thorough quality assessment.\\n3. The authors evaluate 10 LLMs on the benchmark, offering a lot of data to understand the reasoning of different LLMs.\\n4. The authors offer interesting analyses that shed new light on the area. 
For example, they explore how the presence or absence of character profiles affects model performance, reveal that LLMs Fail in the Middle Scenario, and demonstrate the models' limitations in handling compositional questions that require rigorous multi-hop reasoning to reach the answer.\", \"weaknesses\": \"1. There's a lack of comprehensive discussion of existing benchmarks. While the authors point out limitations in the previous theory of mind (ToM) benchmarks\\u2014such as focusing on static or independent mental states and lacking social settings\\u2014they do not adequately compare their work to existing studies. For example, benchmarks like MMToM-QA [1] and MuMA-ToM [2] explore interconnected mental states by asking about one mental state conditioned on another. Benchmarks like OpenToM [3] include narrative stories and character profiles, addressing the social context aspect. There are other benchmarks like EmoBench [4] and Infant Cognition Benchmark [5] that are missed in Table 2.\\n\\n2. In Section 2.2, the paper introduces the concept of \\\"process-level evaluation\\\" but fails to provide a clear definition for it. This makes it difficult for readers to grasp what this term means and how it relates to the papers' objectives. The coherence of the paragraph can also be improved.\\n\\n3. There's insufficient explanation of question types and categories. The benchmark includes items related to belief, emotion, intention, and action, and categorizes questions into types like understanding, influence, and transformation. However, these categories are only briefly mentioned without clear definitions or justification for their inclusion. Furthermore, a deeper analysis is needed to explain why LLMs' performance varies significantly across different question types, which could offer valuable insights into the models' strengths and weaknesses.\\n\\n4. 
The authors mention that the benchmark includes compositional problems that require rigorous multi-hop reasoning to arrive at the correct answer. However, they do not clearly define what these problems entail or how they were constructed to necessitate such reasoning. Providing a detailed explanation and concrete examples would help readers understand their importance and how they challenge LLMs differently than simpler questions.\\n \\n5. While the benchmark incorporates social backgrounds, character profiles, and relationships, and claims that humans effortlessly use this information in ToM reasoning, the authors don't clearly explain how this information affects the answers to the questions. It remains unclear whether these details are essential for arriving at the correct answers or if they simply add diversity to the questions. A better explanation is needed to illustrate how social context influences the reasoning process and why it is crucial for evaluating ToM capabilities in LLMs.\", \"questions\": \"1. While I appreciate the effort of testing 10 different LLMs, have you considered evaluating the ToM methods such as Sim-ToM [6], SymbolicToM [7] (ACL 2023 Outstanding Paper Award), and BIP-ALM [1] (ACL 2024 Outstanding Paper Award) on your benchmark? These approaches have demonstrated notable improvements in LLM reasoning and have \\\"solved\\\" some prior benchmarks.\\n\\n2. Just out of curiosity, could you also test GPT-4o on your benchmark? GPT-4o has shown significantly enhanced performance on some previous benchmarks related to ToM tasks. Feel free to disregard this question if you are constrained by time or resources during the rebuttal, but perhaps testing a small portion of your benchmark with GPT-4o could provide useful insights.\\n\\n3. Since you used GPT-4-Turbo to generate the dialogues and scenarios for your benchmark, could this influence the performance of the same or similar models when they are tested on TOMVALLEY? 
Might the similarity in writing style or content affect the evaluation results due to the models being tested on data generated by themselves?\\n\\n4. Regarding the human evaluation of question quality, could you briefly clarify (in one sentence) how you ensure fairness and objectivity in this process? I\\u2019m somewhat concerned that if the \\\"five graduate students\\\" are from the same lab, it could introduce bias.\\n\\n5. Small typo: Line 250: \\\"toM\\\" -> \\\"ToM\\\".\", \"references\": \"1. MMToM-QA: Multimodal Theory of Mind Question Answering, Jin et al, 2024\\n2. MuMA-ToM: Multi-modal Multi-Agent Theory of Mind, Shi et al, 2024\\n3. OpenToM: A Comprehensive Benchmark for Evaluating Theory-of-Mind Reasoning Capabilities of Large Language Models, Xu et al, 2024\\n4. EmoBench: Evaluating the Emotional Intelligence of Large Language Models, Sabour et al, 2024\\n5. An Infant-Cognition Inspired Machine Benchmark for Identifying Agency, Affiliation, Belief, and Intention, Li et al, 2024\\n6. Think Twice: Perspective-Taking Improves Large Language Models' Theory-of-Mind Capabilities, Wilf et al, 2023\\n7. Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker, Sclar et al, 2023\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for your suggestion!\", \"comment\": \"Thanks for your valuable suggestion!\\n\\n* \\u201cMeanwhile, ToM reasoning in dialogues has seldom been investigated in previous works.\\u201d\\n\\nGiven the large amount of ToM-related research, we contend that this description does not introduce significant ambiguity. We will take your advice and compare with these works.\"}"
]
}
9YZKbSoDr6
Multi-domain Analysis and Generalization of Image manipulation loCalization
[ "Keanu Nichols", "Divya Appapogu", "Giscard Biamby", "Dina Bashkirova", "Anna Rohrbach", "Bryan A. Plummer" ]
Advanced image editing software enables easy creation of highly convincing image manipulations, which has been made even more accessible in recent years due to advances in generative AI. Manipulated images, while often harmless, could spread misinformation, create false narratives, and influence people’s opinions on important issues. Despite this growing threat, current research on detecting advanced manipulations across different visual domains remains limited. Thus, we introduce Multi-domain Analysis and Generalization of Image manipulation loCalization (MAGIC), a comprehensive benchmark designed for studying generalization across several axes in image manipulation detection. MAGIC comprises over 192K images from two distinct sources (user and news photos), spanning a diverse range of topics and manipulation sizes. We focus on images manipulated using recent diffusion-based inpainting methods, which are largely absent in existing datasets. We conduct experiments under different types of domain shift to evaluate the robustness of existing image manipulation detection methods. Our goal is to drive further research in this area by offering new insights that would help develop more reliable and generalizable image manipulation detection methods. We will release the dataset after this work is published.
[ "Domain generalization", "Diffusion-Based Inpainting", "Misinformation Detection", "Computer Vision", "Benchmark Dataset", "Visual Forensics" ]
Reject
https://openreview.net/pdf?id=9YZKbSoDr6
https://openreview.net/forum?id=9YZKbSoDr6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yBGPrbzu13", "uofRDIf4PZ", "tjrG9DDJkL", "rRCGhvZsqm", "qqziQrl5lZ", "lr3ei17F7f", "jvLJpHd8P8", "ghuIQahQD6", "gSA6T3U2ab", "fxrR5djuvE", "eJ7FjG11ll", "WbcjBStChF", "QwqFb12Eys", "PXLIdqmrWD", "PGJ3IOXCRH", "ONSUu0xhpB", "M6sSmL8ejC", "LJZCeKGm48", "K619s2D2m8", "F6MXPeMuPO", "CBRuqd33eM", "A4bDykocvQ", "85q9Vyf0yo", "3v6mpv47xn", "1esajMVeeS", "0VOnLVWOtt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732592525718, 1732406158581, 1732405966861, 1732406202265, 1732591762240, 1732408055355, 1732405836601, 1737523828118, 1733067933145, 1732405943416, 1730678340354, 1733067887717, 1730519315476, 1732530195251, 1732591645584, 1730555830649, 1732599231021, 1733068053633, 1734680807708, 1732619191992, 1732405980695, 1732531017154, 1732406058466, 1732405898674, 1730350427642, 1733068005385 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_iuZd" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7271/Reviewer_GsP7" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_iuZd" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_LCuF" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_iuZd" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Area_Chair_tytE" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_VNC9" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_iuZd" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ], [ "ICLR.cc/2025/Conference/Submission7271/Reviewer_VNC9" ], [ "ICLR.cc/2025/Conference/Submission7271/Authors" ] ], "structured_content_str": [ "{\"comment\": \">I think the answer to the first issue about the category is not very satisfying. At least, this will somewhat diminish the contribution of the paper. If the content itself does not align with its category but needs the support of description, then it's hard to declare as \\\"across domains\\\" reasonable. It's as if the boundaries between the two domains are not very clear, with possible overlapping areas, which weakens this claim. The experiment result decreases for multi-domain may be coming from overfitting instead of OOD quality.\\n\\nThank you for participating in the discussion of our paper! While these categories are based on the article content, the images also have statistical differences between them. For example, in MAGIC-New\\u2019s Politics and Elections category a person was the manipulated object in nearly half the images as this topic is far more person-centric than say, Science Technology, where they were selected in less than a quarter. A \\u201crider\\u201d exists in 4-8x more images in Sports and Entertainment than other categories. 
Thus, there are important differences between the images contained within these topics, so it is important to know if a model can generalize.\\n\\nThat said, these broad categories are also quite similar to how these images may be used in practice, as the topics we used are based on those produced by the news websites. However, these types of loose groupings are not limited to news. For example, a subreddit within the website Reddit can be focused on a particular board theme, which would be analogous to our topics. Images posted there would also likely have statistical differences from images of other subreddits, even if the relationship can be hard to understand out of context of the post they stem from.\"}", "{\"comment\": \">For the Dataset Quality Survey, using a flowchart to visualize the process would be better.\\n\\nThe survey itself is simply a questionnaire and not a process, an example of which is provided in Figure 6 of our paper.\\n\\n>Lack of reporting IoU (along with AUC)\\n\\nWe have also added the F1 scores to Table 3, partly reproduced below with IoU scores as well, as you suggested. We observed that the F1 and IoU scores are also generally low across both manipulation OOD and image source OOD settings. 
The observed scores, particularly the low AUC (along with F1 and IoU) values in certain cases, reflect the core focus of our paper: highlighting that current models struggle to generalize across different domains for the problem of image manipulation detection.\\n\\n| Trained on | Magic-News | | Magic-COCO | | Magic-COCO | | Magic-News | |\\n|------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\\n| Tested on | | Magic-News | | | | Magic-COCO | | |\\n| | MT-ID | MT-OOD | MT-ID | MT-OOD | MT-ID | MT-OOD | MT-ID | MT-OOD |\\n| | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU |\\n| EVP | 79.2/73.4/61.6 | 61.9/49.0/26.7 | 62.0/49.2/14.7 | 55.7/44.9/3.39 | 79.0/50.7/35.8 | 79.6/38.4/20.6 | 62.1/23.9/2.66 | 67.6/25.2/3.85 |\\n| EVP + SWAD | 79.5/73.2/55.7 | 62.8/53.4/22.9 | 60.2/45.2/11.7 | 56.7/39.4/1.77 | 79.2/52.8/37.2 | 84.3/42.3/22.6 | 58.3/22.5/1.53 | 66.9/24.3/3.27 |\\n| EVP + Soup | 80.9/74.9/63.8 | 64.2/51.8/29.2 | 63.7/47.7/17.4 | 57.4/33.6/3.63 | 80.6/57.8/43.4 | 84.0/43.5/23.1 | 54.9/20.8/3.77 | 59.8/22.0/1.25 |\\n| DOLOS | 78.1/71.1/61.4 | 57.0/49.0/34.4 | 69.6/55.4/36.6 | 59.7/52.4/38.2 | 61.3/21.8/2.59 | 62.0/23.3/6.63 | 48.5/21.8/3.39 | 52.9/24.3/9.39 |\\n| PSCC-Net | 72.9/72.8/61.6 | 49.5/48.9/37.5 | 51.8/29.0/8.91 | 49.7/3.90/0.44 | 71.6/36.8/23.6 | 70.2/30.9/17.5 | 48.8/4.78/0.30 | 49.7/4.50/0.32 |\\n| HiFi | 73.6/77.8/70.9 | 50.9/29.9/21.4 | 49.6/10.1/6.45 | 48.6/1.85/1.09 | 66.8/34.6/24.0 | 62.7/21.5/14.2 | 51.5/5.73/3.71 | 51.6/7.55/4.51 |\\n\\n\\n>Lack of classification performance (decide whether an image is a manipulated image or genuine one)\\n\\nFollowing [k,l,m], we focus on the task of manipulation localization, whereas manipulation classification is an entirely different task. 
That said, since some (but not all) models are designed to accomplish both tasks, we will update our paper with these results as soon as we have them. Once that occurs, we will update you with those results. \\n\\n[k] Liu, Weihuang, et al. \\\"Explicit visual prompting for low-level structure segmentations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[l] Hao, Jing, et al. \\\"Transforensics: image forgery localization with dense self-attention.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\\n\\n[m] Zhou, Jizhe, et al. \\\"Pre-training-free image manipulation localization through non-mutually exclusive contrastive learning.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n>Demonstration of using the proposed datasets would help the performance of detection techniques on other datasets such as MagicBrush [a] and CocoGLIDE [b].\\n\\nBased on this feedback, we have begun testing a subset of our models on these datasets, and once we have the results we will update this comment.\"}", "{\"comment\": \"We thank the reviewer for spending the time on this paper and giving us feedback; please see our replies below.\\n\\n>1) The quality of the manipulated images in Figure 2 is worrying, especially for the removing class \\u2026 Even with a human evaluation, I\\u2019m worried about the three categories (removal, replacement, and insertion) under high-quality annotations.\\n\\nThank you for your feedback. Our dataset is indeed mixed in terms of the quality of the manipulated images due to limitations of these manipulation methods. However, our goal is not to produce high-quality manipulations, but rather to understand how detection methods perform across various axes of generalization, including manipulation type and different image domains. Thus, a low quality manipulation that goes unnoticed by a detector is also a concern. 
For example, consider the impact low-quality manipulations would have on moderation efforts in a forum. Since they are easy to identify, they would likely get flagged by these users rather than flagged by a manipulation detector, which would identify it as authentic. Due to this conflict between the users and automatic detectors, a moderator may be tasked with reviewing the image as well, expending costly resources.\\n\\nWhat\\u2019s more, in Table 8 our work also highlights that manipulation detectors find low and high quality manipulations equally challenging, with little difference between the two sets. This illustrates an important observation made by our work: human judgements and machine detectors do not use similar evidence to identify manipulations. This is to be expected to some degree, as automatic metrics judging image manipulation and generation quality are challenging to construct, making human judgements the gold standard for those tasks. Thus, we show that the quality of the manipulations has little to do with the goal of our work: creating high quality manipulation detection methods that generalize across a range of distribution shifts.\\n\\n>2) The image manipulation methods used in the paper are not very new. Using SD series, like SD2, or even SDXL is better.\\n\\nWe emphasize that our focus in this paper is to broadly evaluate the robustness of various image manipulation detection methods. Moreover, most prior datasets do not include any diffusion-based methods at all. That said, we conducted an experiment whereby we inpainted 1000 images from both Magic-News and Magic-COCO using SDXL [j] and tested EVP and DOLOS on these images. Additionally, we have included the results from our prior experiments from Table 3 with our Stable-Diffusion results.
Looking at the results overall, we can see that testing on SDXL still showcases the same problem we are highlighting: that OOD performance is on average worse than in-distribution performance. In our camera ready we will expand on these initial results.\\n\\n| | | Stable-Diffusion-XL | | |\\n|------------|----------------|---------------------|----------------|----------------|\\n| Trained on | MAGIC-News | MAGIC-COCO | MAGIC-COCO | MAGIC-News |\\n| Tested on | MAGIC-News | MAGIC-News | MAGIC-COCO | MAGIC-COCO |\\n| | MT-OOD | MT-OOD | MT-OOD | MT-OOD |\\n| | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU |\\n| EVP | 60.6/54.2/17.8 | 51.6/43.1/5.52 | 73.9/29.1/4.97 | 71.2/27.0/7.20 |\\n| DOLOS | 57.2/53.3/30.2 | 52.9/53.3/32.0 | 48.1/23.3/14.1 | 51.6/24.6/12.0 |\\n| | | Stable-Diffusion | | |\\n| EVP | 52.1/16.4/40.9 | 57.4/40.0/1.51 | 81.5/37.2/17.1 | 69.9/25.2/3.85 |\\n| DOLOS | 47.0/44.9/30.2 | 49.5/45.9/30.8 | 65.9/22.3/3.88 | 53.9/23.1/8.07 |\\n\\n[j] Podell, Dustin, et al. \\\"Sdxl: Improving latent diffusion models for high-resolution image synthesis.\\\" arXiv preprint arXiv:2307.01952 (2023).\"}", "{\"comment\": \">How to make sure the quality of the generated masks for MAGIC-News (since they are generated automatically from Mask2Former)\\n\\nFor our human evaluation, because we included Magic-News, this was a way for us to check how the manipulated objects looked overall. As we can see from questions 1 and 3, Magic-News, which had generated masks, performed around the same as Magic-COCO, which has ground truth segmentation masks. This suggests that there is little difference in the quality of the masks.\\n\\n>For the replacement operation, have you tested different-class replacements instead of same-class replacements\\n\\nWe did not test different-class replacements in order to keep the semantic content of the image unchanged. 
However, based on this feedback, we will run a small-scale experiment to determine the impact it has on the models, and once we have finished this experiment we will update this comment.\\n\\n>How to ensure the quality of GLIGEN since it is not a perfect method, any mechanism to ensure its quality?\\n\\nGLIGEN\\u2019s ability to use spatially-aligned condition maps provides a unique framework to insert objects into specific locations within an image. Based on our experiments, the model often captures enough semantic and spatial information to create meaningful test cases for evaluating manipulation localization tasks. Specifically, in the human evaluation results we see that GLIGEN does perform similarly to other inpainting techniques like Blended Diffusion for question 3 (Does the object look realistic?) as reported in Table 6, with GLIGEN scoring 48.5 and Blended Diffusion scoring 51.0.\\n\\n>How many images are used for training, val, test, and out-of-domain subsets?\\n\\nWe have created a further breakdown of the results for each of the manipulations; this is similar to the information shown in Figure 3.\\n| | MAGIC-News | | | | |\\n|:-----------------:|:----------:|:----:|:-----:|:-----------------------------:|:-----:|\\n| | Train | Val | Test | | |\\n| | IM-ID | | | IM-OOD | |\\n| Latent-Diffusion | 11732 | 1676 | 3352 | Stable Diffusion | 22125 |\\n| GLIDE | 15410 | 2201 | 4403 | Auto-Splice (GLIGEN Splicing) | 9746 |\\n| Blended Diffusion | 3677 | 526 | 1051 | | |\\n| Authentic | 10208 | 1458 | 2916 | | |\\n| | | | | Adobe Firefly | 100 |\\n| Total: 58610 | 41027 | 5861 | 11722 | Total: 31971 | 31971 |\\n\\n| | MAGIC-COCO | | | | |\\n|:-----------------:|:----------:|:----:|:-----:|:-----------------------------:|:-----:|\\n| | Train | Val | Test | | |\\n| | IM-ID | | | IM-OOD | |\\n| Latent-Diffusion | 11806 | 1676 | 3370 | Stable Diffusion | 22251 |\\n| GLIDE | 15519 | 2201 | 4462 | Auto-Splice (GLIGEN Splicing) | 10290 |\\n| Blended Diffusion | 3759 | 400 | 1212 | Blended-Latent Diffusion | 10088 |\\n| Authentic | 10208 | 1458 | 3416 | | |\\n| | | | | Adobe Firefly | 100 |\\n| Total: 59487 | 41292 | 5735 | 12460 | Total: 42729 | 42729 |\"}", "{\"comment\": \">Lack of classification performance (decide whether an image is a manipulated image or genuine one)\\n\\nAs noted earlier, [k,l,m], we focus on the task of manipulation localization, whereas manipulation classification is an entirely different task. However, to address this comment, as we believe it adds value, we evaluated this classification task by utilizing PSCC-Net and HiFi, since these both contain a classification head. We can see that PSCC-Net does better than HiFi in most cases even though their architecture is quite similar. Interestingly, PSCC-Net performs quite well when trained and tested on MAGIC-COCO. One reason could be the HRNet backbone that PSCC-Net utilizes, which was trained on ImageNet. However, both report a significant drop in performance when asked to generalize to OOD images, underscoring the importance of our work as we highlight these challenges.\\n\\n| Trained on | Magic-News | Magic-COCO | Magic-COCO | Magic-News |\\n|------------|------------|------------|------------|------------|\\n| Tested On | Magic-News | | Magic-COCO | |\\n| | AUC/F1 | AUC/F1 | AUC/F1 | AUC/F1 |\\n| PSCC-Net | 69.0/64.1 | 40.7/37.4 | 93.0/90.1 | 66.4/63.7 |\\n| HiFi | 54.7/54.7 | 50.7/34.9 | 56.0/55.8 | 53.9/53.6 |\\n\\n[k] Liu, Weihuang, et al. \\\"Explicit visual prompting for low-level structure segmentations.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[l] Hao, Jing, et al. \\\"Transforensics: image forgery localization with dense self-attention.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\\n\\n[m] Zhou, Jizhe, et al. \\\"Pre-training-free image manipulation localization through non-mutually exclusive contrastive learning.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\"}", "{\"comment\": \"We would like to thank all the reviewers for their feedback. We are encouraged that they recognize the sound motivation behind our work (*Reviewer iuZd*) addressing a pressing issue (*Reviewer LCuF*); they comment on the benchmark as being valuable (*Reviewer iuZd*), good (*Reviewer GsP7*), covering many scenarios, editing operations/techniques (*Reviewer VNC9*); they state that our experiments are comprehensive (*Reviewer iuZd*) and good (*Reviewer GsP7*), and an overall effort is commendable (*Reviewer LCuF*).\", \"reviewers_comments_included\": \"1) Questions as to how the quality of the manipulations affects our task (*Reviewers GsP7, VNC9*), for which we pointed to results in Table 8 where image manipulation localization models found high and low quality manipulation samples similarly challenging.\\n\\n2) Request for additional metrics like F1 score or IoU (*Reviewers iuZd, VNC9*), which we have included in Table 3 and which are consistent with the metrics we already reported.\\n\\n3) Request for a baseline Swin Transformer approach (*Reviewer iuZd*), which we have included in Table 3. 
\\n\\n4) Request for results on SDXL (*Reviewer GsP7*), for which we provided some initial results showing similar behavior to other generators.\\n\\n5) Various presentation suggestions, in response to which we have created or modified figures and writing in our paper (directly responded to for each reviewer).\\n\\nWe are happy to make any additional adjustments to our paper suggested by reviewers.\"}", "{\"comment\": \"We thank the reviewer for spending the time on this paper and giving us feedback; please see our replies below.\\n\\n>Although the authors claim to have used clustering for categorizing topics, some topics displayed in Figure 2 seem to vary significantly and do not appear entirely reasonable. For instance, it\\u2019s unclear how a bicycle image relates to the \\\"ARTS\\\" category, and both \\\"People\\\" and \\\"Ruins\\\" are grouped under \\\"Media.\\\"\\n\\nThe grouping of the images is not based on image content, but rather the article content. Thus, a bicycle could fall under arts due to an event with bicycles, or even simply a photography competition. In addition, these categories are broad, and contain many subcategories (i.e., the original VisualNews dataset had 159 subcategories that represent distinct subsets of our 8 categories).\\n\\n>The results across many topic classes in Table 4 are quite similar, suggesting that the distribution between classes may not be as distinct as initially anticipated\\n\\nThere is nearly a 7-point difference between the best and worst performing topics within Table 4. While this is not as large as some other shifts, it still provides a significant difference to compare various models.\\n\\n>A paper [A] proposed in Nov. 2023 on ArXiv and accepted by ACM MM 2024 also uses Visual News and COCO to create a fine-grained diffusion and GAN-generated dataset \\u2026 it is recommended to reconsider the claim \\\"first diffusion-based manipulation dataset\\\" on line 149. 
\\n\\nThank you for pointing out this paper; we have adjusted our associated discussions. We are unable to add the information to Table 1 at this time, as many of the detailed statistics that break down the type of manipulation are not reported and we could not get access to the dataset in time for this response (although we have requested access and will include this information in our paper once we have been granted access). To summarize our comparisons, our dataset has many complementary benefits, as there is little overlap in the type of manipulation techniques utilized between our datasets. In addition, we focus on the impact of several axes of generalization that [A] did not study, including distribution shifts due to image statistics or topics. We also study the effect of manipulation quality on detection performance, whereas [A] did not consider this component. As such, our paper provides several notable contributions over even concurrent work like [A].\\n\\n>Since [A] retains the text prompts associated with images, which, with proper handling, could serve as a more accurate basis for topic categorization.\\n\\nAs we noted earlier, our topics are based on article content as opposed to image content. As these prompts are associated with image content, they are more akin to COCO captions, which our dataset also has access to (for the COCO subset). We also argue that comparing VisualNews vs. COCO images already provides an example of changes due to image content.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"First, we would like to thank you for taking the time to participate in the discussion period for our paper, and we thank you for raising your score. 
We know that the period to revise our pdf has passed, but we will add our experiments and analysis in the camera-ready version once our paper is accepted.\"}", "{\"comment\": \"We thank the reviewer for spending the time on this paper and giving us feedback; please see our replies below.\\n\\n>The proposed dataset, while diverse, does not introduce fundamentally new manipulation detection methods or models \\u2026 The proposed dataset only combines existing datasets and manipulation techniques \\u2026 Thus, the contribution is more incremental than groundbreaking.\\n\\nFor our proposed dataset, our main focus was not to propose a new method for inpainting or new models for detecting inpaintings, but to produce a new diffusion-based inpainting benchmark dataset that allows researchers to study generalization across several axes for image manipulation detection. A major drawback of current image manipulation datasets is that they typically focus on only one axis of generalization, namely manipulation type, and do not allow researchers to determine how their models perform under axes like image source as well as manipulation type. In our experiments we highlight a major shortcoming of current models in generalizing across these different axes, as shown in Table 3, and underscore the need for better generalization in these image manipulation detection models.\\n\\n>The paper lacks a detailed comparison with other recent datasets or techniques \\u2026 The experiments only rely on existing architectures without substantial modifications.\\n\\nIn Table 1 of our paper, we compare our dataset to other current manipulation detection datasets; additionally, we discuss these datasets in the Related Work section (Section 2). 
We additionally provided experiments utilizing two domain generalization techniques, namely Model Soups and SWAD, and showed that these off-the-shelf techniques do not significantly improve performance in terms of generalization. We note that this is a benchmark paper designed to highlight issues with existing methods; hence, our main focus is not on providing new techniques in this paper. As you can see below [c,d,e], there are a number of papers published at ICLR that do not propose new methods.\\n\\nAs this is a topic of increasing importance as image manipulation methods become more sophisticated, our task's importance is likely only to grow. This makes datasets like ours, which explore the generalization capabilities of these models and are complementary to existing datasets, vitally important for defending against misinformation, with applications such as fighting crime (as defendants may claim an image has been altered by an AI model) and content moderation (where manipulated images may be presented as real). These are high-impact applications that are meant to increase safety and security, highlighting their importance for study in our paper.\\n\\n[c] Wang, Xingyao, et al. \\\"Mint: Evaluating llms in multi-turn interaction with tools and language feedback.\\\" ICLR 2024\\n\\n[d] Gu, Jiuxiang, et al. \\\"ADOPD: A Large-Scale Document Page Decomposition Dataset.\\\" ICLR 2024\\n\\n[e] Wu, Haoning, et al. \\\"Q-bench: A benchmark for general-purpose foundation models on low-level vision.\\\" ICLR 2024\"}
The authors evaluated the performance of several SoTA models on this dataset. A survey on human feedback on the quality of this dataset is also reported.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The motivation is sound, as cross-dataset or cross-domain performance has consistently posed challenges in the field of image manipulation localization. A dataset focused on cross-domain performance analysis would serve as a valuable benchmark.\", \"Experiments are comprehensive in demonstrating the utilization of each protocol.\"], \"weaknesses\": [\"## Main issue\", \"Although the authors claim to have used clustering for categorizing topics, some topics displayed in Figure 2 seem to vary significantly and do not appear entirely reasonable. For instance, it\\u2019s unclear how a bicycle image relates to the \\\"ARTS\\\" category, and both \\\"People\\\" and \\\"Ruins\\\" are grouped under \\\"Media.\\\" Additionally, the results across many topic classes in Table 4 are quite similar, suggesting that the distribution between classes may not be as distinct as initially anticipated.\", \"A paper [A] proposed in Nov. 2023 on ArXiv and accepted by ACM MM 2024 also uses Visual News and COCO to create a fine-grained diffusion and GAN-generated dataset. I understand that ACM MM was held after ICLR submission. However, the two articles have considerable similarities in the background, purpose, and subject matter. 
While the two papers represent distinct works, it is recommended to discuss [A] in the Related Work section and reconsider the claim in line 149 regarding being the \\\"first diffusion-based manipulation dataset.\\\" Since [A] retains the text prompts associated with images, which, with proper handling, could serve as a more accurate basis for topic categorization.\", \"For dataset-focused papers, it\\u2019s common to include tests with standard vision backbones, such as ResNet or Swin, to provide more straightforward benchmarks. Including these could serve as helpful references for comparison.\", \"## Minor issue\", \"Many AUC metrics in the tables are too close, showing little distinction, and some values are excessively low. For instance, Table 3 includes numerous metrics below 0.5, indicating that the model has not effectively learned the corresponding distributions. More distinctive metrics, such as F1 or IoU, may be needed to better assess the model\\u2019s performance on each protocol.\", \"The statement between lines 522\\u2013524 appears unconvincing. The current explanation does little to clarify why the performance of PSCC is nearly the opposite of the other two models.\", \"## Reference\", \"[A] Zhihao Sun, Haipeng Fang, Juan Cao, Xinying Zhao, and Danding Wang. 2024. Rethinking Image Editing Detection in the Era of Generative AI Revolution. In Proceedings of the 32nd ACM International Conference on Multimedia (MM '24). Association for Computing Machinery, New York, NY, USA, 3538\\u20133547. https://doi.org/10.1145/3664647.3681445\"], \"questions\": \"See the Weakness Section. Overall, this is a solid piece of work. 
I will consider raising my rating if the presentation of details improves.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank you for taking the time and effort to participate in the discussion period for our paper, and we thank you for raising your score. We know that the period to revise our pdf has passed; however, we will add our experiments and analysis in the camera-ready version once our paper is accepted.\"}", "{\"summary\": \"This paper proposes a new image manipulation localization benchmark for diffusion-based generation methods. It contains two image sources and seven manipulation techniques. The experiments under several settings also provide some interesting insights.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed datasets seem good for image manipulation localization tasks.\\n2. The writing and experiments are pretty good.\", \"weaknesses\": \"1. The quality of the manipulated images in Figure 2 is worrying, especially for the removal class. Although the authors mentioned that they apply human evaluation for the generated images, I'm worried about the data balance of the three categories (removal, replacement, and insertion) under high-quality annotations.\\n2. The image manipulation methods used in the paper are not very new. Using the SD series, like SD2, or even SDXL would be better.\\n3. In Table 3, it's interesting that the OOD score is higher than the ID score when trained on MAGIC-News and tested on MAGIC-COCO. It's better to provide a more in-depth analysis.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your response.\\n\\nI think the answer to the first issue about the category is not very satisfying. 
At least, this somewhat diminishes the contribution of the paper. If the content itself does not align with its category but needs the support of the description, then it is hard to justify the \\\"across domains\\\" claim as reasonable. It's as if the boundaries between the two domains are not very clear, with possible overlapping areas, which weakens this claim. The performance decrease for multi-domain may come from overfitting instead of OOD quality.\\n\\nThe discussion about [A] is clear and has already been added to the manuscript.\"}", "{\"comment\": \">Instead of just listing some numbers only, the visual charts should be used to summarize the statistics of datasets (e.g., editing areas statistics\\n\\nBased on this feedback, we have added two other visual charts to make it easier to see more statistics of our dataset based on editing areas and manipulation sizes, adapted from the statistics we described in Section 3.2 of our paper.\\n\\nWe have drafted two diagrams to address this comment. The first diagram is located in the Appendix as Figure 7; it shows manipulation area sizes vs. editing techniques. This chart illustrates the distribution of manipulation area sizes across the three editing techniques: removal, insertion, and replacement.\\nThe second diagram is located in the Appendix as Figure 8; it shows manipulation area sizes vs. image source. This chart compares the sizes of manipulated areas across the two sources, MAGIC-News and MAGIC-COCO.\\n\\n>Demonstration of using the proposed datasets would help the performance of detection techniques on other datasets such as MagicBrush [a] and CocoGLIDE [b].\\n\\nWe conducted an experiment whereby we trained on MAGIC-News and tested on MagicBrush, reported below. 
As seen in the table below, EVP performs quite well when trained on MAGIC-News and evaluated on MagicBrush.\\n\\n| MagicBrush | AUC/F1/IoU |\\n|------------|----------------|\\n| PSCC-Net | 44.3/26.6/12.3 |\\n| DOLOS | 58.8/28.0/15.8 |\\n| EVP | 76.1/37.6/23.4 |\\n\\nFor CocoGLIDE, we use the same three models trained on MAGIC-News, making CocoGLIDE images OOD. This is different from, say, the PSCC-Net reported in CocoGLIDE, as that model is trained on 380K manipulated and pristine images extracted from COCO, whereas we use less than 100K news images. That said, as shown below, PSCC-Net trained on our dataset reports similar performance to the model reported in CocoGLIDE. What\\u2019s more, EVP outperforms even the model proposed in the CocoGLIDE paper (TruFor). This helps illustrate the benefits that stem from using our dataset in other settings.\\n\\n| CocoGLIDE | AUC/F1/IoU |\\n|------------|----------------|\\n| PSCC-Net (reported in CocoGLIDE) | 77.7/51.5/- |\\n| TruFor (proposed in CocoGLIDE) | 75.2/52.3/- |\\n| PSCC-Net | 78.0/51.3/38.2 |\\n| DOLOS | 55.6/38.5/25.3 |\\n| EVP | 83.6/57.0/42.9 |\\n\\n>For the replacement operation, have you tested different-class replacements instead of same-class replacements\\n\\nAs we discussed earlier, we did not test different-class replacements in order to keep the semantic content of the image unchanged. However, based on this feedback, we ran a small-scale experiment to determine the impact the image class has on the models. Using 100 images from Magic-News, we created images in which new objects replaced old objects; for instance, we replaced a car with a bike, aiming to keep similar semantic relevance to the initial object. The same was done with Magic-COCO, and we also included the experiment of replacing the object with the same object type, as done in all our experiments (e.g. replace bike with bike). 
We can see that overall, for both EVP and DOLOS, replacing the object with a new one and replacing it with the same object type yield similar scores.\\n\\n| Trained on | MAGIC-News | MAGIC-News | MAGIC-COCO | MAGIC-COCO |\\n|:----------:|:--------------:|:---------------:|:--------------:|:---------------:|\\n| Tested on | MAGIC-News | MAGIC-News | MAGIC-News | MAGIC-News |\\n| | New-Object | Replaced-Object | New-Object | Replaced-Object |\\n| | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU |\\n| EVP | 59.1/56.2/1.60 | 58.0/55.8/1.85 | 60.6/51.1/10.0 | 61.9/50.6/10.8 |\\n| DOLOS | 54.1/54.6/5.72 | 55.0/54.9/6.20 | 51.5/58.3/2.66 | 47.7/58.4/2.15 |\\n\\n| Trained on | MAGIC-COCO | MAGIC-COCO | MAGIC-News | MAGIC-News |\\n|:----------:|:--------------:|:---------------:|:--------------:|:---------------:|\\n| Tested on | MAGIC-COCO | MAGIC-COCO | MAGIC-COCO | MAGIC-COCO |\\n| | New-Object | Replaced-Object | New-Object | Replaced-Object |\\n| | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU |\\n| EVP | 87.0/53.1/35.0 | 86.9/53.2/34.8 | 60.8/26.0/1.81 | 61.8/25.9/0.77 |\\n| DOLOS | 67.2/25.0/2.93 | 66.8/26.1/2.85 | 44.8/23.7/1.62 | 46.1/23.9/1.21 |\"}
Results indicate that while the models perform well in distribution (ID), their OOD performance is limited, highlighting the challenges of domain generalization in image manipulation detection.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"MAGIC addresses a pressing issue in image manipulation detection by offering a large-scale dataset with a focus on domain generalization across multiple dimensions. This effort is commendable and fills a gap in manipulation detection research.\"], \"weaknesses\": [\"The proposed dataset, while diverse, does not introduce fundamentally new manipulation detection methods or models. The dataset\\u2019s construction (e.g., sourcing from MS COCO and VisualNews, manipulation types) is novel but does not demonstrate significant methodological innovation beyond combining existing datasets and manipulation techniques. Thus, the contribution is more incremental than groundbreaking.\", \"The paper lacks a detailed comparison with other recent datasets or techniques, and the experiments primarily rely on existing architectures without substantial modifications or improvements. The work\\u2019s dependence on pre-existing models for analysis and lack of new methodological contributions weaken its overall technical impact.\", \"I am not an expert in this field. But I think that a dataset for image manipulation localization is not sufficient for publication at ICLR.\"], \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I see the point. 
Discussing the gap from the distribution of object semantics is more convincing than simply talking about the label.\\n\\nI think more analysis of the distribution gap of semantics should be included in the paper, even intuitively, to defend and support your cross-domain arguments (at least in the supplementary material if there are space limitations).\\n\\nAnyway, most of my doubts are resolved; I have raised my rating from 5 to 6.\"}", "{\"comment\": \"We would like to thank you for your review; as a reminder, the discussion period is coming to an end. We have responded to your current questions; if you have any further questions or clarifications to improve our score, we would be happy to answer them.\"}", "{\"metareview\": \"This paper introduces a new dataset designed to evaluate the robustness and generalization of image manipulation detection models across multiple domains. Initially, the reviewers raised several issues. These included the quality of the constructed dataset (both image quality and the quality of the clustered topics), the lack of new manipulation detection methods, and the fact that the image manipulation methods used in this paper are not up-to-date. After rebuttal, most of the concerns were addressed. Also, for a dataset-focused paper, proposing new methods is not inherently required. Yet the reviewers remained significantly concerned about the low image quality within the proposed dataset, which is considered to be crucial given its intended role as a benchmark for image manipulation detection. For the dataset to be more valuable to the community, its image quality should be further improved. Overall, this paper is considered borderline. While it offers some valuable contributions, its current state does not fully meet the criteria for clear acceptance. 
To strengthen its position, the authors should enhance the dataset's quality to establish a more robust evaluation benchmark.\", \"additional_comments_on_reviewer_discussion\": \"After rebuttal, the reviewers still share concerns about the image quality of the proposed dataset.\"}", "{\"comment\": \"I want to thank the authors for their detailed responses. Most of my concerns regarding the presentation of the paper have been addressed. I hope the authors will include the new experiments and analyses in the revised version. However, I share Reviewer GsP7\\u2019s concern about the quality of the manipulated images in the proposed datasets. Despite the authors\\u2019 claims about the datasets covering various dimensions, including editing types and diverse domains, the image quality appears to be quite low. Given that these datasets are based on manipulated images generated by diffusion-based editing methods, they should ideally exhibit a level of quality comparable to that of diffusion-generated images. I will increase my score from 5 to 6.\"}", "{\"comment\": \">3) In Table 3, it's interesting that the OOD score is higher than the ID score when trained on MAGIC-News and tested on MAGIC-COCO. It's better to provide a more in-depth analysis.\\n\\nFor the last two columns of Table 3, if we look at the instance of EVP trained on Magic-News and tested on Magic-COCO, we can see that MT-ID is 62.1% and MT-OOD is 67.6% for AUC. We have provided a breakdown of the results for each of the inpaintings present for MT-ID and MT-OOD, seen below. We see that for MT-OOD, EVP tends to perform well on Stable Diffusion with 69.9% and Blended Latent Diffusion with 72.6% AUC, versus MT-ID generators like GLIDE at 64.2% and Latent Diffusion at 67.9%. This suggests that manipulations in Stable Diffusion and Blended Latent Diffusion images are easier to localize when the model is trained on Magic-News and tested OOD. 
The latent space from which Stable Diffusion generates images, and the fact that Blended Latent Diffusion combines Blended and Latent Diffusion, may help explain the higher performance on these models.\\n\\n| EVP | | | |\\n|:-----------------:|:--------------:|:------------------------:|:--------------:|\\n| Trained on: | Magic News | Trained on: | Magic News |\\n| Tested on: | Magic COCO | Tested on: | Magic COCO |\\n| MT-ID | | MT-OOD | |\\n| GLIDE | 64.2/22.9/1.79 | Stable Diffusion | 69.9/25.2/3.85 |\\n| Latent Diffusion | 67.9/24.4/3.49 | GLIGEN Splicing | 57.8/23.5/2.70 |\\n| Blended Diffusion | 71.5/26.5/3.58 | Blended Latent Diffusion | 72.6/27.6/8.25 |\\n| Original | 51.6 | Adobe Firefly | 75.2/33.2/19.1 |\\n| AUC Average | 62.1 | AUC Average | 67.6 |\"}", "{\"title\": \"Clear experiments on AUC and IOU\", \"comment\": \"The supporting experiments on AUC and IoU provide a clearer visualization of the model's performance. In many cases, if the F1 score is lower than 0.1, the performance can be considered worse than simply predicting an entirely white output.\\n\\nThe conclusion is clear that IML-oriented methods perform better on OOD datasets compared to common vision backbones. Further, both of them struggle with OOD detection. (Although I still have a little doubt about whether the OOD sets truly cross domains)\"}", "{\"comment\": \"We thank the reviewer for spending the time on this paper and giving us feedback; please see our replies below.\\n\\n>Lack of high-quality examples of manipulated images and corresponding GT segmentation mask (Fig. 2 presents low-resolution images)\\n\\n>The quality of the proposed dataset is a concern (Tab. 6) since some methods are just around 50%\\n\\nThank you for your feedback. Our dataset is indeed mixed in terms of the quality of the manipulated images due to limitations of these manipulation methods. 
However, our goal is not to produce high-quality manipulations, but rather to understand how detection methods perform across various axes of generalization, including manipulation type and different image domains. Thus, a low-quality manipulation that goes unnoticed by a detector is also a concern. For example, consider the impact low-quality manipulations would have on moderation efforts in a forum. Since they are easy to identify, they would likely get flagged by users rather than by a manipulation detector, which may identify them as authentic. Due to this conflict between the users and automatic detectors, a moderator may be tasked with reviewing the image as well, expending costly resources.\\n\\nWhat\\u2019s more, in Table 8 our work also highlights that manipulation detectors find low- and high-quality manipulations equally challenging, with little difference between the two sets. This illustrates an important observation made by our work: human judgements and machine detectors do not use similar evidence to identify manipulations. This is to be expected to some degree, as automatic metrics judging image manipulation and generation quality are challenging to construct, making human judgements the gold standard for those tasks. 
Thus, we show the quality of the manipulations has little to do with the goal of our work: creating high quality manipulation detection methods that generalize across a range of distribution shifts.\\n\\n\\n>Instead of just listing some numbers only, the visual charts should be used to summarize the statistics of datasets (e.g., editing areas statistics\\n\\nWe are preparing these visualizations and will update you once they are included in the paper.\\n\\n>A table of Editing technique summarization would also help, including the number of images, and examples.\\n\\nBelow is a table with the summarized numbers of images per technique, we refer to Figure 3 for a more detailed breakdown of the number of images for Magic-COCO and Magic-News based on manipulation type. We added it to the Appendix in Table 10. For illustrative examples for the same, please see Figure 2. \\n\\n| | Magic-News | Magic-COCO | |\\n|-------------------|------------------|------------------|-----------------------------------------------------------------------------------------------|\\n| Editing Technique | Number of Images | Number of Images | Inpainting Techniques |\\n| Replacement | 49493 | 59824 | Stable-Diffusion,Blended-Diffusion,Glide-Diffusion,*(Blended-Latent Diffusion), Adobe Firefly |\\n| Insertion | 9746 | 10290 | GLIGEN Splicing |\\n| Removal | 14205 | 16854 | Latent Diffusion |\\n*Blended-Latent Diffusion only occurs in the Out-Of-Distribution set of Magic-COCO\"}", "{\"comment\": \">For dataset-focused papers, it\\u2019s common to include tests with standard vision backbones, such as ResNet or Swin \\u2026 as a standard baseline.\\n\\n>Many AUC metrics in the tables are too close \\u2026 Table 3 includes numerous metrics below 0.5, indicating the models have not effectively learned the corresponding distributions. 
More distinctive metrics, such as F1 or IoU, may be needed to assess the model\\u2019s performance on each protocol.\\n\\nWe trained a Swin Transformer [g] to make pixel-level predictions using the UperNet strategy [h] for manipulation localization, to act as a standard baseline. The results are shown below and we have added these results to Table 3. As shown below, most specialized models perform better than the simple Swin-based model. \\n\\nWe have also added the F1 scores to Table 3, partly reproduced below with IoU scores as well, as you suggested. We observed that the F1 and IoU scores are also generally low across both manipulation OOD and image source OOD settings. The observed scores, particularly the low AUC (along with F1 and IoU) values in certain cases, reflect the core focus of our paper: highlighting that current models struggle to generalize across different domains for the problem of image manipulation detection.\\n\\n| Trained on | MAGIC-News | | MAGIC-COCO | | MAGIC-COCO | | MAGIC-News | |\\n|------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\\n| Tested on | | MAGIC-News | | | | MAGIC-COCO | | |\\n| | MT-ID | MT-OOD | MT-ID | MT-OOD | MT-ID | MT-OOD | MT-ID | MT-OOD |\\n| | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU | AUC/F1/IoU |\\n| EVP | 79.2/73.4/61.6 | 61.9/49.0/26.7 | 62.0/49.2/14.7 | 55.7/44.9/3.39 | 79.0/50.7/35.8 | 79.6/38.4/20.6 | 62.1/23.9/2.66 | 67.6/25.2/3.85 |\\n| EVP + SWAD | 79.5/73.2/55.7 | 62.8/53.4/22.9 | 60.2/45.2/11.7 | 56.7/39.4/1.77 | 79.2/52.8/37.2 | 84.3/42.3/22.6 | 58.3/22.5/1.53 | 66.9/24.3/3.27 |\\n| EVP + Soup | 80.9/74.9/63.8 | 64.2/51.8/29.2 | 63.7/47.7/17.4 | 57.4/33.6/3.63 | 80.6/57.8/43.4 | 84.0/43.5/23.1 | 54.9/20.8/3.77 | 59.8/22.0/1.25 |\\n| DOLOS | 78.1/71.1/61.4 | 57.0/49.0/34.4 | 69.6/55.4/36.6 | 59.7/52.4/38.2 | 61.3/21.8/2.59 | 62.0/23.3/6.63 | 48.5/21.8/3.39 
| 52.9/24.3/9.39 |\\n| PSCC-Net | 72.9/72.8/61.6 | 49.5/48.9/37.5 | 51.8/29.0/8.91 | 49.7/3.90/0.44 | 71.6/36.8/23.6 | 70.2/30.9/17.5 | 48.8/4.78/0.30 | 49.7/4.50/0.32 |\\n| HiFi | 73.6/77.8/70.9 | 50.9/29.9/21.4 | 49.6/10.1/6.45 | 48.6/1.85/1.09 | 66.8/34.6/24.0 | 62.7/21.5/14.2 | 51.5/5.73/3.71 | 51.6/7.55/4.51 |\\n| Swin | 65.9/63.8/56.8 | 57.0/44.5/36.0 | 54.3/21.6/16.3 | 50.1/0.91/0.63 | 60.6/31.6/24.6 | 57.3/15.5/11.3 | 50.1/0.70/0.53 | 49.9/0.04/0.02 |\\n\\n\\n[g] Liu, Ze, et al. \\\"Swin transformer: Hierarchical vision transformer using shifted windows.\\\" Proceedings of the IEEE/CVF international conference on computer vision. 2021.\\n\\n[h] Xiao, Tete, et al. \\\"Unified perceptual parsing for scene understanding.\\\" Proceedings of the European conference on computer vision (ECCV). 2018.\"}", "{\"summary\": \"This paper introduces two novel datasets for image forensics specifically curated from diffusion-based editing methods: MAGIC-News and MAGIC-COCO. These datasets encompass various topics and object classes, with manipulations including object insertion, replacement, and removal, applied through various editing techniques such as Stable Diffusion, Blended Diffusion, Glide Diffusion, and Adobe Firefly. Experiments demonstrate the performance of several image forensic techniques on these new datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Propose new datasets that cover many situations: news (MAGIC-News) or daily lives (MAGIC-COCO)\", \"Cover many editing operations: insertion, removal, and replacement\", \"Include many editing techniques: Stable Diffusion, Blended Diffusion, Glide Diffusion, and Adobe Firefly.\"], \"weaknesses\": [\"Since this paper mainly focuses on new datasets, the presentation of the datasets should be prepared more carefully. Specifically:\", \"Lack of high-quality examples of manipulated images and corresponding GT segmentation mask (Fig. 
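As a side note for readers comparing the AUC/F1/IoU columns in these tables, the three quantities can be reproduced from flattened binary masks in a few lines. Below is a minimal, stdlib-only Python sketch (not code from the paper; the function names and exact aggregation are our own assumptions) of pixel-level F1, IoU, and a rank-based AUC:

```python
# Illustrative sketch of pixel-level localization metrics (assumed
# definitions; not code from the paper). `pred` and `gt` are flattened
# 0/1 masks; `scores` are per-pixel manipulation probabilities.

def f1_and_iou(pred, gt):
    tp = sum(1 for p, g in zip(pred, gt) if p and g)
    fp = sum(1 for p, g in zip(pred, gt) if p and not g)
    fn = sum(1 for p, g in zip(pred, gt) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return f1, iou

def auc(scores, gt):
    # Mann-Whitney formulation of ROC-AUC: the probability that a random
    # manipulated pixel scores higher than a random authentic one
    # (ties count half).
    pos = [s for s, g in zip(scores, gt) if g]
    neg = [s for s, g in zip(scores, gt) if not g]
    if not pos or not neg:
        return 0.5  # degenerate mask: AUC is undefined, fall back to chance
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The values in the tables presumably correspond to these quantities scaled by 100; how they are aggregated across images is not specified in this thread.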
2 presents low-resolution images)\\n- Instead of just listing numbers, visual charts should be used to summarize the statistics of the datasets (e.g., editing area statistics)\\n- A table summarizing the editing techniques would also help, including the number of images and examples.\\n- For the Dataset Quality Survey, using a flowchart to visualize the process would be better.\\n- Lack of reporting IoU (along with AUC)\\n- Lack of classification performance (decide whether an image is a manipulated image or genuine one)\\n- The quality of the proposed dataset is a concern (Tab. 6), since some methods are just around 50%\\n- A demonstration of how using the proposed datasets improves the performance of detection techniques on other datasets such as MagicBrush [a] and CocoGLIDE [b] would help.\\n\\n[a] Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su. Magicbrush: A manually annotated dataset for instruction-guided image editing. Advances in Neural Information Processing Systems, 36, 2024.\\n\\n[b] Fabrizio Guillaro, D Cozzolino, Avneesh Sud, Nick Dufour, and L Verdoliva. TruFor: Leveraging all-round clues for trustworthy image forgery detection and localization. Proc. IEEE Comput. Soc. Conf. Comput. Vis. 
Pattern Recognit., pages 20606\\u201320615, December 2022\"], \"questions\": [\"How to make sure the quality of the generated masks for MAGIC-News (since they are generated automatically from Mask2Former)\", \"For the replacement operation, have you tested different-class replacements instead of same-class replacements\", \"How to ensure the quality of GLIGEN since it is not a perfect method, any mechanism to ensure its quality?\", \"How many images are used for training, val, test, and out-of-domain subsets?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank you for your review, as a reminder the discussion period is coming to an end. We have responded to your current questions, if you have any other questions or further clarifications to improve our score we would be happy to answer them.\"}" ] }
9YRUmPV7Jy
Intrinsic Explanation of Random Subspace Method for Enhanced Security Applications
[ "Yanting Wang", "Jinyuan Jia" ]
The random subspace method has wide security applications, such as providing certified defenses against adversarial and backdoor attacks, and building robustly aligned LLMs against jailbreaking attacks. However, the explanation of the random subspace method lacks sufficient exploration. Existing state-of-the-art feature attribution methods, such as the Shapley value and LIME, are computationally impractical and lack security guarantees when applied to the random subspace method. In this work, we propose EnsembleSHAP, an intrinsically faithful and secure feature attribution method for the random subspace method that reuses its computational byproducts. Specifically, our feature attribution method is 1) computationally efficient, 2) maintains essential properties of effective feature attribution (such as local accuracy), and 3) offers guaranteed protection against attacks on feature attribution methods. We perform comprehensive evaluations of our explanation's effectiveness when faced with different empirical attacks. Our experimental results demonstrate that our explanation not only faithfully reports the most important features, but also certifiably detects the harmful features embedded in the input sample.
[ "Certified Defense", "Feature Attribution" ]
Reject
https://openreview.net/pdf?id=9YRUmPV7Jy
https://openreview.net/forum?id=9YRUmPV7Jy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yM7WxWEXK0", "umJYvx29Ed", "uSbnQj9DhK", "tSxABNqgsD", "s04kPBesno", "po17rlGML5", "gOpiCxT8UX", "UEmDtThL0D", "NoYY7iignp", "LZDfCPF1J7", "DOaKcAf5By", "BeY4ZttTgy", "A1IMsu19Xn", "7QWZgzt9JT", "2fVXW7JMXz", "1JDheLqtXn", "17y2Q23GnH" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730698829096, 1732219979072, 1733129920128, 1730256269324, 1734692187242, 1733161814537, 1732219661864, 1732218083409, 1733104140603, 1733161675505, 1737524085586, 1730550460234, 1732624457301, 1732687826691, 1730721133593, 1732772095791, 1732219465060 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_2GZo" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_bQ2B" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_r65P" ], [ "ICLR.cc/2025/Conference/Submission10873/Area_Chair_CSpi" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_2GZo" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_52Co" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_52Co" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_2GZo" ], [ "ICLR.cc/2025/Conference/Submission10873/Reviewer_bQ2B" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ], [ "ICLR.cc/2025/Conference/Submission10873/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents EnsembleSHAP, a novel feature attribution method tailored 
for the random subspace method. EnsembleSHAP addresses limitations in traditional feature attribution approaches, such as Shapley values and LIME, which are computationally intensive and lack security assurances against explanation-preserving attacks. EnsembleSHAP leverages computational byproducts of the random subspace method to provide efficient, accurate, and secure explanations for model predictions. This method is specifically designed to improve resilience against adversarial and backdoor attacks, as well as jailbreaking attacks on large language models. Experimental results show that EnsembleSHAP outperforms baseline attribution methods in identifying harmful features under various security threats, including certified defense and jailbreaking scenarios. The theoretical analysis demonstrates that EnsembleSHAP maintains key properties of effective feature attribution, such as local accuracy and robustness against attacks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper is structured logically, moving from the problem context and related work to problem formulation, method design, theoretical analysis, and empirical validation.\\n\\n2. The authors provide a theoretical basis for EnsembleSHAP.\\n\\n3. EnsembleSHAP leverages the computational byproducts of random subspace methods, resulting in lower computational overhead compared to traditional methods.\\n\\n4. The paper considers multiple threats - adversarial attack, backdoor attack, and jailbreaking.\", \"weaknesses\": \"1. EnsembleSHAP is designed specifically for random subspace methods, which could limit its generalizability to other ensemble methods or broader feature attribution applications that do not involve subsampling.\\n\\n2. The efficiency claim is not well studied in the experimental section.\\n\\n3. 
The certified detection theorem and detection strategy are not clearly explained, making it difficult for readers to fully understand the approach and its guarantees.\\n\\n4. The method\\u2019s assumptions about limited modifications to input features may not hold for many real-world backdoor attacks, where an attacker might poison the entire input space or apply more complex poisoning strategies. This assumption restricts the generalizability of the certified detection method for a wider range of attacks.\\n\\n5. The paper evaluates EnsembleSHAP using TextFooler for adversarial attacks and BadNets for backdoor attacks. These attacks are somewhat dated, and there are newer, more sophisticated adversarial and backdoor attacks in current literature. Testing against more recent attacks could better demonstrate the robustness of EnsembleSHAP. In fact, one can even design an adaptive attack.\", \"questions\": \"My major questions are included in the above weaknesses comments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Should comparisons also include existing defenses against backdoor and adversarial attacks for large language models (LLMs) to better evaluate the proposed method\\u2019s effectiveness?**\\n\\nWe emphasize that our method is not primarily defense-oriented. Instead, we first establish that it provides a faithful explanation by proving its order consistency with the Shapley value. Faithfulness, in this context, implies that a feature with greater influence on the model's prediction is assigned a higher importance score. Consequently, the certified detection guarantee can be seen as a byproduct of this faithfulness. Moreover, as demonstrated in Table 1 of the experimental section, our method outperforms baseline approaches even when no attacks are present.\\n\\nWhile the random subspace method can be employed to defend against backdoor and adversarial attacks on LLMs, existing research has only focused on its use for jailbreaking attacks (which can be considered a more powerful form of adversarial attack without constraints on the perturbation boundary). We believe that exploring the application of the random subspace method for defending against backdoor and adversarial attacks on LLMs presents an intriguing direction for future research.\\n\\n**Question 2. Without prior knowledge, if the proposed method is used to defend and 10% or 20% of the important words are deleted, can the LLM still make accurate responses? The experimental results do not indicate whether the defense proposed in this paper affects the model's responses to normal text.**\\n\\nOur method serves as a post-hoc explanation tool for existing defense mechanisms that uses the random subspace method. It does not alter the decisions made by these defense mechanisms but instead identifies the most important features influencing those decisions.\\n\\n**Question 3. 
Regarding the faithfulness comparison in Table 1: Faithfulness is defined as the percentage of label flips when the top e features with the highest importance scores are deleted. My understanding is that this metric should be as high as possible under attack, as the deleted important features likely contain adversarial elements. In the absence of an attack, if deleting these features leads to a high label flip rate, it indicates that removing important features significantly impacts model performance. How should one decide whether or not to delete these features?**\\n\\nThank you for the question. In this paper, faithfulness is employed as a metric to evaluate the performance of the explanation. It\\u2019s important to note that our method does not actually delete features in practice. Instead, it provides an explanation for the outcome of a defense mechanism (one using the random subspace method). For instance, when combined with RA-LLM in the context of jailbreaking attacks against LLMs, our method serves as a post-attack analysis tool. RA-LLM determines whether an attack has occurred, and our method is subsequently used to identify which parts of the input leads to that decision.\\n\\n**Question 4.\\nIt is recommended that the authors add a discussion on adaptive attacks to enhance the practical value of the proposed method.**\\n\\nThank you for the valuable suggestion. We believe that developing adaptive attack strategies that can effectively bypass both the defense and the explanatory mechanisms presents an exciting and important avenue for future research.\"}", "{\"title\": \"Acknowledge\", \"comment\": \"Thank you for your response, which clarified a few issues. However, the lack of comparison with competing approaches and presentation clarity remain unaddressed. 
I will thus keep my score and invite the authors to address these points to improve their paper.\"}", "{\"summary\": \"This work reveals two major issues with current state-of-the-art feature attribution methods: (1) high computational costs and (2) a lack of security guarantees against explanation-preserving attacks. To address these issues, this study proposes a computationally efficient and inherently secure feature attribution method. The key insight derives from the fact that an ensemble model\\u2019s output aggregates the prediction results of all sub-sampled inputs, with each sub-sampled input\\u2019s influence on the ensemble output further distributable to the individual features within that input. Thus, the contribution of each feature can be inferred from analyzing the prediction results of all sub-sampled inputs containing that feature.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"A security guarantee is proposed for the explanation-preserving attack without increasing the high computational cost.\", \"weaknesses\": \"Adaptive attack discussion: Discussion and experiments on adaptive attacks could further strengthen the paper. If attackers know the defense strategy, what happens? For instance, they could adjust the attack target so that triggers do not fall within the top 10% or 20% of important features but rather within the top 30% or 40% to circumvent defenses.\", \"questions\": \"1. The two works compared by the authors are not defense-oriented, so is this comparison fair? Should comparisons also include existing defenses against backdoor and adversarial attacks for large language models (LLMs) to better evaluate the proposed method\\u2019s effectiveness?\\n2. Without prior knowledge, if the proposed method is used to defend and 10% or 20% of the important words are deleted, can the LLM still make accurate responses? 
The experimental results do not indicate whether the defense proposed in this paper affects the model's responses to normal text.\\n3. Regarding the faithfulness comparison in Table 1: Faithfulness is defined as the percentage of label flips when the top e features with the highest importance scores are deleted. My understanding is that this metric should be as high as possible under attack, as the deleted important features likely contain adversarial elements. In the absence of an attack, if deleting these features leads to a high label flip rate, it indicates that removing important features significantly impacts model performance. How should one decide whether or not to delete these features?\\n4. It is recommended that the authors add a discussion on adaptive attacks to enhance the practical value of the proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This work proposed a post-hoc explanation method for the random subspace method, and provided certified defenses against several attacks (adversarial examples, backdoor attacks, jailbreaking attacks), with the benefit of reduced computational cost.\\n\\nIt received 4 detailed reviews. The strengths mentioned by reviewers mainly include the theoretical analysis, the reduced cost by utilizing the computational byproducts of random subspace methods, and the consideration of several threats. \\n\\nMeanwhile, there are also several important concerns, mainly including the lack of comparison with other efficient methods, the lack of empirical computational complexity analysis, the unclear explanation of the certified detection theorem and detection strategy, outdated attacks, the assumption about limited modifications to input features that may not hold, and missing experiments (adversarial and backdoor attacks against LLMs, adaptive attacks). \\n\\nThe authors provided rebuttals for these concerns, and some reviewers gave further feedback. 
Generally speaking, the authors didn't make enough efforts to address these concerns; for example, several suggested experiments are not followed. Several important concerns are not well addressed. Thus, my recommendation is reject.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal and discussions, as well as their influences in the decision, have been summarized in the above metareview.\"}", "{\"comment\": \"We hope our explanation adequately addresses your concerns and encourages you to reconsider the value of our paper. We are happy to address any further questions or concerns you may have. Thank you once again for your time and dedication in reviewing our work.\"}", "{\"comment\": \"We thank the reviewer for the valuable feedback. We address the questions below:\\n\\n**Weakness 1. In section 4, the importance scores for each feature within a given feature group are equal. This approach is overly simplistic and fails to reasonably capture the differences in importance among the various features.**\\n\\nWe note that these feature groups are randomly sampled and can vary. In Section 5.3, we demonstrate that our method satisfies the essential properties of effective feature attribution and is order-consistent with the Shapley value. For more details, please refer to Weakness 4.\\n\\n**Weakness 2. In section 4, the author highlights an issue where variations in appearance frequency can lead to an unfair assessment of feature importance when the sample size N is small. However, there is no mathematical analysis of Eq. (9) to demonstrate how the designed importance score addresses this issue.**\\n\\nThank you for pointing that out. We conduct an empirical comparison between two approaches: (1) directly applying Monte Carlo sampling, and (2) Monte Carlo sampling with normalization based on appearance frequency (Eqn. 9). Our experiments focus on adversarial attacks, with $N$ set to 200. 
The results demonstrate that normalizing the importance scores by appearance frequency improves the performance.\\n\\n| Dataset | SST-2 | IMDB | AG-news |\\n|-----------------|-----------------|-----------------|-----------------|\\n| Without Normalization | 0.82 | 0.95 | 0.92 |\\n| With normalization | 0.87 | 0.99 | 0.96 |\\n\\n\\n**Weakness 3. In section 5.1, why not limit k < |S| instead of considering the special case that |S| < k.**\\nIn Section 5.1, we follow the standard definition of the Shapley value, which is calculated as the average of marginal contributions across all feature subsets, ranging in size from $0$ to $d-1$. This implies that for any given feature subset $S$, regardless of its size, the ensemble model should always be able to make predictions. \\n\\nHowever, by the original definition, the ensemble model cannot subsample $k$ features from the provided $|S|$ features if $|S| < k$. To address this limitation and ensure theoretical rigor, we mention this special case in our analysis.\\n\\n**Weakness 4. The importance score is calculated based on the frequency with which a feature is selected and the predicted label, meaning that two features that are occasionally selected together end up with the same importance score. In contrast, Shapley value calculations based on label probability would differentiate between these features. Consequently, the proposed ENSEMBLESHAP, which relies on this importance score, assigns identical values to these features, potentially overlooking the differences in their individual influences.**\\n\\nThank you for the question. In fact, two features that are occasionally selected together will not have the same importance score. They will only have identical scores if they are always selected together in every subsampled feature group. However, the probability of this occurring decreases exponentially as the number of subsamples increases. 
\\n\\nAs we prove in Section 5.3, our method is order-consistent with the Shapley value when a large number of feature groups are subsampled. In other words, if the Shapley value can differentiate between these features, our method can do so as well.\\n\\n**Weakness 5. The authors claim that the proposed method is computationally efficient. However, there is a lack of analysis regarding its complexity and the associated time costs.**\\n\\nGiven that the random subspace method is already deployed, our method introduces negligible additional computational time (less than 3 seconds) for the explanation.\"}", "{\"comment\": \"Thanks for the feedback!\\n\\n**Weakness 1. Unclear presentation.** \\nThe general approach of bagging or subsampling the feature set was, to the best of our knowledge, originally introduced to enhance the performance of decision trees. Therefore we follow the naming. We will provide additional clarification of this terminology in our paper.\\n\\n**Weakness 2. Lack of empirical computational complexity analysis.** \\nWe note that our method serves as a post-hoc explanation for the random subspace method. This means the defender first makes predictions using the random subspace method (which involves a large number of base model queries to improve robustness), and then our method provides an explanation for those predictions. Our key claim is that our approach incurs negligible additional computational cost (less than 3 seconds to compute Eqn.9), as it reuses the computational byproducts already generated during the deployment of the random subspace method. In contrast, other feature attribution methods require additional computation time because they are not specifically tailored to the random subspace method.\\n\\n**Weakness 3. No comparison with other efficient Shapley values estimation techniques.** \\nGiven that the RSM is already deployed, our method introduces negligible additional computational overhead (less than 3 seconds). 
FastSHAP can be seen as an extension of LIME, which needs to build up a training dataset for the explainer model. When applied directly to the random subspace method, it needs significant additional computational time.\\n\\n**Weakness 4. The formal algorithm is missing.** \\nSorry for the confusion. Eqn.9 in Section 4 provides the formal calculation of our method with \\u201cMonte Carlo\\u201d sampling. We use binary search to solve the presented optimization problem in Section 5.4. We will provide clarification on this in our paper.\\n\\n**Weakness 5. No further discussion of the certified detection rate results. The paper should be more self-contained.**\\nThank you for pointing this out. The explanation for the experimental results on the certified detection rate was short due to space constraints. From our experiments, we observed that the certified detection rate is insensitive to the total number of features (e.g., IMDB has much longer input sequences than AG-News, but both exhibit similar certified detection rates). However, the certified detection rate is strongly influenced by the ratio of subsampled features to the total number of features, as shown in Figure 17 in the Appendix. Specifically, a smaller subsampling ratio (or larger dropping ratio) improves the certified detection rate. This occurs because each adversarial feature influences fewer subsampled groups, making it harder for these features to change the predicted label without being detected.\\n\\nRegarding the plateau observed after a few \\\"e,\\\" we notice a turning point when \\\"e\\\" reaches the number of adversarial features \\\"T.\\\" Beyond this point, the number of reported features \\\"e\\\" is no longer the bottleneck. 
Instead, the certified detection rate depends more on \"T.\" When \"T\" is large, in the worst case, the attacker can distribute the influence of perturbed features more evenly toward the target label, making each adversarial feature contribute less to the target label. Since there are benign features that may inadvertently contribute to the target label, these perturbed features become hard to detect.\\n\\nSome experimental details are in the appendix due to space constraints. We will further refine our paper.\\n\\n**Question 1. Why did the authors not provide a comparison with other efficient methods for Shapley values estimation, like FastSHAP [1]?** \\nPlease refer to Weakness 3 for details. In summary, our method repurposes the computational byproducts generated during the ensemble model's predictions to provide explanations, with no additional computational cost. Additionally, we theoretically demonstrate that our explanation is order-consistent with the Shapley value for the ensemble model, which is hard to compute.\\n\\n**Question 2. What approach is used for solving the optimization problem stated in Sect. 5.4?** \\n\\nWe use binary search to solve the optimization problem in Section 5.4. We will provide clarification on this in our paper.\\n\\n**Question 3. What rationale is behind selecting the ICL as a baseline? How is it adapted for feature attribution?** \\n\\nWe think it would be interesting to leverage the in-context learning capabilities of large language models (LLMs) to build the explainer, as LLMs are becoming increasingly powerful and widely adopted. For this purpose, we use the GPT-3.5-turbo model. The prompt includes an in-context learning dataset, where the inputs consist of indexes of subsampled features, and the labels correspond to the predictions of the base model based on these subsampled features. 
The prompt includes an instruction guiding the LLM to output a ranked list of a certain fraction (e.g., 20%) of the most important features.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your response. The authors have addressed most of my concerns (although I am still not fully convinced about the robustness aspect). I have increased my score accordingly.\"}", "{\"comment\": \"Thank you for providing additional feedback and sharing your concerns with us. We appreciate your thoughtful review and recognition of our efforts.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces EnsembleSHAP, a novel feature attribution method designed for random subspace methods. Compared with existing feature attribution techniques like Shapley values or LIME, the proposed EnsembleSHAP is both computationally efficient and intrinsically secure. Moreover, it provides a certified defense against various attacks.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe proposed EnsembleSHAP addresses the security gap in existing feature attribution methods, offering certified defenses.\\n2.\\tThe authors carry out empirical evaluations to assess the effectiveness of their explanations across various security applications of the feature attribution method.\", \"weaknesses\": \"1.\\tIn section 4, the importance scores for each feature within a given feature group are equal. This approach is overly simplistic and fails to reasonably capture the differences in importance among the various features.\\n2.\\tIn section 4, the author highlights an issue where variations in appearance frequency can lead to an unfair assessment of feature importance when the sample size N is small. However, there is no mathematical analysis of Eq. 
(9) to demonstrate how the designed importance score addresses this issue.\\n3.\\tIn section 5.1, why not limit k < |S| instead of considering the special case that |S| < k.\\n4.\\tThe importance score is calculated based on the frequency with which a feature is selected and the predicted label, meaning that two features that are occasionally selected together end up with the same importance score. In contrast, Shapley value calculations based on label probability would differentiate between these features. Consequently, the proposed ENSEMBLESHAP, which relies on this importance score, assigns identical values to these features, potentially overlooking the differences in their individual influences. \\n5.\\tThe authors claim that the proposed method is computationally efficient. However, there is a lack of analysis regarding its complexity and the associated time costs.\", \"questions\": \"Please help to check the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I acknowledge that I have read the response.\"}", "{\"title\": \"Follow-up questions\", \"comment\": \"Thank you for your response to my previous questions. I have two follow-up questions:\\n\\n1. The first question is regarding your design choice in EnsembleSHAP. Specifically, the method uses the indicator function to attribute feature importance, rather than utilizing the class probabilities computed by the model. Using the indicator function will potentially overestimate or underestimate certain features. Could you clarify why the indicator function was chosen over class probabilities?\\n\\n2. The second question is regarding the practicality of the detection mechanism. While EnsembleSHAP offers certified detection of adversarially modified features, it seems that this detection is only effective if the defender already knows that an attack has occurred. 
In practical scenarios, however, defenders may not have prior knowledge of an attack and might rely on mechanisms to detect anomalous or adversarial inputs before applying feature attribution methods. Could you clarify how EnsembleSHAP could be used in scenarios where the defender does not know whether the data has been attacked? Does the method rely on external tools or assumptions, such as baseline comparisons or anomaly detection, to identify suspect inputs for analysis? If so, how might these external mechanisms interact with EnsembleSHAP to form a complete detection pipeline?\"}", "{\"summary\": \"This paper proposes EnsembleSHAP, a feature attribution method based on the well-known random subspace method, which is claimed to be computationally efficient and preserves fundamental properties of Shapley values. The method provides certifiable robustness against explanation-preserving attacks to language models, as theoretically shown.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Relevant topic;\", \"Theoretical analysis;\", \"Rich model and attack types considered in the experiments.\"], \"weaknesses\": \"- Unclear presentation;\\n- No empirical evaluation of the computational complexity;\\n- Lack of comparison with other efficient feature attribution methods;\\n- The proposed algorithm is not formally stated but only described verbally.\\n\\n**Comments.**\\n\\n**Unclear presentation.** Presentation needs substantial improvement. One unclear point to me is that the random subspace method proposed by T. K. Ho does not work in the way it is used in this paper, as far as my understanding of this work is concerned. The random subspace method creates distinct training sets by bagging and subsampling the feature set in each round, and it's the basic method used to train random forests. I don't see how it is directly applied in this work (at least, it's unclear how it's applied at training vs test time). 
It was originally proposed to boost the performance of classifier ensembles, and it had nothing to do with security issues. This should also be clarified in the paper; I suspect it is only the recent developments that used this method to obtain certified robustness via randomization (a la randomized smoothing). \\n\\n**Lack of empirical computational complexity analysis.** The authors did not provide any evaluation of the computational complexity required for computing the importance scores with the proposed method, nor did they provide information on what algorithm they used for estimating the standard Shapley values. I don't buy that this method is computationally efficient if it requires sampling as many as 10,000 different inputs before providing a prediction.\\n\\n**No comparison with other efficient Shapley values estimation techniques.** Other methods have been previously proposed for efficient Shapley value estimation; despite this, the authors did not provide a comparison with them, e.g., FastSHAP [1].\\n\\n**Formal algorithm is missing.** In Sect. 4, there is no actual definition of the algorithm. Instead, a description of the used methods is given in words, such as \\u201cMonte Carlo\\u201d sampling or the approximation of the defined importance score. The approach to solving the presented optimization problem has not been reported.\\n\\n**No further discussion of the certified detection rate results.** In Sect. 6.3 the plot of the certified detection rate against the top-e important features is reported. However, there is no discussion of the obtained results, of the total number of considered features, or of why the detection reaches a plateau after a few \\u201ce\\u201d. This requires further elaboration.\\nMoreover, the experiments on jailbreaking, the motivation behind the choice of the hyperparameters, and other relevant experiments are confined to the appendix. 
The authors should reconsider that to make the paper more self-contained.\\n\\n[1] Jethani, N., Sudarshan, M., Covert, I. C., Lee, S.-I., & Ranganath, R. (2022). FastSHAP: Real-Time Shapley Value Estimation. International Conference on Learning Representations. Retrieved from https://openreview.net/forum?id=Zq2G_VTV53T\", \"questions\": \"1. Why did the authors not provide a comparison with other efficient methods for Shapley values estimation, like FastSHAP [1]? Is it because they are still inefficient when applied to random subspace methods?\\n\\n2. What approach is used for solving the optimization problem stated in Sect. 5.4?\\n\\n3. What is the rationale behind selecting the ICL [2] method as a baseline? How did the authors adapt it to work as a feature attribution method?\\n\\n\\n[2] Nicholas Kroeger, Dan Ley, Satyapriya Krishna, Chirag Agarwal, and Himabindu Lakkaraju. Are large language models post hoc explainers? arXiv preprint arXiv:2310.05797, 2023\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your thoughtful reply. We have addressed the questions as follows:\\n\\n**Why use indicator function instead of class probabilities?**\\n\\nWe use an indicator function in our method to ensure its applicability to a wider range of classifiers, including black-box classifiers that do not output class probabilities. For example, consider the application of our method to RA-LLM, a defense mechanism against jailbreaking attacks. The base classifier in RA-LLM is an alignment-check function that predicts the label \\\"harmful\\\" if the LLM's output (based on subsampled input text) includes phrases such as \\\"I am sorry, but I cannot answer this ...\\\", and predicts \\\"non-harmful\\\" otherwise. RA-LLM then aggregates these predictions across all subsampled texts to make the final decision (e.g., whether to reject or not). 
In such cases, using an indicator function allows our method to be more broadly applicable.\\n\\n**Practicality of the detection mechanism. Could you clarify how EnsembleSHAP could be used in scenarios where the defender does not know whether the data has been attacked? Does the method rely on external tools or assumptions, such as baseline comparisons or anomaly detection, to identify suspect inputs for analysis? If so, how might these external mechanisms interact with EnsembleSHAP to form a complete detection pipeline?**\\n\\nThe detection mechanism of EnsembleSHAP can be used as a post-attack forensic analysis tool within a defense pipeline. Specifically, a defender can first employ prevention or detection-based methods, such as anomaly detection, to identify whether an attack has occurred. Once an attack is detected, EnsembleSHAP can then be utilized to trace the root cause of the attack. Notably, even without integration with external tools, EnsembleSHAP can still be directly applied as a reliable explainer for the outputs of the ensemble model.\"}", "{\"comment\": \"We thank the reviewer for the constructive comments.\\n\\n**Weakness 1. EnsembleSHAP is designed specifically for random subspace methods, which could limit its generalizability to other ensemble methods or broader feature attribution applications that do not involve subsampling.**\\n\\nRandom subspace methods have a wide range of applications. In this paper, we focus on their use for defending against attacks in the input space. Additionally, random subspace methods have been applied to differential privacy (Liu et al., 2020), to facilitate machine unlearning (Bourtoule et al., 2021), and to build robust models that withstand poisoning attacks (Jia et al., 2021). Exploring the potential of EnsembleSHAP in these and other applications of random subspace methods represents an intriguing direction for future research.\\n\\n**Weakness 2. 
The efficiency claim is not well studied in the experimental section.**\\n\\nOur method serves as a post-hoc explanation technique for the random subspace method, leveraging computational byproducts while introducing negligible costs (less than 3 seconds). Improving the computational efficiency of the random subspace method itself remains an open challenge (Jia et al., 2021; Zhang et al., 2023; Zeng et al., 2023).\\n\\n**Weakness 3. The certified detection theorem and detection strategy are not clearly explained, making it difficult for readers to fully understand the approach and its guarantees.**\\n\\nThank you for pointing that out. The explanation of the certified detection theorem is brief due to space constraints. The theorem states that if an attacker modifies $T$ features of the original testing input $x$ to alter the predicted label of the ensemble classifier, our method can guarantee that $D(x, T)$ of these modified features will be detected as the most important features.\\n\\nTo provide some intuition, consider an extreme case where the subsampling size $k$ is 1, meaning each feature gets a single vote. Suppose there are 5 features in total, and all features initially vote for the correct label before the attack. To flip the predicted label, the attacker would need to change the predictions of at least 3 features to the target label. In this scenario, the contributions of these 3 modified features to the target label would surpass those of the unmodified features (which contribute nothing to the target label). Therefore, if we report the top-3 most important features for the predicted label (after the attack), these 3 modified features are provably reported.\\n\\nThe guarantee $D(x, T)$ depends on several factors in the general case:\\n\\n1. 
Confidence of the ensemble model's prediction before the attack: If the ensemble model is highly confident, the modified features must influence a greater number of subsampled groups to alter the prediction, making them more detectable.\\n\\n2. Subsampling ratio: A smaller subsampling ratio enhances certified detection performance.\\n\\n3. $T$: A larger $T$ makes certified detection more challenging.\\n\\nThese factors collectively impact the effectiveness of the certified detection.\\n\\n**Weakness 4. The method\\u2019s assumptions about limited modifications to input features may not hold, where an attacker might poison the entire input space or apply more complex poisoning strategies.**\\n\\nOur certified detection guarantee holds for any complex poisoning strategy, provided the number of poisoned features is bounded. However, we acknowledge that certified detection becomes increasingly difficult as the number of perturbed features grows. This is because adversarial features can collectively influence the target label, reducing the individual contribution of each adversarial feature to a level that becomes indistinguishable from benign features (which may also inadvertently contribute to the target label). It is worth noting that this challenge also exists in certified defenses. Indeed, existing certified defenses (Jia et al., 2021; Zhang et al., 2023; Zeng et al., 2023) cannot certify for >10% of adversarial features. In practice, our method is still effective when 20% of the input features are poisoned (in adversarial attacks).\\n\\n**Weakness 5. Some attacks evaluated are somewhat dated, and there are newer, more sophisticated adversarial and backdoor attacks. In fact, one can even design an adaptive attack.**\\n\\nWe employ TextFooler for adversarial attacks and BadNets for backdoor attacks because they are representative and can be readily applied to ensemble models. 
Many advanced white-box adversarial attacks require substantial modifications and high computational costs to adapt to ensemble models. For black-box attacks, the design of state-of-the-art methods fundamentally aligns with TextFooler, as they all rely on a trial-and-error process for black-box optimization. Additionally, our certified detection analysis indicates that the effectiveness of our method depends largely on the total number of modified features, rather than the specific attack strategy employed.\"}" ] }
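The k = 1 voting intuition from the authors' certified-detection response above (five features, all initially voting for the correct label, three flipped by the attacker) can be sketched in a few lines. This is an illustrative toy example only — the variable and function names are ours, and it mirrors the indicator-function scoring discussed in the thread rather than the paper's actual EnsembleSHAP implementation:

```python
# Toy sketch of the certified-detection intuition: with subsampling size
# k = 1, each of the 5 features casts one "vote" toward a label.
from collections import Counter

def majority_label(votes):
    # Ensemble prediction: the label that receives the most votes.
    return Counter(votes).most_common(1)[0][0]

votes_before = [0, 0, 0, 0, 0]           # all features support label 0
modified = {0, 1, 2}                     # attacker flips three features
votes_after = [1 if i in modified else 0 for i in range(5)]

assert majority_label(votes_before) == 0
assert majority_label(votes_after) == 1  # the attack flips the prediction

# Indicator-style importance for the post-attack predicted label:
# a feature is "important" iff its vote matches that label.
pred = majority_label(votes_after)
importance = [int(v == pred) for v in votes_after]

# The three modified features are exactly the top-3 most important ones,
# so reporting the top-3 provably flags every modified feature.
top3 = sorted(range(5), key=lambda i: importance[i], reverse=True)[:3]
print(sorted(top3))  # -> [0, 1, 2]
```

As the response notes, the general-case guarantee D(x, T) weakens with larger subsampling ratios and larger T; this sketch only captures the extreme k = 1 case.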
9YNyiCJE3k
OSDA Agent: Leveraging Large Language Models for De Novo Design of Organic Structure Directing Agents
[ "Zhaolin Hu", "Yixiao Zhou", "Zhongan Wang", "Xin Li", "Weimin Yang", "Hehe Fan", "Yi Yang" ]
Zeolites are crystalline porous materials that have been widely utilized in petrochemical industries as well as sustainable chemistry areas. Synthesis of zeolites often requires small molecules termed Organic Structure Directing Agents (OSDAs), which are critical in forming the porous structure. Molecule generation models can aid the design of OSDAs, but they are limited by single functionality and lack of interactivity. Meanwhile, large language models (LLMs) such as GPT-4, as general-purpose artificial intelligence systems, excel in instruction comprehension, logical reasoning, and interactive communication. However, LLMs lack in-depth chemistry knowledge and first-principle computation capabilities, resulting in uncontrollable outcomes even after fine-tuning. In this paper, we propose OSDA Agent, an interactive OSDA design framework that leverages LLMs as the brain, coupled with computational chemistry tools. The OSDA Agent consists of three main components: the Actor, responsible for generating potential OSDA structures; the Evaluator, which assesses and scores the generated OSDAs using computational chemistry tools; and the Self-reflector, which produces reflective summaries based on the Evaluator's feedback to refine the Actor's subsequent outputs. Experiments on representative zeolite frameworks show that the generation-evaluation-reflection-refinement workflow can perform de novo design of OSDAs with higher generation quality than the pure LLM model, generating candidates consistent with experimentally validated OSDAs and optimizing known OSDAs.
[ "Large Language Model", "OSDA", "Zeolite", "Molecular Design" ]
Accept (Spotlight)
https://openreview.net/pdf?id=9YNyiCJE3k
https://openreview.net/forum?id=9YNyiCJE3k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tG0B1OqS0F", "rmVv8v6tfr", "rOieRcOxOA", "psN4zlIOqU", "nJJA5CxteD", "m980n6rCU4", "klJcHMrBEg", "kfmaqK7HN8", "az9Vaf4qxk", "YoguobW9S4", "XkXCGZjJ53", "Vr3U4dg2Ti", "U6WVg7CGcC", "JFlOmMlqNQ", "IVbtlPCckH", "Hv6B9sLdpm", "Hju73JPebr", "GQ6MaIvL43", "GIoyQNyyRx", "FaaLTgG6H4", "FAvmRoSUYl", "7qwAvbGXpA", "4ujrJ1rC5T", "37Ld9zTMvC", "2esYeIWyIr", "1LvxKU89aV" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732121316211, 1732123123558, 1732119651463, 1732122598982, 1732404045495, 1732119317837, 1730719858681, 1732121083343, 1737523582154, 1732499958891, 1732123379889, 1730650386538, 1732122305321, 1732119535873, 1732122349255, 1732693073277, 1732525849041, 1734480892427, 1732120529634, 1732122866241, 1732122117868, 1732527134780, 1732122965829, 1729473761535, 1730715790474, 1732689497723 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_WsXS" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_93jd" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_t8Dz" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_t8Dz" ], [ 
"ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Area_Chair_SLeM" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Authors" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_WsXS" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_bUa4" ], [ "ICLR.cc/2025/Conference/Submission3544/Reviewer_bUa4" ] ], "structured_content_str": [ "{\"comment\": \"## (3/3)\\n\\n# Part V: Expert Validation\\n\\n\\uff08question 3\\uff09\\n\\nWe submitted the generated molecules to several experts in the field of chemistry, who used their extensive knowledge and practical synthesis experience to evaluate the submitted molecules. The focus was on the rationality of the newly designed OSDA molecules and the feasibility of their synthesis.\\n\\nBelow are several examples of expert evaluation\\n\\ngenerated\\uff1aCC[N@+]1(C)CCCCC1\\n\\nThe generated compound and the literature compound CCCC[N+]1(C)CCCC1 [1] are both cyclic quaternary ammonium salts. The ring structures are six-membered rings and five-membered rings. These two ring structures have similar tension and stability in terms of organic chemistry, and both are stable. The N atom in both molecules contains methyl and flexible low-carbon alkyl chains. The generated one has fewer carbon atoms in the chain and is easier to synthesize.\\n\\ngenerated\\uff1aCC\\\\[N+\\\\](C)(C)C(C)C\\n\\nThe generated compound and the literature compound CCC\\\\[N+\\\\](C)(C)CCC [2] both are chain quaternary ammonium salts. They also contain the same number of carbon atoms. 
The structure difference is that one contains an n-propyl group and the other contains an isopropyl group. Yet the n-propyl group and the isopropyl group have similar properties in organic compounds. The generated compound is easy to synthesize or purchase commercially.\\n\\nGenerated: C\\\\[N+\\\\]1(C)C[C@H]2CC\\\\[C@H\\\\](CC2)C1\\nThe generated compound and the literature compound CC1(C)CC2CC(C)(C1)C\\\\[N+\\\\]2(C)C [3] are both bridged-ring quaternary ammonium salts. The bridged ring structures in the two molecules have similar tensions and stabilities in terms of organic chemistry. The N atom in both molecules contains the methyl group, and thus has similar chemical properties. The literature molecule can be transformed into the generated one by demethylation and ring cleavage reactions.\\n\\nThe experimental synthesis of these molecules requires a certain amount of time, and we are currently unable to provide specific synthesized examples. However, given that the generated molecules have been validated and endorsed by domain experts, along with their low synthesis complexity scores, we believe there is a high likelihood of successful synthesis for these molecules.\\n\\n\\n[1] Azim M M, Stark A. Ionothermal synthesis and characterisation of Mn-, Co-, Fe- and Ni-containing aluminophosphates[J]. Microporous and Mesoporous Materials, 2018, 272: 251-259.\\n\\n[2] Lee J H, Kim Y J, Ryu T, et al. Synthesis of zeolite UZM-35 and catalytic properties of copper-exchanged UZM-35 for ammonia selective catalytic reduction[J]. Applied Catalysis B: Environmental, 2017, 200: 428-438.\\n\\n[3] Wagner P, Nakagawa Y, Lee G S, et al. Guest/host relationships in the synthesis of the novel cage-based zeolites SSZ-35, SSZ-36, and SSZ-39[J]. 
Journal of the American Chemical Society, 2000, 122(2): 263-273.\"}", "{\"comment\": \"## (4/4)\\n\\n# Details of Our Method\\n\\n(Weakness [Details] and Questions 3, 5)\\n\\n## OSDA Agent framework\\n\\nOur OSDA Agent framework is designed for generating OSDA (Organic Structure-Directing Agent) molecules for a target zeolite or for improving an existing molecule. The simplified process works as follows:\\n\\n**Task Input**: When we need to design an OSDA for a specific zeolite or optimize an existing molecule, we first provide the task requirements through a carefully designed prompt. This prompt is input into a large language model (LLM), referred to as the Actor. Based on the task requirements, the LLM generates an initial OSDA molecule.\\n\\n**Molecule Evaluation**: The generated molecule is then fed into an evaluation module, called EVALUATION, which consists of several chemical tools:\\n\\n1. RDKit Screening Criteria: Based on chemical expert experience, we have designed a set of screening standards to verify and assess the generated molecule's feasibility and validity.\\n\\n2. SCScore (Synthesis Complexity Score): This score evaluates the synthetic difficulty of the molecule, considering factors such as the number of synthetic steps, reagents required, and the overall feasibility of synthesis.\\n\\n3. Binding Energy Estimation Model: This model uses multiple information fusion techniques to estimate the binding energy between the OSDA molecule and the active sites in the zeolite lattice, which is crucial for predicting stability during the zeolite synthesis process.\\n\\n**Feedback and Self-Reflection**: The evaluation results are then provided to another large language model, called the Self-reflector, which summarizes the evaluation feedback. 
This model incorporates the results with expert knowledge to generate suggestions for improving the molecule\\u2019s design.\\n\\n**Final Optimization**: The feedback is sent back to the LLM (Actor), which uses it to refine the OSDA molecule further, ensuring it meets the synthesis requirements, adheres to expert rules, and has the desired binding energy.\\n\\nThrough this iterative process, the framework is able to generate OSDA molecules that are optimized for synthesis feasibility, adhere to empirical rules, and exhibit the ideal binding energy for zeolite synthesis.\\n\\n## Binding energy estimation model\\n\\n**Importance of Binding Energy**\\n\\nIn zeolite synthesis, the organic structure-directing agent (OSDA) interacts with zeolite precursors, guiding the formation of a specific porous structure. The compatibility between OSDA and the zeolite is critical, as it determines whether the OSDA can effectively facilitate the crystallization process. This compatibility is often reflected in the binding energy: lower binding energy usually indicates stronger interactions between the OSDA and zeolite, making the OSDA more likely to succeed in synthesizing that zeolite type. Empirical evidence also shows that discovered OSDAs for specific zeolites tend to exhibit lower binding energies.\\n\\n**How the Model Works**\\n\\nOur binding energy estimation model is grounded in the fundamental formula of binding energy, which depends on the interactions among three components: the OSDA-zeolite complex, the zeolite itself, and the OSDA molecule. Here's how we estimate binding energy:\\n\\n**Generating the Complex:**\\n\\nWe use VOID to simulate multiple docking poses of the OSDA within the zeolite framework. 
The pose with the lowest energy is selected, and its energy serves as the binding energy estimate.\\n\\n**Extracting Features:**\\n\\n- Complex features: A crystal graph convolutional neural network (CGCNN) is employed to extract structural and energetic features of the docked OSDA-zeolite complex.\\n- Zeolite features: Another CGCNN captures the standalone properties of the zeolite framework.\\n- OSDA features: A pre-trained chemical transformer model extracts molecular-level features of the OSDA.\\n\\n**Feature Fusion and Prediction:**\\n\\nThe features from these three components are fused to estimate the binding energy accurately.\\nThis integrated approach ensures the estimation considers all relevant interactions, making it a robust tool for predicting OSDA compatibility with specific zeolites.\\n\\nWe greatly appreciate your valuable feedback on our manuscript and acknowledge the areas that require improvement. We are committed to addressing the issues you raised during the camera-ready revision phase and will ensure that both clarity and detail are significantly enhanced.\"}", "{\"comment\": \"## (3/3)\\n\\n# Evaluation Metrics and Their Relevance to OSDA Applications\\n\\n(Question 4)\\n\\nThe metrics we selected are specifically designed to address the key practical requirements for OSDA applications in industry, and are divided into three main categories: **validity**, **molecular similarity**, and **distribution similarity**. Additionally, we incorporate subjective expert judgment to further ensure the relevance of the generated molecules in real-world OSDA design.\\n\\n**Validity**: In decades of OSDA research, chemists have identified several empirical rules for selecting potential OSDA molecules, focusing on characteristics such as ring structures, bond types, functional groups, and elemental compositions. Molecules that do not meet these criteria are generally not considered viable candidates for OSDA design. 
Our Validity metric is aimed at quantifying how well the generated molecules satisfy these established criteria. It represents the proportion of generated molecules that conform to the structural and chemical requirements necessary for OSDA molecules, making it highly relevant for practical applications where these rules are well-established.\\n\\n**Molecular Similarity**: We use several well-established molecular similarity metrics, including BLEU, Morgan, MACCS, and RDK, which are commonly used in molecular design tasks. These metrics assess the similarity between the generated molecules and existing molecules, focusing on different aspects of the chemical structure. In traditional OSDA design, chemists often look for molecules with similar structures or functional groups to known OSDAs, so these similarity metrics align with how OSDA molecules are typically identified in practice.\\n\\n**Distribution Similarity**: For distribution similarity, we introduced two novel metrics: Molecular WHIM energy distance (ED) and Kullback-Leibler divergence (KL). These metrics are designed to evaluate the similarity in distribution between the generated molecules and existing OSDAs, particularly in terms of their geometric and topological properties. The WHIM descriptor focuses on a molecule\\u2019s geometric features, such as size, atomic distances, angles, and topological structure. Since specific zeolite pores have fixed sizes, the molecules that can bind to these pores must have similar geometric properties. 
Therefore, the WHIM energy distance and KL divergence metrics help us assess how well the generated molecules match the structural distributions of known OSDAs, which is crucial for real-world applications where the geometric compatibility of OSDAs with zeolite pores is critical.\\n\\n**Expert Judgement**: Finally, given that OSDA design still relies on some degree of experience and intuition, we also incorporated expert judgment to subjectively evaluate the generated molecules. This step ensures that our generated candidates align with practical and empirical insights from the field of OSDA design.\"}", "{\"comment\": \"## (1/4)\\n\\nThank you for recognizing the **novelty** of our work in [Algorithm use] and the **effectiveness** of our [results]. We also appreciate your constructive feedback and will address each of your concerns one by one.\\n\\n# The Importance of OSDA in Zeolite Synthesis\\n\\nOrganic Structure-Directing Agents (OSDAs) are essential in zeolite synthesis, guiding the formation of their unique porous structures. Zeolites, with their selective molecular sieving properties, are used in catalysis, adsorption, ion exchange, and gas separation. OSDAs interact with the aluminosilicate framework during synthesis to control pore size, diameter, and crystal morphology, influencing zeolite functionality. The choice of OSDA is key to tailoring zeolite properties for specific industrial and environmental applications.\\n\\n# Why do OSDAs pose a challenge to contemporary methods?\\n\\n(Weaknesses [Motivation] and Question 1)\\n\\nOSDA plays a crucial role in determining the topology of zeolites, especially in the formation of their porous structures. 
Traditional methods mainly rely on two approaches: first, using empirical experience and experimentation to search for feasible OSDA molecules within a molecular library; and second, employing computational simulations of zeolite-OSDA interactions (such as Density Functional Theory, DFT) to assist in OSDA design. Both experience-based and simulation-based traditional methods are highly time-consuming and resource-intensive. In particular, the vast chemical space of OSDA (small molecules) presents significant challenges in discovering new OSDA candidates using conventional methods.\\n\\nMoreover, due to the difficulty in finding new OSDA molecules, the existing pool of OSDA candidates is **very limited**. For example, the Jensen dataset, which collects data from papers published between 1966 and 2020, contains only **758** different OSDA molecules. Traditional machine learning generation methods typically require large datasets, but the small number of OSDA molecules makes it challenging to train effective generative models. Several traditional generative model approaches [1, 2] are typically trained using synthetic routes. This means that when designing OSDAs, information related to the synthetic pathway, such as gel chemistry, must be provided. Moreover, the limited number of different OSDA molecules in the dataset negatively impacts the model's ability to search effectively within the chemical space. Currently, Large Language Models (LLMs) have broad knowledge across related fields, and we aim to leverage the domain expertise of LLMs to overcome the data scarcity issue.\\n\\n# Why do we use an interactive, feedback-driven approach?\\n\\n(Weaknesses [Motivation] and Question 2)\\n\\nDue to the phenomenon of \\\"hallucination\\\" in large language models (LLMs), especially when performing complex tasks like molecular design, it is necessary to introduce additional chemical knowledge to help the LLMs accomplish the task more effectively. 
Through an interactive, feedback-driven approach, we professionally evaluate the molecules designed by the LLM, including factors such as OSDA empirical rules, molecular synthesis difficulty, safety, etc., and provide feedback to the LLM. This process is similar to how chemists validate through experiments and make improvements based on the experimental results.\\n\\n[1] Jensen Z, Kwon S, Schwalbe-Koda D, et al. Discovering relationships between OSDAs and zeolites through data mining and generative neural networks[J]. ACS central science, 2021, 7(5): 858-867.\\n\\n[2] Xu L, Peng X, Xi Z, et al. Predicting organic structures directing agents for zeolites with conditional deep learning generative model[J]. Chemical Engineering Science, 2023, 282: 119188.\"}", "{\"title\": \"Thank you for the responses but please include some of the explanations in the main paper\", \"comment\": \"Thank you for the very detailed responses. I now better understand the main contribution of the paper. I agree that it is relevant. Please make sure that you convey the explanation of why generating OSDAs poses a challenge to existing methods (including traditional ML) in the main paper; it much better underscores the importance of the introduced method.\\n\\nI would also suggest presenting the OSDA agent framework in the main text the same way you did in the response. It makes the framework clear. It also shows that the framework has the potential to generalize. Your contribution on binding energy estimation would benefit from being added to the appendix.\\n\\nI changed my score to Above acceptance threshold. I would score it higher, but I cannot evaluate if the Molecule Evaluation part of the framework is generalizable. Overall, like I said above and like you indirectly convey in the paper and your replies, the framework does have the potential to be used in other limited-data use-cases. I can see how it would work in other chemistry-related topics. 
But whether you could incorporate Evaluation from an entirely different area, and still harness the power of LLMs during Feedback and Self-Reflection, is unclear. If it is clear to you, please openly spell it out with your reasoning in the paper.\"}", "{\"comment\": \"## (1/3)\\n\\nWe thank you for your kind acknowledgment of our work as **unique** and **innovative**, for recognizing the **novelty** of our approach in integrating LLMs with computational chemistry tools, and for your constructive comments.\\n\\nBelow, we provide detailed answers to the reviewer's concerns.\\n\\n# Generalizability of the Proposed Method\\n\\n(Weakness 1)\\n\\nThank you for your valuable feedback. We agree that the generalizability of our proposed model is an important consideration. While the current study focuses specifically on the design of OSDAs for zeolites, we believe that the underlying methodology can be extended to other types of molecular design. The model is built upon a foundation of large language models (LLMs) trained on a diverse range of scientific texts, providing it with **fundamental chemical knowledge** that supports broader molecular design capabilities. Additionally, the LLM Agent is highly **flexible and adaptable**, capable of selecting appropriate tools based on specific design requirements. This adaptability enables the model to be applied to broader classes of materials and chemical processes, beyond zeolites and OSDAs.\\n\\n# Computational Cost and Accessibility\\n\\n(Weakness 2)\\n\\nThe computational chemistry tools we rely on\\u2014RDKit, SCScore, and VOID\\u2014are all accessible via Python, which allows for seamless integration within our framework. Furthermore, the computational chemistry component of our model is modular, meaning we have the flexibility to choose or swap tools as needed without requiring changes to the overall structure of the framework. 
This modularity ensures that the approach can adapt to future updates or replacements of the computational tools, offering flexibility and scalability.\\n\\nIn terms of computational cost, our method primarily incurs time costs related to evaluating the generated molecules, particularly when Void uses Voronoi diagrams to sample docking poses. Each OSDA-zeolite pair can generate tens to hundreds of docking configurations. Our experiments show that the evaluation time for each molecule is approximately 1 to 3 minutes.\\n\\n# Diversity and Representativeness of the Data\\n\\n(Weakness 3)\\n\\nWe understand the concern about the diversity and representativeness of the datasets used in training and validation, and we appreciate the opportunity to clarify this point. We selected the Zeolite Organic Structure Directing Agent Database (OSDA Database) and the Jensen dataset primarily because they are both widely recognized and extensively applied in the field of zeolite materials and organic structure-directing agents (OSDAs). The Jensen dataset, which compiles 5,663 zeolite synthesis pathways extracted from 1,384 publications between 1966 and 2020, is one of the most comprehensive datasets in this domain. It covers a broad range of zeolite types and provides a wide variety of OSDA molecular volumes, ensuring a certain level of diversity. Similarly, the OSDB (OSDA Database) is built upon first-principles simulations, providing composite energy data for over 500,000 zeolite-OSDA pairs. To the best of our knowledge, this is one of the most complete datasets of its kind in terms of binding energy data. 
In this paper, the types of zeolites tested in our experiments are also representative of various zeolite types used in industry.\\n\\nWhile we acknowledge that no dataset is exhaustive, we believe that these datasets are sufficiently representative of the range of zeolite types and OSDAs commonly encountered in the field, which supports the robustness and applicability of our findings. The use of these well-established datasets also ensures that our model is grounded in widely accepted data within the community.\"}", "{\"summary\": \"The paper introduces the OSDA Agent, an innovative framework combining Large Language Models (LLMs) with computational chemistry tools for the de novo design of Organic Structure Directing Agents (OSDAs) for zeolites. The framework includes three main components: Actor, Evaluator, and Self-reflector, enhancing the process of generating and optimizing OSDA molecules. It demonstrates significant improvements over existing models in generating OSDAs that are not only theoretically valid but practically feasible. It provides a comprehensive methodological approach and substantial experimental results, setting a new standard in computational chemistry and molecular design.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Integrating LLMs with computational chemistry tools to create a feedback-informed molecule generation model is unique. This model addresses the limitations of traditional molecule design models by providing an interactive and iterative design process that includes a novel binding-energy prediction module (Section 4.1).\\n2. The experimental design is robust, with clear definitions of metrics and methodological approaches that ensure the reproducibility and validity of results. The use of a layered approach combining different computational tools to assess molecule viability is particularly noteworthy (Sections 5.1 and 5.2). \\n3. 
The paper is well-structured and well-written, offering detailed explanations of the processes and technologies involved, such as the Actor-Evaluator-Self-reflector framework. Figures and tables effectively illustrate complex processes and results. \\n4. This research has a high potential impact, particularly on industries relying on zeolites. Generating and optimizing OSDAs more efficiently could lead to significant advancements in material science and related fields.\", \"weaknesses\": \"1. The paper could better address how the proposed model might generalize to other types of molecular design beyond OSDAs for zeolites. It is unclear whether the techniques and improvements reported apply to broader classes of materials or chemical processes.\\n2. The reliance on external computational chemistry tools may introduce limitations related to the scalability and speed of the proposed approach. The paper could expand on the implications of these dependencies, particularly in terms of computational cost and accessibility. \\n3. The models are trained and validated primarily using the Zeolite Organic Structure Directing Agent Database and the Jensen dataset. The diversity and representativeness of these datasets could be questioned, potentially affecting the robustness and applicability of the findings (Section 3.1).\", \"questions\": \"1. Can the methodologies and framework presented be adapted to design other types of molecules or materials not discussed in the paper? What changes or adaptations would be necessary?\\n2. What are the computational resource requirements for implementing the OSDA Agent framework, especially when scaling to larger datasets or more complex molecular structures? \\n3. How does the framework handle changes or updates in computational chemistry tools or LLMs? Is there a mechanism to easily integrate new tools or updates in knowledge of the field? \\n4. Could you elaborate on the choice of evaluation metrics used? 
How do these metrics align with the practical requirements of OSDA applications in the industry?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## (2/3)\\n\\n# Part III: Comparison of Components Across Different Large Language Models\\n\\n(Weakness 3 and Question 5)\\n\\nThe decision to use different LLM variants (GPT-4 for the Actor and GPT-4o for the Self-reflector) was motivated by the need to leverage the extensive chemical knowledge embedded in large models, especially in the design phase of OSDAs. The Actor is responsible for summarizing and synthesizing information for molecular design, and it benefits from GPT-4's advanced capabilities in this area. In addition, the reflection process plays a crucial role in refining the molecule designs, and GPT-4\u2019s performance in this phase is highly effective. To balance computational efficiency, we used GPT-4o as the Self-reflector, which offers a cost-effective solution while maintaining the required functionality for reflecting and improving the designs.\\n\\nWe acknowledge the importance of testing with open-source LLMs to enhance the generalizability of our framework. We have also demonstrated that our approach can significantly improve OSDA design capabilities using open-source models. 
Specifically, we tested two popular open-source LLMs, Mistral [1] and Llama 3.1 [2], and found that our method also yielded substantial improvements in their ability to design OSDA molecules.\\n\\n| Method | BLEU $\\\\uparrow$ | Morgan $\\\\uparrow$ | MACCS $\\\\uparrow$ | RDK $\\\\uparrow$ | ED $\\\\downarrow$ | KL Divergence $\\\\downarrow$ |\\n|--------------------------|-----------------|-------------------|------------------|----------------|-----------------|----------------------------|\\n| OSDA Agent | 0.601 | 0.368 | 0.816 | 0.624 | 0.934 | 0.825 |\\n| OSDA Agent* | 0.571 | 0.317 | 0.772 | 0.601 | 0.964 | 1.091 |\\n| Llama | 0.522 | 0.301 | 0.628 | 0.416 | 1.693 | 1.073 |\\n| OSDA Agent (Llama) | 0.551 $\\\\uparrow$ | 0.315 $\\\\uparrow$ | 0.754 $\\\\uparrow$ | 0.565 $\\\\uparrow$ | 0.901 $\\\\uparrow$ | 0.693 $\\\\uparrow$ |\\n| Mistral | 0.338 | 0.177 | 0.411 | 0.224 | 2.692 | 1.192 |\\n| OSDA Agent (Mistral) | 0.512 $\\\\uparrow$ | 0.306 $\\\\uparrow$ | 0.740 $\\\\uparrow$ | 0.541 $\\\\uparrow$ | 0.825 $\\\\uparrow$ | 0.661 $\\\\uparrow$ |\\n\\nThe OSDA Agent is our default model. The OSDA Agent* replaces the Actor with GPT-4o, while the OSDA Agent (Llama) is fully built on Llama and the OSDA Agent (Mistral) is fully built on Mistral.\\n\\nOur experimental results show that, regardless of the model used, our OSDA Agent significantly enhances the overall design results (**OSDA Agent (Llama)** vs. **Llama**, **OSDA Agent (Mistral)** vs. **Mistral**). 
However, its performance is currently still slightly inferior to GPT-4, which remains the most effective model for this specific task.\\n\\n[1] https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407\\n\\n[2] https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct\\n\\n# Part IV: Addressing the Complexity of Zeolite Structures and OSDA Size Limitations\\n\\n(Question 2)\\n\\nAs of now, there are approximately 250 types of zeolites, and our dataset includes 210 zeolite structures, covering the vast majority of zeolite frameworks. The seven zeolites designed in this paper are also representative types used in industry. Within different zeolite structures, based on experience in the chemical field, the pore structure, pore size, thermal stability, and framework type have a significant impact on OSDA selection. For new zeolite structures, we can use examples of OSDAs from similar zeolite frameworks to design new prompts based on the characteristics of the new structure. When training the binding energy model, we can consider purposeful data augmentation and perturbation to improve generalization ability.\\n\\nZeolite OSDAs generally refer to small molecules, and our OSDA Agent is currently focused on small molecules. For large molecules (with a molecular weight greater than 1,000), such as surfactants, their interaction with zeolites differs significantly from that of small molecules. Therefore, the screening criteria in Table 2 of the paper and the binding energy estimates will no longer be valid for such large molecules.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Thank you for addressing my questions\", \"comment\": \"Thank you so much for your detailed responses to my questions and concerns. 
I have a better understanding of the paper and have decided to adjust my score.\"}", "{\"title\": \"Summary of Revision\", \"comment\": \"Dear Chairs and Reviewers,\\n\\nWe would like to thank the reviewers for their careful and constructive comments. We also thank the reviewers for acknowledging that our work is original, effective (93jd, t8Dz, WsXS), novel (bUa4, 93jd), and has great potential (93jd). The paper has been revised in accordance with the reviewers\u2019 comments and suggestions. Updates and changes are marked in blue in the revised version. The major changes in this revision lie in the following aspects:\", \"appendix_b\": \"Ablation Study\", \"appendix_g\": \"Evaluation of Alternative LLM Components\", \"appendix_h\": \"Supplementing the Background and Motivation\", \"appendix_j\": \"Definition and Explanation of Terms and Concepts in the Paper.\\n\\nShould you need further information, please let us know. We look forward to hearing from you soon.\\n\\nYours sincerely,\\n\\nAuthors of Paper \u201cOSDA Agent: Leveraging Large Language Models for De Novo Design of Organic Structure Directing Agents\u201d\"}", "{\"summary\": \"This paper introduces the OSDA Agent, an interactive framework specifically made for designing organic structure directing agents (OSDAs), which are critical for zeolite synthesis. The main goal of the OSDA Agent is to perform de novo design of OSDAs with given zeolites as the target. This framework integrates computational chemistry tools into LLMs to check the quality of generated molecules and includes a reflection mechanism to improve OSDA generation. The OSDA Agent proposed in this work contains three key components: Actor, Evaluator, and Self-reflector. This novel framework design creates a continuous learning environment that would facilitate more efficient and effective zeolite synthesis planning. 
Experimental results from the paper demonstrate that the OSDA Agent yields better-quality results compared to existing baseline models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Novel chemistry LLM agent for molecule generation with a reflection mechanism design: the self-reflection modules in the OSDA Agent reuse feedback from previous iterations to optimize future trials, and this effectively enhances the agent's decision-making ability.\\n2. Tools to evaluate the quality of generated molecules: this model also includes three chemical tools to check whether the generated chemical is valid and feasible, by checking the validity with RDKit and the synthetic complexity. To estimate the binding energy, the authors also train their own binding energy estimation model that has lower computational complexity compared to existing computational methods.\", \"weaknesses\": \"1. Limited post-generation check: this agent framework includes three chemical property checks, and those would be useful for checking whether the generated chemical is valid or reasonable; however, it does not consider other properties such as toxicity and explosiveness, or the safety of the chemical in general.\\n2. see questions below\", \"questions\": \"1. Robustness of the input to the agent model: as there are multiple ways to represent a chemical, does this model support multiple chemical expressions, e.g., chemical formulas like H2O and IUPAC names like oxidane? The 'Evaluator' in the agent model would convert the input target zeolite into a SMILES string, but have you considered changing the chemical expression to see if the 'Evaluator' still works?\\n2. Accuracy of binding energy estimation model: this work proposes a new binding energy estimation model to lower the computational complexity of previous computational tools, but how accurate is this estimation model, and is the estimation from this model comparable to traditional atomic simulation methods? \\n3. 
LLM consideration: this work considers OpenAI's GPT models as the base LLM for the agent, and they are not open-source models and would be potentially costly to use; is there any reason that the authors did not also consider open-source models like Llama and Mistral, or would you consider experimenting with those open-source models? \\n4. Performances of the baseline models: I am curious about performance compared to baseline models; specifically, for the 'validity' metric the authors proposed, how is it calculated, and is it based on results from multiple computational models? This question stems from the result showing that one of the baseline models has a validity score of 0.00 for both datasets whereas the proposed model has a score of 100. \\n5. Memory and cost: this agent model includes a reflection mechanism, and the paper also shows the number-of-reflections vs. SCScore plot, showing that 4 reflections would reduce the synthesis difficulty; would having more reflections yield even better performance, and what is the cost of reflections?\\n6. Safety concerns: do GPT models always fulfill users' requests? Is there any case where the model refuses to fulfill the request or gives out a warning due to safety concerns about the generation process or the chemical itself, or any case where the model fails to generate the answer due to its limited knowledge of certain uncommon chemicals?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## (2/3)\\n\\n# Question 3: Exploring Open-Source LLMs\\n\\nWe appreciate your suggestion to explore open-source models such as Llama [1] and Mistral [2]. 
We did indeed experiment with both models and compared their performance against OpenAI's GPT-4 in the context of our OSDA Agent for chemical design tasks.\\n\\n| Method | BLEU $\\\\uparrow$ | Morgan $\\\\uparrow$ | MACCS $\\\\uparrow$ | RDK $\\\\uparrow$ | ED $\\\\downarrow$ | KL Divergence $\\\\downarrow$ |\\n|--------------------------|-----------------|-------------------|------------------|----------------|-----------------|----------------------------|\\n| OSDA Agent | 0.601 | 0.368 | 0.816 | 0.624 | 0.934 | 0.825 |\\n| Llama | 0.522 | 0.301 | 0.628 | 0.416 | 1.693 | 1.073 |\\n| OSDA Agent (Llama) | 0.551 $\\\\uparrow$ | 0.315 $\\\\uparrow$ | 0.755 $\\\\uparrow$ | 0.565 $\\\\uparrow$ | 0.901 $\\\\uparrow$ | 0.693 $\\\\uparrow$ |\\n| Mistral | 0.338 | 0.177 | 0.411 | 0.224 | 2.692 | 1.192 |\\n| OSDA Agent (Mistral) | 0.512 $\\\\uparrow$ | 0.306 $\\\\uparrow$ | 0.740 $\\\\uparrow$ | 0.541 $\\\\uparrow$ | 0.825 $\\\\uparrow$ | 0.661 $\\\\uparrow$ |\\n\\n[1] https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407\\n\\n[2] https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct\\n\\nThe OSDA Agent is our default model. The OSDA Agent (Llama) is fully built on Llama, and the OSDA Agent (Mistral) is fully built on Mistral.\\n\\nOur experimental results show that, regardless of the model used, our OSDA Agent significantly enhances the overall design results (**OSDA Agent (Llama)** vs. **Llama**, **OSDA Agent (Mistral)** vs. **Mistral**). However, its performance is currently still slightly inferior to GPT-4, which remains the most effective model for this specific task.\\n\\n# Question 4: About the Evaluation Metric \\\"Validity\\\"\\n\\nThe validity metric evaluates the proportion of generated molecules that meet a set of screening criteria derived from decades of research on OSDAs. These criteria, developed by domain experts, include empirical rules regarding molecular rings, bond types, functional groups, and elemental compositions. 
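To make the metric itself concrete, validity is simply the fraction of generated candidates that pass such a screen. A minimal sketch follows; note that `cn_ratio`, `passes_screen`, and the C/N window used here are hypothetical, simplified stand-ins for illustration only, not the full expert criteria of Table 2:

```python
# Minimal sketch of the validity metric: the fraction of generated
# candidates that pass a screening predicate. passes_screen is a
# hypothetical, simplified stand-in (only a C/N-ratio window); the real
# screen applies the full expert rules on rings, bond types, functional
# groups, and elemental composition.
def cn_ratio(counts):
    """C/N ratio from an {element: count} dict, or None if no nitrogen."""
    n = counts.get("N", 0)
    return counts.get("C", 0) / n if n else None

def passes_screen(counts, cn_window=(4.0, 16.0)):
    ratio = cn_ratio(counts)
    return ratio is not None and cn_window[0] <= ratio <= cn_window[1]

def validity(candidates):
    """Proportion of candidate molecules satisfying the screen."""
    if not candidates:
        return 0.0
    return sum(passes_screen(c) for c in candidates) / len(candidates)
```

For example, under the illustrative window above, `validity([{"C": 8, "N": 1}, {"C": 20, "N": 1}])` returns 0.5, since only the first candidate's C/N ratio falls inside the window.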
Chemists typically avoid attempting to use molecules that do not satisfy these rules in zeolite synthesis, making this metric highly relevant for practical applications.\\n\\nRegarding performance, our approach uses chemical tools to actively screen generated molecules during the generation process. Molecules that do not meet the criteria are flagged, and the model reflects on and regenerates them, ensuring that all output molecules adhere to these screening rules. This process contributes to the high validity score achieved by our model.\\n\\nIn contrast, baseline text-based molecular design methods, such as BioT5 and MolT5, were tested using textual inputs tailored to specific zeolites and screening criteria translated into natural language. However, due to the lack of specialized domain knowledge and limited understanding of nuanced natural language descriptions, these baseline models rarely generated molecules that met the validity criteria.\"}", "{\"comment\": \"## (2/3)\\n\\n# Adapting the Methodologies and Framework for Designing Other Molecules or Materials\\n\\n(Question 1)\\n\\nWe believe that the methodologies and framework presented in the paper can indeed be adapted to design other types of molecules or materials, though some adjustments would be necessary to account for the specific requirements of different domains.\\n\\n**Prompt Engineering**: Previous studies have shown that providing large language models (LLMs) with context through In-Context Learning (ICL) and guiding the reasoning process using methods like Chain-of-Thought (CoT) can enhance their ability to solve complex problems. For designing other types of molecules or materials, it would be necessary to carefully design the prompts according to the specific problem at hand. This may require domain-specific expertise to ensure that the model is given the correct context and instructions for the new design task. 
Fine-tuning the prompts will be essential for effectively applying the model to different molecular or material design challenges.\\n\\n**Selecting Task-Specific Tools**: As LLMs can sometimes generate outputs that are factually incorrect, commonly referred to as \\\"hallucinations,\\\" it is crucial to integrate specialized chemical tools to ensure the generated results are accurate and meet the desired criteria. By incorporating targeted computational tools and databases that are tailored to the new materials or molecules being designed, we can validate the outputs and improve the reliability of the model\u2019s predictions. These tools will help filter out erroneous results and ensure that the design process is aligned with the specific requirements of the new system.\\n\\nWhile the core methodology is adaptable to other molecular or material design tasks, appropriate adjustments in prompt design and the integration of specialized chemical tools will be necessary to ensure the framework can meet the specific demands of different applications.\\n\\n# Computational Resource Requirements and Scaling to More Complex Molecules\\n\\n(Question 2)\\n\\nThe OSDA Agent framework is designed to efficiently handle typical molecular design tasks. The chemical tools we use, namely Void and SCScore, are both Python-based and primarily rely on CPU computation. However, to improve efficiency, particularly when estimating binding energies, we leverage an A6000 GPU for more intensive computations.\\n\\nFor tasks that involve generating a large number of OSDA candidate molecules, we can control the number of docking poses generated by Void, which helps manage computational load and reduce the time spent on energy estimations.\\n\\nRegarding more complex molecular structures, our observations suggest that the complexity of OSDA molecular structures has a minimal impact on the runtime of SCScore and Void. 
These tools perform relatively consistently across different levels of molecular complexity, making the framework scalable even for more intricate molecular designs.\\n\\n# Handling Updates and Integration of New Tools in the Framework\\n\\n(Question 3)\\n\\nWe recognize that the fields of computational chemistry and machine learning are evolving rapidly, and our OSDA Agent framework is designed with flexibility and extensibility in mind to accommodate this change. The LLM Agent, as well as the computational chemistry components, are built with a modular architecture that allows for easy integration of new tools and updates.\\n\\n**Modular Design for Computational Tools**: The computational chemistry portion of the framework is a standalone module. This design allows us to update or replace any specific computational tool (e.g., Void, SCScore) by simply modifying the interfaces, without requiring significant changes to the overall framework. This modular approach ensures that updates or substitutions of tools can be seamlessly incorporated, preserving the integrity of the entire system.\\n\\n**Updating Knowledge and LLMs**: In addition to accommodating updates in computational tools, we also ensure that the framework stays up-to-date with the latest developments in chemical knowledge. We achieve this by continuously expanding and updating the relevant databases. For example, new data can be added to the training sets and incorporated into the prompt engineering process to improve the model's predictive capabilities. As LLMs evolve and improve, we can integrate new versions or fine-tuned models to further enhance the system's performance.\\n\\nIn summary, the OSDA Agent framework is designed to be highly adaptable, with a modular structure that facilitates the easy integration of new computational chemistry tools and LLMs. 
This approach ensures that the framework can keep pace with technological advancements and maintain its relevance in the face of ongoing developments in both computational chemistry and machine learning.\"}", "{\"comment\": \"## (3/3)\\n\\n# Question 5: About the Reflection Count\\n\\n**Would more reflections yield better performance?**\\nBased on our observations, during the molecule optimization task, the first few reflections (typically the first four to five) effectively improve the SCScore and binding energy. This is achieved by optimizing the carbon chain length and functional group positioning, simplifying the molecular backbone, and reducing steric hindrance, while maintaining the functional characteristics of the original molecule. However, starting from the fifth to sixth iteration, the optimized molecules begin to differ significantly in structure from the original molecule, and the estimated binding energy starts to rise. This suggests that while more reflections may improve the binding energy further, they may sacrifice functional consistency with the original molecule. Therefore, we set the number of reflections to 4-5 in most of our experiments to balance performance enhancement and functional consistency.\\n\\n**What is the cost of reflections?**\\nThe primary cost of each reflection is associated with the evaluation of the generated molecules. Specifically, each reflection typically incurs a time cost of 1 to 3 minutes. In our experimental setup, each reflection involves modifying the original molecule to optimize the SCScore and binding energy, which requires evaluating the performance of the new molecule. As a result, increasing the number of reflections leads to higher computational costs, especially when multiple iterations are involved.\\n\\nIn summary, while increasing the number of reflections might improve the binding energy further, too many reflections may reduce functional consistency with the original molecule. 
Additionally, each reflection incurs a computational cost. In our experiments, we found that setting the number of reflections to 4-5 strikes a good balance between performance improvement and computational cost.\\n\\n# Question 6: About the Safety of the LLM\\n\\nTo date, we have not encountered any issues where the LLM fails to respond due to safety concerns related to the generated chemical molecules. However, we acknowledge that assessing the safety of chemical molecules\u2014such as toxicity and explosiveness\u2014requires more specialized chemical tools, like Toxtree and EXPLO5. It is difficult for LLMs to determine the toxicity or explosiveness of a molecule solely based on its molecular structure. For instance, the molecule we generated, C\\[N+\\]1(C)CC2CCC1CC2, was identified as toxic by the Toxtree tool, but the LLM did not issue a warning during the generation process. We plan to incorporate additional chemical tools in the future to assess and provide warnings regarding the safety of the generated chemical molecules.\\n\\nCurrently, we have not encountered a situation where the LLM completely refuses to generate content. While the generated molecules may not meet the requirements in various aspects, we utilize a reflection process to allow the LLM to reconsider and generate molecules that better align with the desired specifications.\"}", "{\"title\": \"Response to Reviewer bUa4\", \"comment\": \"Dear Reviewer bUa4,\\n\\nWe appreciate that our rebuttal addressed your concerns. Please let us know if you have any further questions.\"}", "{\"title\": \"Response to Reviewer t8Dz\", \"comment\": \"Dear Reviewer t8Dz,\\n\\nWe appreciate that our rebuttal addressed your concerns. Also, thank you for your support of our work! 
Please let us know if you have any further questions.\"}", "{\"metareview\": \"This paper presents an LLM-driven framework (combined with computational tools) for designing OSDA molecules for zeolite synthesis, integrating computational chemistry and a reflection component to refine candidates iteratively. The reviewers find the approach original, effective, and of practical significance. Reviewers noted that the experimental results are strong, with clear improvements over baseline methods. Some reviewers requested a clearer explanation of why OSDAs are challenging to design and how certain components contribute; the authors provided detailed clarifications, ablation results, and additional background in their rebuttal.\\n\\nOverall, the paper\u2019s contributions\u2014combining LLMs, chemistry tools, and an iterative refinement cycle\u2014represent a meaningful step forward for molecular design tasks. The paper addresses reviewer concerns adequately, and I recommend acceptance. There is additional prior work on applying LLMs, computational tools, machine learning potentials, DFT calculations, etc. in a pipelined/iterative approach for iterative search/refinement of other types of materials [1,2,3]. I recommend the authors consider including these works in their discussion.\\n\\n[1] FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions\\n[2] Fine-Tuned Language Models Generate Stable Inorganic Materials as Text\\n[3] Generative Hierarchical Materials Search\\n[4] MatterGen: a generative model for inorganic materials design\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised questions around motivation and clarity, generality and novelty of the approach, and insight into dataset limitations and choices of baseline models. 
The authors clarified the complexity and limited availability of OSDA examples, explained the significance of feedback-driven iterative design, and provided ablation studies to highlight the importance of each component. The discussions, revisions, and clarifications sufficiently resolved the major concerns.\"}", "{\"comment\": \"## (1/3)\\n\\nWe would like to express our sincere gratitude for your recognition of the **originality** and **practical significance** of our work, as well as for your acknowledgment of the **effectiveness** of the OSDA Agent framework we proposed. We also appreciate your constructive suggestions, which will be invaluable in helping us further improve and refine our approach.\\n\\nBelow, we provide detailed answers to the reviewer's concerns.\\n\\n# Part I: Ablation Study on Model Components\\n\\n(Weakness 1 and Question 4)\\n\\nIn this paper, the core of the proposed OSDA Agent method lies in the **reflection mechanisms**, where the chemical tools provide reflective information from different aspects. 
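Schematically, this design-evaluate-reflect cycle can be sketched as a simple loop with a capped number of reflections. The helper names below are hypothetical: in the actual system, the Actor and Self-reflector are LLM calls, and the Evaluator wraps RDKit validity checks, SCScore, and the binding-energy estimator.

```python
# Minimal sketch of the design-evaluate-reflect loop with a capped number
# of reflections. actor, evaluator, and reflector are hypothetical
# callables standing in for the LLM- and tool-backed components.
def design_loop(actor, evaluator, reflector, target, max_reflections=4):
    feedback = None
    best = None
    for _ in range(max_reflections + 1):
        candidate = actor(target, feedback)      # propose a molecule
        report = evaluator(candidate)            # e.g. {"ok": ..., "score": ...}
        if best is None or report["score"] > best[1]["score"]:
            best = (candidate, report)           # keep the best candidate so far
        if report["ok"]:                         # all screens passed: stop early
            break
        feedback = reflector(candidate, report)  # textual self-reflection
    return best
```

The cap on `max_reflections` mirrors the 4-5 iteration setting discussed below: beyond that point, further reflections yield diminishing returns.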
The results of the ablation study are presented below:\\n\\n| Method | Validity $\\\\uparrow$ | BLEU $\\\\uparrow$ | Morgan $\\\\uparrow$ | MACCS $\\\\uparrow$ | RDK $\\\\uparrow$ | ED $\\\\downarrow$ | KL Divergence $\\\\downarrow$ | Avg Rank |\\n|----------------------------|---------------------|-----------------|-------------------|------------------|----------------|-----------------|----------------------------|----------|\\n| OSDA Agent | **1.000** | 0.601 | 0.368 | **0.816** | **0.624** | **0.934** | **0.825** | **1.28** |\\n| Remove reflection mechanism | 0.702 | 0.581 | 0.331 | 0.782 | 0.553 | 1.359 | 0.973 | 4.57 |\\n| Remove RDKit | 0.770 | 0.593 | 0.355 | 0.751 | 0.566 | 1.233 | 0.830 | 3.42 |\\n| Remove SCScore | 1.000 | 0.570 | **0.372** | 0.802 | 0.614 | 1.256 | 1.001 | 2.85 |\\n| Remove binding energy | 1.000 | **0.627** | 0.356 | 0.787 | 0.619 | 1.275 | 0.972 | 2.42 |\\n\\nBased on the results of the ablation study, when measuring importance by average ranking, the component with the most significant impact on our method is the **reflection mechanism**. If the reflection mechanism is removed, our model degrades into using in-context learning and few-shot CoT prompt engineering for OSDA molecular design; removing this part leads to the poorest average performance. The next most impactful component is the **RDKit tool**. Furthermore, the **SCScore** (Synthetic Complexity Score) and **binding energy** models have a considerable influence on the molecular geometry (they strongly influence the WHIM energy distance and KL divergence).\\n\\nThe most common type of failure in our experiments is the **deviation from the required C/N ratio**. 
We speculate that the reason for this is that the C/N ratio is a clear and quantitative constraint in molecular design, but the LLM lacks direct numerical calculation capabilities, leading to these failures.\\n\\n# Part II: Reflection Iteration Count Setting\\n\\n(Weakness 2 and Question 1)\\n\\nThank you for your insightful comment. When choosing the number of iterations, we considered experimental outcomes, stability, and computational resource consumption. Our experiments consisted of two tasks: molecular optimization and molecular design, with the iteration count based on the following analysis.\\n\\nIn the molecular optimization task, we fixed the initial molecule and optimized its SCScore (Synthetic Complexity Score) and binding energy. In the first 4 iterations, optimization showed significant improvement, and the molecular functionality remained stable. However, from the 5th iteration onwards, the optimized molecule showed considerable structural differences from the initial molecule, indicating potential over-optimization that could deviate from the intended goal. Therefore, we limited the iterations to 4 to avoid further structural changes.\\n\\nIn the molecular design task, where no initial molecule was fixed, we relied on the OSDA Agent to design new molecules. After 4-5 iterations, the SCScore stabilized between 1.7 and 1.8, and the binding energy reached a stable level. After the 6th iteration, the SCScore began to fluctuate, and the binding energy increased, suggesting diminishing returns from further iterations. Thus, setting the iterations to 4-5 ensures stable results while avoiding unnecessary fluctuations or performance decline.\\n\\nOur OSDA molecular design framework is an automated chemical process, where time and cost are critical. Each iteration adds computational expense, with excessive iterations yielding diminishing returns. 
Thus, 4-5 iterations strike a balance between optimization and efficiency.\\n\\nIn conclusion, the choice of 4-5 iterations is based on our experimental observations, balancing stable results with computational cost, ensuring both efficiency and sustainability.\"}", "{\"comment\": \"## (2/4)\\n\\n# The Novelty of Our Method\\n\\n\\uff08Weaknesses[Clarity] and question 4\\uff09\\n\\n**About the Motivation of the Work**\\n\\nOur motivation for using an LLM Agent in OSDA molecule design stems from the challenge of **limited data** availability in this field (with only 758 distinct OSDA molecules in the dataset), which makes traditional machine learning methods difficult to train. Our approach leverages the extensive knowledge of LLMs, supported by specialized chemical tools, and employs a design-evaluation-reflection-improvement paradigm to enhance performance in OSDA molecule tasks.\\n\\nMoreover, our method is highly adaptable to other similar chemical tasks. Thanks to the flexibility of our framework, new chemical tools and reflection mechanisms can be customized to meet the specific needs of different tasks. This makes our approach a viable solution for many data-scarce generative tasks in the AI for Chemistry domain.\\n\\n**Algorithmic Innovations**\\n\\nWhile we did draw inspiration from the work of Shin et al. (2024), the reflection mechanism in our study goes beyond simple adjustments and has been customized extensively for chemical generation tasks. 
Our reflection mechanism dynamically integrates feedback from multiple complex chemical evaluation tools (e.g., SCScore, binding energy estimation). We have also demonstrated the effectiveness of the proposed reflection mechanism through ablation studies.\n\n| Method | Validity $\\uparrow$ | BLEU $\\uparrow$ | Morgan $\\uparrow$ | MACCS $\\uparrow$ | RDK $\\uparrow$ | ED $\\downarrow$ | KL Divergence $\\downarrow$ | Avg Rank |\n|----------------------------|---------------------|-----------------|-------------------|------------------|----------------|-----------------|----------------------------|----------|\n| OSDA Agent | **1.000** | 0.601 | 0.368 | **0.816** | **0.624** | **0.934** | **0.825** | **1.28** |\n| Remove reflection mechanism | 0.702 | 0.581 | 0.331 | 0.782 | 0.553 | 1.359 | 0.973 | 4.57 |\n| Remove RDKit | 0.770 | 0.593 | 0.355 | 0.751 | 0.566 | 1.233 | 0.830 | 3.42 |\n| Remove Scscore | 1.000 | 0.570 | **0.372** | 0.802 | 0.614 | 1.256 | 1.001 | 2.85 |\n| Remove binding energy | 1.000 | **0.627** | 0.356 | 0.787 | 0.619 | 1.275 | 0.972 | 2.42 |\n\nThe OSDA Agent is our default model, and removing the reflection mechanism or any individual component within the Evaluator results in worse performance. Through the *design-evaluation-reflection-improvement* paradigm, we significantly enhanced the performance of OSDA molecule generation and provided a generalizable solution for other data-scarce generative tasks in AI for Chemistry.\n\n**Representation Learning Innovations**\n\nOur approach introduces a novel representation method for binding energy estimation. While prior studies have explored the representation of crystals, zeolites, and materials, very few have investigated the representation of complexes involving organic molecules and these materials. 
Most previous research on complexes has focused on those formed between organic macromolecules (such as proteins) and small molecules.\n\nTo the best of our knowledge, the only study on Organic-Inorganic complex representation is our concurrent work, Zeoformer[1], which focuses on the coarse-grained periodicity of OSDA-zeolite complexes. In contrast, our binding energy estimation model starts from the definition of binding energy and not only extracts features from the OSDA-zeolite complex but also incorporates distinct features of both OSDA and zeolite.\n\n| Model | Binding Energy (kJ/mol Si) (MAE \u2193) |\n|----------------------------|------------------------------------|\n| Only Complex encoder | 0.469 |\n| Complex encoder + Zeolite encoder | 0.411 |\n| Complex encoder + Smiles encoder | 0.402 |\n| Full model | **0.384** |\n\nThe Complex encoder, Zeolite encoder, and Smiles encoder respectively represent the extraction of complex information, zeolite framework information, and molecular information. \nThis novel representation method enhances the accuracy and relevance of binding energy estimation and advances the study of Organic-Inorganic complexes.\n\n[1] Shen X, Wan Z, Wen L, et al. Zeoformer: Coarse-Grained Periodic Graph Transformer for OSDA-Zeolite Affinity Prediction[J]. arXiv preprint arXiv:2408.12984, 2024.\"}", "{\"comment\": \"## (1/3)\\n\\nWe thank you for recognizing the **originality** and **effectiveness** of our proposed OSDA Agent, as well as the **novelty** of the self-reflection mechanism in optimizing molecule generation. We are grateful for your constructive comments, which have contributed to improving the clarity and impact of our work.\\n\\nBelow, we provide detailed answers to the reviewer's concerns.\\n\\n# Weaknesses1\\uff1a Chemical Safety Assessment\\n\\nThank you for your valuable comments. We completely agree with the importance of safety in chemical molecule design. 
Since safety assessments require specialized chemical tools, we plan to incorporate chemical safety evaluation tools such as Toxtree[1] and EXPLO5[2] to assess the molecules we design. These tools are effective in identifying potential safety concerns, such as toxicity and explosiveness, and issuing warnings for molecules that may present risks. By integrating these tools, we aim to further refine our approach, ensuring that the generated molecules not only meet the chemical property requirements but also adhere to chemical safety standards.\n\n[1] Toxtree is a software tool used for predicting the toxicity of chemical substances through structure-activity relationship (SAR) models.\n\n[2] EXPLO5 is a software tool used for calculating the thermodynamic properties and phase behavior of chemical substances, and it can be used to assess the explosive properties of molecules.\n\n# Question1\uff1aSupport for Multiple Inputs\nCurrently, our model primarily supports converting chemical input into SMILES (Simplified Molecular Input Line Entry System) strings, as SMILES is a standardized and widely used molecular representation. This format is also more compatible with the chemical tools employed in our evaluator, such as the energy estimation models, Void and Scscore, which take SMILES as input.\n\nRegarding IUPAC names, while our current implementation does not directly handle them, we recognize the value of supporting multiple chemical expressions. We can consider integrating additional tools or leveraging the capabilities of the large language model itself to convert IUPAC names into SMILES strings, enabling flexibility in the input format.\n\nIn summary, while the evaluator currently works with SMILES input, we are open to exploring ways to expand the model's ability to handle other chemical representations in the future.\n\n# Question2\uff1aAccuracy of binding energy estimation\n\nThank you for your thoughtful question. 
To evaluate the accuracy of our binding energy estimation model, we trained and tested it using the OSDB database, where the binding energy values are derived from traditional atomic simulation methods. These values serve as our ground truth labels.\\n\\nOur model achieved a Mean Absolute Error (MAE) of approximately 0.38 kcal/mol, which demonstrates a high level of accuracy. This performance is comparable to existing binding energy estimations in the OSDA framework. For example, for AFI-type zeolites, the binding energy ranges from -3 kcal/mol to -9 kcal/mol, further demonstrating the model's robustness, particularly for this class of materials.\\n\\nWe believe this level of accuracy provides strong evidence that our model can be a reliable alternative to more computationally expensive traditional atomic simulation methods, especially in cases where reducing computational complexity is critical.\"}", "{\"title\": \"Response to Reviewer WsXS\", \"comment\": \"Dear Reviewer WsXS,\\n\\nThank you very much for your detailed feedback and for taking the time to assess our work carefully. We truly appreciate your constructive suggestions and the recognition of the relevance of our contributions.\\n\\nSince large language models (LLMs) are pre-trained on extensive corpora, we believe they have the ability to handle other similar tasks. Although the current research focuses specifically on OSDA design for zeolites, we are confident that the underlying methods can be extended to other types of molecular design. The model is based on large language models that have been trained on various scientific texts, providing foundational chemical knowledge that supports broader molecular design capabilities. Moreover, the LLM Agent offers high flexibility and adaptability, enabling it to choose appropriate tools and evaluation methods based on specific design requirements. 
This adaptability allows the model to be applied to a wider range of materials and chemical tasks beyond zeolites and OSDA. We plan to further explore applying our approach to a broader range of chemical design tasks in the future.\\n\\nWe take your comments very seriously and are committed to addressing them thoroughly in the camera-ready version.\"}", "{\"comment\": \"## (3/4)\\n\\n# Definition of terms \\n\\n\\uff08weakness [Clarity]\\uff09\\n\\nThank you for your feedback. We will clarify the meanings of the terms and acronyms used in the manuscript both in the main text and the appendix to improve readability.\\n\\nHere are the definitions of some key terms\\uff1a\\n\\n**SMILES (Simplified Molecular Input Line Entry System)**\\n\\nSMILES is a symbolic language used to represent the structure of chemical molecules. It encodes the atoms, bonds, and spatial arrangements of a molecule in a string of characters. SMILES is widely used in computational chemistry and AI for Science due to its simplicity in storing and exchanging chemical information and its ability to be parsed by computer programs.\\n\\n**In-Context Learning (ICL)**\\n\\nIn-Context Learning refers to the process where a pre-trained language model performs reasoning and generates responses based on provided examples or contextual information in the input. This is done without the need for additional model training or parameter updates, making it a form of immediate inference for a given task.\\n\\n**Chain of Thought**\\n\\nChain of Thought refers to a reasoning process where a model breaks down a complex problem into a sequence of logical steps to improve the accuracy and transparency of its reasoning. 
This approach helps in step-by-step problem-solving by guiding the model through intermediate steps.\\n\\n**RDKit**\\n\\nRDKit is an open-source toolkit for cheminformatics that provides a wide range of functionalities for molecular manipulation, feature extraction, structure visualization, and drug design. It is widely used in chemistry and bioinformatics.\\n\\n\\n**SCScore (Synthetic Complexity Score)**\\n\\nSCScore is a metric used to assess the synthetic difficulty of a chemical molecule. It considers factors such as the number of synthetic steps, reagents, and reaction conditions required to synthesize the molecule, evaluating its complexity from laboratory synthesis to industrial production. A higher score indicates greater synthetic difficulty.\\n\\n**Binding Energy**\\n\\nBinding energy refers to the interaction energy between an OSDA (organic structure-directing agent) molecule and the active site in a zeolite lattice. It is used to describe and predict the stability of the OSDA during the zeolite synthesis process.\"}", "{\"summary\": \"The authors introduce OSDA Agent, an interactive framework for designing Organic Structure Directing Agents (OSDAs) used in zeolite synthesis. The framework leverages large language models (LLMs) as the core intelligence, complemented by computational chemistry tools. 
OSDA Agent consists of three key components: the Actor (generates potential OSDA structures), the Evaluator (assesses generated OSDAs using computational tools), and the Self-reflector (produces reflective summaries to refine subsequent outputs).\", \"the_main_advantages_of_osda_agent_are\": [\"Improved generation quality compared to pure LLM models, producing candidates consistent with experimentally validated OSDAs.\", \"Integration of chemical knowledge and tools to ensure generated molecules adhere to chemical rules and are feasible.\", \"Interactive and iterative design process that leverages feedback and self-reflection\", \"Experiments demonstrate that OSDA Agent outperforms baseline methods, including state-of-the-art text-based de novo molecule generation approaches.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"[Algorithm use] The work seems to highlight the successful use of existing LLM pipelines in searching a vast space of objects of scientific discovery. In this case, the objects were chemical agents, OSDAs, but theoretically, these could also be molecules, drugs, equations, etc. It also presents the possibility of connecting LLM pipelines to more traditional algorithms or solutions which allow evaluating the outputs of LLM models and sending feedback to improve future iterations. This is valuable for the community since it shows the viability of prior work.\\n\\n[Results] Building on the previous strength, the use of the existing methods allowed authors to discover researched OSDAs as well as potentially new ones. This shows the robustness of the framework, possibly opening new avenues to zeolite synthesis. The real consequences of the introduced method are difficult to predict without expert knowledge of the chosen topic.\", \"weaknesses\": \"[Motivation] Being a non-expert in chemistry it is difficult to grasp why OSDAs pose such a challenge to contemporary methods. 
Linked to that, the description of related work could use some more detail, at least 1 example of an existing method and the way it works with a discussion on how the method presented in this work builds upon its weaknesses. The authors do try to convey that new algorithms for OSDAs are needed but it isn\u2019t clear to me why. Is it only the need to give feedback to the algorithm in order to get better outputs with the next iteration (impossible to achieve with traditional ML methods)? If so, why would it be so crucial for OSDAs? Is it that the current methods are ineffective for some subclass of OSDAs or\u2026?\n\n[Specificity] The work may be certainly worthwhile to OSDA specialists but I have doubts whether it would be of general interest. It shows how to apply an existing framework to a specific task of chemical agent construction. There seems to be limited novelty when it comes to representation learning itself or algorithmics in general. The reflection mechanism that verbalizes errors found by the Evaluator for future learning seems to be taken from a paper by Shin et al. (2024), and adjusted for this particular task, similarly to other used algorithms.\n\n[Clarity] As much as the manuscript is written well and it is possible to understand the syntax and often what is being conveyed, some of the jargon becomes wearying and makes details unintelligible. This happens in Section 3, where the data is described and seemingly one of the crucial contributions of the work \u2014 the binding energy. It is also evident in Section 4.1 \u2014the authors talk about SMILES sequences, In-Context Learning, Chain of Thought, etc. without ever defining, even intuitively, what these acronyms convey. See also RDKit, SCScore, etc. Section 4.1. would benefit from explaining intuition on the framework rather than an overview with specifics defined later.\n\n[Details] There are a lot of moving parts in the whole framework but there is little detail on how they work exactly. 
Which of the parts are new knowledge? How does the whole framework operate, from start to finish? Each step in the processing could be described in a separate subsection, to streamline presentation.\", \"questions\": \"1. Can you provide more context on why OSDAs pose such a challenge to contemporary methods?\\n2. What specific aspects of OSDA design call for an interactive, feedback-driven approach like the one you've developed? Are there certain subclasses of OSDAs that current methods struggle with?\\n3. Can you provide an overview of the working of the whole OSDA framework descriptively, without pointing to existing work?\\n4. How does this work contribute to the broader fields of representation learning or algorithmic design beyond an application of existing methods to OSDA synthesis? Can you elaborate on the novelty of your approach compared to existing frameworks?\\n5. How does the binding energy estimation model work, and why is it crucial for your approach? Could you explain this concept in more accessible terms?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces OSDA Agent, a novel framework that combines LLMs (particularly GPT-4) with computational chemistry tools to design Organic Structure Directing Agents (OSDAs) for zeolite synthesis. The framework consists of three key components:\\n\\n1. An Actor powered by GPT-4 that designs potential OSDAs using few-shot Chain of Thought prompting, In-Context Learning with OSDB database examples, and a memory component for context.\\n\\n2. An Evaluator that employs computational chemistry tools (RDKit for validity, SCScore for synthesis feasibility) and a novel binding energy model(having an MAE of 0.384 kJ/mol Si) to assess generated structures. \\n\\n3. 
A Self-Reflector using GPT-4o that provides iterative feedback based on evaluations stored in memory to improve chemical validity, synthetic feasibility, and binding energy estimations through a generation-evaluation-reflection-refinement workflow.\n\nThe authors validate their framework across multiple zeolite types, demonstrating its ability to generate chemically valid OSDA candidates outperforming the baselines across multiple similarity metrics. Additionally, the framework successfully optimizes existing OSDA molecules, reducing their synthetic complexity scores from 3.45 to 2.46 while maintaining their functional properties and keeping binding energies within desired ranges (-3.38 to -9.00 kcal/mol). The authors benchmark their experiments against other methods such as MolT5 and BioT5 using diverse metrics including validity, MACCS, and BLEU scores. Subject matter experts have confirmed that the OSDAs designed by the OSDA Agent show the potential to function effectively as structure-directing agents.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Novel OSDA framework that combines LLMs with chemical validation tools and binding energy models.\", \"Immense practical significance due to zeolites' widespread industrial applications, making efficient OSDA generation methods valuable.\", \"Strong experimental results demonstrating OSDA Agent's ability to generate competitive and sometimes superior OSDA designs over baseline methods such as MolT5, BioT5, and pure GPT-4\", \"Comprehensive evaluation using multiple metrics (validity, similarity, distribution measures) strengthened by validation from domain experts.\", \"Demonstrated capability to optimize existing OSDA structures by reducing synthetic complexity scores while maintaining functional properties.\"], \"weaknesses\": [\"Additional experiments examining the relative importance of different components (e.g., the reflector mechanism) and various tools (RDKit, 
SCScore, binding energy models), along with analysis of failure patterns (such as consistent failures in chemical validity) would strengthen the paper's findings.\", \"The paper does not specify the number of iterations used in the design-evaluate-reflect process. Additional analysis of how metrics change with iterations and when the performance plateaus would be valuable.\", \"A comparison with domain-specific fine-tuned LLMs or using LLMs other than GPT-4 could strengthen the generalizability of the work.\"], \"questions\": [\"Could you specify the number of iterations of the OSDA agent (design-evaluate-reflect iterations)? What criteria did you use to determine the optimal number of iterations?\", \"The paper demonstrates OSDA Agent's success on multiple zeolite types, how does your framework handle increasing zeolite complexity? Are there any known limitations in terms of zeolite structure complexity or OSDA size?\", \"Could you elaborate on the expert validation process? Specifically, what validation criteria were used, and have any of your generated OSDAs been experimentally synthesized?\", \"Your framework integrates multiple components (reflection mechanism, multiple aspects in the evaluator such as RDKit, SCScore, and binding energy models). Have you conducted ablation studies to understand their relative importance? For instance, how much does the reflection mechanism contribute to the final performance, and what are the most common types of failures?\", \"What motivated the choice of using different LLM variants (GPT-4 for the Actor, GPT-4o for the Self-reflector) in your framework? 
Have you conducted comparative experiments with other LLM combinations, either using the same model for both components or testing with open-source LLMs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed explanations of all my queries, questions and concerns!\"}" ] }
9Y6QWwQhF3
FoREST: Frame of Reference Evaluation in Spatial Reasoning Tasks
[ "Tanawan Premsri", "Parisa Kordjamshidi" ]
Spatial cognition is one fundamental aspect of human intelligence. A key factor in spatial cognition is understanding the frame of reference (FoR) that identifies the perspective of spatial relations. However, the AI research has paid very little attention to this concept. Specifically, there is a lack of dedicated benchmarks and in-depth experiments analyzing large language models' (LLMs) understanding of FoR. To address this issue, we introduce a new benchmark, **F**rame **o**f **R**eference **E**valuation in **S**patial Reasoning **T**asks (FoREST) to evaluate LLMs ability in understanding FoR. We evaluate the LLMs in identifying the FoR based on textual context and employ this concept in text-to-image generation. Our results reveal notable differences and biases in the FoR identification of various LLMs. Moreover, the bias in FoR interpretations impacts the LLMs' ability to generate layouts for text-to-image generation. To improve spatial comprehension of LLMs, we propose Spatial-Guided (SG) prompting, which guides the model in exploiting the types of spatial relations for a more accurate FoR identification. The SG prompting improves the overall performance of FoR identification by alleviating their bias towards specific frames of reference. Eventually, incorporating the FoR information generated by SG prompting in text-to-image leads to a more accurate visualization of the spatial configuration of objects.
[ "Spatial language", "Evaluation benchmark", "Frame of reference" ]
Reject
https://openreview.net/pdf?id=9Y6QWwQhF3
https://openreview.net/forum?id=9Y6QWwQhF3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yTb6lW1fPF", "xOnFljghaY", "u8UvXNq0gb", "ry7olEx9KN", "rr5qxCX8TT", "pUJsF5WWYP", "hOzpmPsxeB", "fsiuJSBnSG", "eKlXlCFE9T", "dZmFPVoYw7", "dH6qbQ0B3K", "aYmW2btfxr", "Z80TeNb7Yc", "WoVIOz2c1q", "W0TotdBqCF", "RJztnpnw46", "MaLUExHLrU", "I7JOKjQoJt", "DVdxEgxf3r", "DETJhXTtlT", "CU5UgzO6Vh", "CLUU9S8imm", "9PzEO5TEzX", "85nYECi4oX", "79HOhQoxAq", "6vqql6QQms", "4JuPGl9kjs", "3PlvtkzeTP" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732400992967, 1732401194699, 1732677741006, 1732316290353, 1732558426871, 1730604898908, 1732407988163, 1730103818541, 1732163341267, 1732556875356, 1732636583088, 1732163334699, 1732551423150, 1737524226335, 1734496290262, 1732901009686, 1732920261105, 1732163744346, 1733107073986, 1732407963921, 1732560210211, 1732522580010, 1732408006457, 1732409072189, 1732408053722, 1730566068506, 1730218537417, 1732920210790 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_bEBN" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_xDEW" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_xDEW" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_bEBN" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_WWK9" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12949/Area_Chair_V5Uf" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_xDEW" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_bEBN" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_m8vc" ], [ "ICLR.cc/2025/Conference/Submission12949/Reviewer_WWK9" ], [ "ICLR.cc/2025/Conference/Submission12949/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We appreciate your valuable feedback on our work. We want to address each weakness you mentioned and the question below.\\n\\n## Randomness in the selection of four frame-of-reference (FoR) classes\\n\\nWe respectfully disagree with the comment on the randomness of our selection. \\nOur selection of four frames of reference classes is based on our deep study of the related work in linguistics, as we have cited a few in the paper [1, 2]. some of the reviewers also highlighted the soundness of our choice and the theoretical support. Similar terms are used across several cognitive studies. \\nOne example is in [3]. They used the terms egocentric and allocentric, which have the same meaning as relative and intrinsic frames of reference, respectively. 
\\nThe related work section provides more examples of using similar terms.\\nFor the AI community, we also encounter some literature that use similar concepts but may be in different terms. \\nWe provide two highly relevant references in the related work section. \\nThe first one [4] uses the same terms for the frame of reference, intrinsic and relative, while the other [5] uses object-centric terms to represent the same concept as an intrinsic frame of reference. Also, below, we highlighted the papers we have already cited in the paper to support our claims regarding our choice of FOR.\\n\\n\\n[1] Stephen C. Levinson. Space in Language and Cognition: Explorations in Cognitive Diversity. Language Culture and Cognition. Cambridge University Press, 2003.\\n\\n[2] Thora Tenbrink. Reference frames of space and time in language. Journal of Pragmatics, 43(3):704\\u2013722, 2011. ISSN 0378-2166. doi: https://doi.org/10.1016/j.pragma.2010.06.020. URL https://www.sciencedirect.com/science/article/pii/S037821661000192X. The Language of Space and Time.\\n\\n[3] Francesco Ruotolo, Tina Iachini, Gennaro Ruggiero, Ineke J. M. van der Ham, and Albert Postma. Frames of reference and categorical/coordinate spatial relations in a \\u201cwhat was where\\u201d task. Experimental Brain Research, 234(9):2687\\u20132696, Sep 2016. ISSN 1432-1106. doi: 10.1007/s00221-016-4672-y. URL https://doi.org/10.1007/s00221-016-4672-y.\\n\\n[4] Fangyu Liu, Guy Emerson, and Nigel Collier. Visual spatial reasoning, 2023. Transactions of the Association for Computational Linguistics.\\n\\n[5] Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Danny Driess, Pete Florence, Dorsa Sadigh, Leonidas Guibas, and Fei Xia. Spatialvlm: Endowing vision-language models with spatial reasoning capabilities, 2024. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).\n\n## Figure 3 issue\n\nWe would appreciate it if the reviewer could rephrase their question and clarify which conclusion is not convincing. Our conclusion from this part of our research is that characterizing the frame of reference is helpful for T2I. However, when the FoR is inherently ambiguous, multiple valid images exist, and if we can characterize the FoR as much as possible, then we can get one of those valid images. \n\nFigure 3 only illustrates an ambiguity case in spatial expressions in the A-split and the possible correct images.\nIn \u201cA car is to the right of a cow,\u201d the car can be positioned to the cow's actual right or the right of the cow\u2019s location from the camera's perspective. \nWe consider both options as valid interpretations in the A-split. \nConversely, the counterpart context in C-split, such as \u201ca car to the right of a cow from the camera\u2019s perspective,\u201d has only one correct interpretation, corresponding to image (b) in Figure 3. \n\n## Experimental analysis of Llama results\nThank you for the interesting question. In fact, we think our results are insightful for understanding the original bias of the LLMs at different sizes. Llama3-70B's lower performance is observed in the zero-shot setting, where we lack control over the model\u2019s behavior. 
\\nOne explanation is that the larger models acquire stronger biases from their training examples and memorize the FoR patterns.\\nHowever, the larger model generally exhibits superior instruction-following abilities and mitigates this issue when additional instructions are provided to elaborate on the answer in more sophisticated prompt-engineering settings.\\nThis is evident in the improvements observed when employing CoT and SG-prompting.\"}", "{\"comment\": \"## Additional results\\n\\nThank you for your comment, we conducted the additional experiments, reported here on Qwen2. \\nSince Molmo is a multi-model model based on Qwen2, we anticipate its performance to be comparable. \\nWe observed that the 7B variance of Qwen2's performance is equivalent to that of Gemma2-9B. \\nConversely, the 72B variant of Qwen2 exhibits a distinct behavior compared to the other models utilized in our dataset.\\nThe model's default (zero-shot setting) interpretation aligns with the GPT family (prefer external intrinsic cases). \\nHowever, when employing a few-shot setting or CoT, the model prefers external relative cases over external intrinsic cases, resulting in exceptionally high performance. 
\\nThis is because the model assumes that most objects do not have front and back on their own and that the spatial relation is created from an outside perspective, which is opposite to the GPT.\\nNevertheless, this assumption lowers the performance in the EI of C-split.\\nOur SG prompting seems to resolve this issue, improving performance in this category and helping the model achieve SOTA results in C-split.\\n\\n| Model | ER-C-Split | EI-C-Split | II-C-Split | IR-C-Split | Avg | A-split |\\n|---------------------|-------------------|-------------------|-------------------|-------------------|-------------------|------------------|\\n| Qwen2-7B (0-shot) | 99.61 | 2.07 | 35.94 | 24.60 | 40.55 | 99.34 |\\n| Qwen2-7B (4-shot) | 34.36 \\u2193(65.25) | 65.11 \\u2191(63.04) | 89.84 \\u2191(53.91) | 89.52 \\u2191(64.92) | 69.71 \\u2191(29.16) | 61.71 |\\n| Qwen2-7B (CoT) | 53.40 \\u2193(46.20) | 78.59 \\u2191(76.52) | 100.00 \\u2191(64.06) | 49.60 \\u2191(25.00) | 70.40 \\u2191(29.85) | 61.38 |\\n| Qwen2-7B (SG) | 71.53 \\u2193(28.08) | 79.46 \\u2191(77.39) | 96.88 \\u2191(60.94) | 59.27 \\u2191(34.68) | 76.78 \\u2191(36.23) | 73.30 |\\n| Qwen2-72B (0-shot) | 60.21 | 93.70 | 85.16 | 45.16 | 71.06 | 60.21 |\\n| Qwen2-72B (4-shot) | 89.92 \\u2191(29.71) | 59.02 \\u2193(34.67) | 94.53 \\u2191(9.38) | 76.21 \\u2191(31.05) | 79.92 \\u2191(8.87) | 90.83 |\\n| Qwen2-72B (CoT) | 84.69 \\u2191(24.48) | 78.26 \\u2193(15.43) | 92.19 \\u2191(7.03) | 85.89 \\u2191(40.73) | 85.26 \\u2191(14.20) | 84.16 |\\n| Qwen2-72B (SG) | 92.93 \\u2191(32.72) | 97.39 \\u2191(3.70) | 96.09 \\u2191(10.94) | 85.08 \\u2191(39.92) | 92.87 \\u2191(21.82) | 93.84 |\\n\\n*Table: Results of Qwen2-7B and Qwen2-72B on our dataset. 
\\\"\\u2191(number)\\\" indicates improvement over 0-shot by (number), and \\\"\\u2193(number)\\\" indicates decrease compared to 0-shot by (number).*\\n\\n## LLama3 results \\n\\nOne potential explanation is that the larger models acquire biases from their training examples, which can lead to confusion in zero-shot experiments. In these experiments, we lack control over the model\\u2019s behavior except for the prompt, which remains constant across all settings.\\nThis issue can be mitigated when the model provides additional information in CoT and SG prompting, as illustrated in Table 1.\\nConversely, the smaller model experiences a decline in performance when additional explanation is necessary. One plausible explanation for this observation is that the model provided an erroneous interpretation, resulting in an inaccurate conclusion. This phenomenon may be attributed to the extended generated sequence, as the larger model does not encounter this issue.\\n\\n\\nWe hope that all responses address any confusion the reviewer raised. We hope they are convincing about our work and could increase our score. Again, we really appreciate your comment. We will let you know again when we upload the revised version.\"}", "{\"comment\": \"Thanks to the author for the feedback. Some of my concerns have been solved, but some still exist.\\n 1.I understand what you want to express in Figure 3, and I agree it is necessary to add error examples to show the importance of FoR.\\n 2. The result of Llama. I think the authors are avoiding my question. Even if CoT has a smaller impact on small models, it should not have a negative impact on the experimental results. Through the details given in the appendix, I judge that the quality of the prompt for CoT is ill-considered. Using the rough CoT as a baseline and presenting the carefully designed SG as their own method will diminish the rigor of this paper in my evaluation. 
This leads me to decrease my score.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"I would like to thank the authors for their detailed explanation and the additional experimental results. As a follow-up, could you provide further analysis or qualitative demonstrations of baseline failure cases, particularly for CoT? I remain concerned about whether the proposed SG prompting genuinely enhances LLMs\\u2019 spatial understanding or if it primarily leverages linguistic cues to categorize sentences with only superficial spatial interpretation, given that the inputs are generated from a limited set of templates. A detailed analysis of LLM outputs might help address this concern.\"}", "{\"comment\": [\"**Prompt Quality of CoT**\", \"We are sorry that the quality of our prompt for the CoT case made you more critical and led you to reduce your overall assessment of our work. Here we would like to provide a further defense of this situation.\", \"Based on the comment that we received from reviewer xDEW, we changed the prompt and re-ran the experiments. As expected, the results are sensitive to small changes in the language, so we obtained somewhat different values for some of the settings; however, the new results still indicate that our SG prompting outperforms CoT. We think the advantage comes from characterizing the type of spatial relations in SG, and this is consistently observed independently of the quality of the prompt's phrasing.\", \"To further confirm the validity of our conclusions, in obtaining the new results on Qwen in the response to reviewer bEBN, we took extra care with the quality of the prompt's phrasing and typos and used a simpler phrasing. As you can see, the results are consistent with our previous conclusions.\", \"The length of the CoT explanations is comparable to the length of the SG explanations. 
In the worst case, the difference is two sentences.\", \"A minor clarification regarding some of the typos: the example prompts in the appendix included a typo that was not in the actual prompt. \\\"The bird is accuracy left of the car\\\" was actually \\\"The bird is accurately left of the car\\\"; it seems that this spelling typo was introduced in the appendix.\", \"We hope these new pieces of information and our newly reported results are convincing enough for the reviewer to change their mind and not tie the merits of the paper to typos in the prompt.\"]}", "{\"summary\": \"This paper proposes a new Frame of Reference (FoR) comprehension task for LLMs, where the models need to identify the perspective category based on a given spatial narrative. For this task, a new benchmark, named FoREST, is generated. Using this benchmark, the paper identifies the inability and biases of various LLMs in solving this task, and proposes a new prompting technique to guide LLMs in identifying key information from the textual input, thus enhancing their performance on this task. This paper also shows how this ability can be utilized for text-to-image generation under specific spatial guidance, highlighting the potential application value of this work.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper proposes a novel task that can potentially reveal the ability of LLMs in understanding spatial concepts.\", \"weaknesses\": \"1. It is unclear if the proposed Spatial-Guided prompting technique helps \\u201creduce FoR bias in LLMs\\u201d, as claimed in the abstract in the paper, or just clarify the category terms that LLMs are tasked to identify. Since the FoR classes (external intrinsic, external relative, etc) are technical terms in cognitive studies that do not appear commonly in the internet data used for training LLMs, a clear and intuitive explanation of the terms is naturally important for solving this task. 
However, the definition of the terms provided to the LLMs is formal and not intuitive. For example, it does not clearly define what \\u201cthe referenced object\\u2019s intrinsic directions\\u201d means. This is only explained to some extent in the Spatial-Guided prompting examples, such as \\u201cthe car has direction\\u201d. What if the concepts are explained in plainer and more intuitive language? Such as:\\n\\n$\\\\textit{\\\"External intrinsic: The spatial description of an item A relative to another item B, where (1) A is not contained by B; (2) The spatial relationship description is relative to the B\\u2019s facing direction, if B has one (Example: a horse, a car. Counterexample: a box.)\\\"}$\\n\\n2. Why understanding FoR is an important problem is not articulated adequately. Since, according to the introduction in this paper, this task is more commonly seen in cognitive linguistics studies than in AI or related fields, more discussion on the potential applications of the FoR understanding ability of LLMs can help readers better understand the motivation. The text-to-image task shown in the paper is a great application, but it is on specifically designed command types. Can this ability be potentially applied to other embodied AI or robotic tasks that require strong spatial understanding capacity?\", \"questions\": \"Does the temperature setting impact the bias of LLMs on this task? The paper sets the sampling temperature to 0, and claims the bias of LLMs by showing that they more frequently produce external classes under ambiguous queries (that correspond to multiple correct categories) in Figure 4. It is possible that a low temperature limits the diversity of LLMs' responses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Failure examples\\n### Failure Examples in Chain of Thought that are resolved by SG\\n\\n\\n1. 
Context: A horse is in front of a dog from the dog's perspective\", \"cot\": \"Explanation: position of the cow is referring to the back direction of the bag, and both objects are not containing one another. Answer: external intrinsic.\", \"sg\": \"Explanation: Topological: cow is not inside of the bag. Distance: cow may have some distance from the bag according to the context. Direction: The bag, which is the relative object, does not have an intrinsic direction, so the context is likely referring to the observer's perspective that the cow is behind the bag. Answer: external relative\", \"label\": \"external relative\", \"title\": \"Failure example (1)\"}", "{\"summary\": \"Research on spatial perception capabilities in large language models is a key direction for optimizing their generation abilities. This paper introduces a benchmark for understanding frames of reference (FoR) and evaluates different LLMs to test the spatial perception capabilities. Additionally, it uses diffusion models to conduct experimental visualizations that simulate this understanding. The benchmark provides guidance on designing prompts that enhance spatial reasoning, contributing to improved text-to-image accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper focuses on a core shortcoming in large language models' text comprehension-spatial understanding, presenting a benchmark for evaluating spatial perception abilities. It summarizes potential situations in spatial perception using existing evaluation metrics. Downstream tasks incorporate text-to-image generation experiments with diffusion models to visualize the difference in spatial understanding capabilities of different LLMs. The paper is well-structured, with straightforward explanations of the methods, and employs vivid examples to enhance understanding.\", \"weaknesses\": \"However, this paper lacks logical coherence in its descriptions of various cases. 
The selection of content for the four FoR classes seems random, raising the question of whether there could be a clearer division for categorizing different spatial reasoning tasks. It remains unclear if the four cases can comprehensively cover all possible spatial reasoning tasks, and more citations are needed to support your claim. The presentation of experimental results is also insufficient; for instance, the two images in Figure 3 fail to follow the principle of controlling variables, rendering the conclusions unconvincing. Furthermore, the experimental analysis is inadequate; while the comparison between LLaMA3-8B and LLaMA3-70B results is noteworthy, the spatial understanding of LLaMA declines as parameters increase, varying across C-Split cases. This raises the question of what insights researchers can draw from this benchmark to adjust datasets to maintain or even improve spatial understanding performance. Addressing this issue is essential to the benchmark's purpose.\", \"questions\": \"I would look forward to seeing more experimental results, particularly on how different large language models perform on the benchmark. For example, how do the latest models like Qwen2 or Molmo perform in different cases? Additionally, it\\u2019s intriguing that spatial understanding may decline as parameter count increases\\u2014what could be the underlying reasons? I also noticed that the performance decreases with the use of CoT and 4-shot settings, which is puzzling. What might be causing this effect?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"## Questions\\n\\n**Does the temperature setting impact the bias of LLMs on this task?**\\n\\nIt is possible that temperature influences the bias of LLMs, particularly in the zero-shot setting. \\nTo address this, we conducted experiments with Llama3-70B. 
\\nComparing two distinct temperatures (0 and 1) revealed a change in the distribution; that is, the class frequencies sometimes changed by 10%. However, the change is not dramatic, and it seems the relative preferences for most of the categories did not change. \\nSpecifically, the model showed the same highest-frequency responses for the cow, car, and pen cases, with even higher frequencies in some settings.\\nTherefore, a high temperature does not significantly change the diversity of LLMs' responses to this task, which is an interesting result. We are going to add the related tables to the appendix of the new version due to the lack of space. \\n\\n\\n### Cow Case\\n\\n| Model | ER temp-0 | ER temp-1 | EI temp-0 | EI temp-1 | II temp-0 | II temp-1 | IR temp-0 | IR temp-1 |\\n|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|\\n| 0-shot | 75.38 | 87.12 | 23.86 | 12.50 | 0.76 | 0.13 | 0.00 | 0.25 |\\n| 4-shot | 0.00 | 15.66 | 100.00 | 84.34 | 0.00 | 0.00 | 0.00 | 0.00 |\\n| CoT | 31.82 | 49.87 | 68.18 | 49.87 | 0.00 | 0.13 | 0.00 | 0.13 |\\n| SG | 51.39 | 70.45 | 48.61 | 29.42 | 0.00 | 0.00 | 0.00 | 0.13 |\\n\\n### Box Case\\n\\n| Model | ER temp-0 | ER temp-1 | EI temp-0 | EI temp-1 | II temp-0 | II temp-1 | IR temp-0 | IR temp-1 |\\n|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|\\n| 0-shot | 22.50 | 41.67 | 77.50 | 58.33 | 0.00 | 0.13 | 0.00 | 0.25 |\\n| 4-shot | 0.00 | 0.00 | 100.00 | 100.00 | 0.00 | 0.00 | 0.00 | 0.00 |\\n| CoT | 0.00 | 5.83 | 100.00 | 94.17 | 0.00 | 0.00 | 0.00 | 0.00 |\\n| SG | 11.67 | 33.33 | 88.33 | 66.67 | 0.00 | 0.00 | 0.00 | 0.00 |\\n\\n### Car Case\\n\\n| Model | ER temp-0 | ER temp-1 | EI temp-0 | EI temp-1 | II temp-0 | II temp-1 | IR temp-0 | IR temp-1 |\\n|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|\\n| 0-shot | 55.20 | 68.24 | 49.01 | 31.15 | 0.79 | 
0.61 | 0.00 | 0.00 |\\n| 4-shot | 0.60 | 5.94 | 99.40 | 94.06 | 0.00 | 0.00 | 0.00 | 0.00 |\\n| CoT | 19.64 | 38.52 | 80.16 | 61.27 | 0.20 | 0.20 | 0.00 | 0.00 |\\n| SG | 44.25 | 56.97 | 55.75 | 43.03 | 0.00 | 0.00 | 0.00 | 0.00 |\\n\\n### Pen Case\\n\\n| Model | ER temp-0 | ER temp-1 | EI temp-0 | EI temp-1 | II temp-0 | II temp-1 | IR temp-0 | IR temp-1 |\\n|---------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|\\n| 0-shot | 90.62 | 96.88 | 9.38 | 3.12 | 0.00 | 0.61 | 0.00 | 0.00 |\\n| 4-shot | 0.00 | 7.03 | 100.00 | 92.97 | 0.00 | 0.00 | 0.00 | 0.00 |\\n| CoT | 17.19 | 28.91 | 82.81 | 71.09 | 0.20 | 0.20 | 0.00 | 0.00 |\\n| SG | 48.31 | 57.81 | 54.69 | 42.19 | 0.00 | 0.00 | 0.00 | 0.00 |\\n\\n*Table: The results between two different temperatures of Llama3-70B on the A-split of FoREST. The numbers show the percentage frequency of responses from the model.*\\n\\nWe hope that our responses address any confusion the reviewer raised, are convincing, and merit an increase in our score. Again, we really appreciate your comment. We will let you know again when we upload the revised version.\"}", "{\"comment\": \">why you use different generation models to show your result:\\n\\nSD-2.1 consistently generated the right image using the cow's intrinsic right. We wanted to show the possible variety of the solutions; that is why we used the output of multiple models. We hope this makes sense now. \\n\\n>SD-2.1 correctness: \\n\\nThe image generated by SD-2.1 is **actually correct**, if you think about it from the cow's perspective and the intrinsic right side of the cow. We are not sure what the point of confusion is; could you please clarify further? \\n\\n>how can I see the importance of FoR?\\n\\nIn Figure 3 we just showed two cases that are both correctly generated without our prompting effort with FoR information. 
We will add an example of the wrong cases generated by the same models and will demonstrate how they were fixed to generate a valid image considering the FoR. \\n\\n> LLama-* results vs Qwen-* results\\n \\nTo our understanding, all the results are consistent when we look at them as follows: \\n - Zero-shot settings reflect the original bias of the models, and depending on that bias, even large models can have lower accuracy than small models. This stems from their additional training, which can strengthen a certain bias in large models. This holds for both Llama and Qwen.\\n - CoT and 4-shot: Always increase the performance of large models significantly, due to their ability to follow instructions with a larger context. The impact is not always good for smaller models. This holds for both Llama and Qwen.\\n - SG prompting: Sharply improves the large models and is better than CoT most of the time. \\nPlease let us know which part is inconsistent and why you think the Llama3-8B results do not sound valid. \\n\\n> the actual COT and SG prompts: \\n\\nWe have a full example on page 15 of the appendix; there, the placeholder {instruction answer} contains the actual text that we show in lines 752-754 for CoT and lines 778-779 for SG. We replace that content in the full example for clarity in the appendix.\"}", "{\"comment\": \"Based on the reviewer's comment regarding the short explanation in the CoT, we conducted additional experiments; see the table below. In this experiment, we extend the CoT examples to include more information regarding the direction of the relatum to identify the FoR classes. We provide the results on Qwen2 with the new CoT explained above. According to the table, we observe some changes in the results. The results from the new CoT show that the model favors its preferred class (the relative class for Qwen2). The preference of Qwen2 is identified based on our additional results [see Tables in response to reviewer bEBN]. 
Overall, the old CoT prompt provides a better average compared to the new CoT, so this change does not influence the main conclusion of our experiments.\\n\\n| Model | ER | EI | II | IR | All |\\n|----------|--------|--------|--------|--------|--------|\\n| New CoT | 93.39 | 67.72 | 79.69 | 87.10 | 81.97 |\\n| Old CoT | 84.69 | 78.26 | 92.19 | 85.89 | 85.26 |\\n\\n**We also want to remind the reviewer of other contributions of our paper besides the SG prompting. We present the FoREST dataset to reveal the LLMs' understanding of the frame of reference, which is important for comprehending spatial language. Most current spatial benchmarks pay less attention to this aspect and assume the same frame of reference across all scenarios. Our results reveal that different LLMs interpret spatial expressions differently, which could influence the model's performance in more complex tasks.\\nWe also provide the results of a text-to-image task, which confirm our hypothesis that information regarding FoR potentially enhances the performance of the downstream task.\\nWe hope this is convincing enough for the reviewer to reconsider our paper's value and overall assessment.**\"}", "{\"comment\": \"We appreciate your valuable feedback on our work.\\nWe would like to address each weakness you mentioned and the question below.\\n\\n## Weaknesses \\n### Prompt is not well-explained\\n\\nWe agree that our prompting can be rephrased and simplified. We then experimented with your suggested phrasing. \\nWe report the results in the Table below based on your prompt. You can compare this with Table 1 in the paper. \\nAs you can see, the results are very sensitive to changes in the prompt and vary either for better or worse. However, our prompt provides a better average, so this change does not influence the main conclusion of our experiments. 
\\n\\n| Model | ER | EI | II | IR | Avg |\\n|--------------------|-----------------|-----------------|------------------|------------------|------------------|\\n| Llama3-8B (0-shot) | 48.63 | 94.02 | 78.91 | 6.45 | 57.00 |\\n| Llama3-8B (4-shot) | 53.93 \\u21915.30 | 56.85 \\u219337.17 | 100.00 \\u219121.09 | 37.90 \\u219131.45 | 62.17 \\u21915.17 |\\n| Llama3-8B (CoT) | 63.55 \\u219114.92 | 42.28 \\u219351.74 | 93.75 \\u219114.84 | 35.48 \\u219129.03 | 58.77 \\u21911.76 |\\n| Llama3-8B (SG) | 69.31 \\u219120.68 | 79.02 \\u219315.00 | 100.00 \\u219121.09 | 19.76 \\u219113.31 | 67.02 \\u219110.02 |\\n\\n*Table: Results of C-split of Llama3-8B with updated prompt. \\\"\\u2191\\\" indicates improvement over 0-shot, and \\\"\\u2193\\\" indicates decrease compared to 0-shot.*\\n\\n### Changing the bias of SG prompting\\n\\nWe apologize for the confusion about the purpose of SG prompting. \\nOur primary intention is not to change the bias as long as the model has a correct interpretation.\\nInitially, the model preferred some specific FoR that was possibly incorrect. \\nTherefore, we want to direct the model to describe related spatial relations and provide better responses, which could potentially change the inherent bias of the model towards specific classes. \\nThis, in turn, improves FoR comprehension and performance in related tasks.\\nWe will update our abstract to emphasize improving accuracy instead of a change in bias. \\n\\n### Lack of discussion regarding the potential applications of FoR understanding\\n\\nWe agree that adding the potential applications in the introduction would enhance the reader\\u2019s understanding of the motivation behind characterizing the FoR. \\nCurrently, we discuss current AI benchmarks and their lack of utilization of the FoR in their datasets in lines 42 to 50. 
\\nWe will rephrase this part to include more information on the potential problems in these applications that necessitate a comprehensive understanding of the FoR.\\nCertainly, embodied AI is one important application that will benefit from FoR comprehension, particularly when an instruction-giver and instruction-follower have different perspectives and potential variations in their spatial language and usage of FoRs. \\nThis requires the model to comprehend the dynamic change in FoR (perspective changes) in the instruction so that it can perform the task more effectively. \\nOther potential applications, such as video narrative generation and 3D scene construction based on text, can also benefit from this ability since they require the model to understand different perspectives.\\n***We are going to integrate these explanations and motivation in the new version of the paper.***\"}", "{\"comment\": \"Thank you for the explanation. All of my previous questions are resolved, while the weaknesses still remain unaddressed.\\n\\n> 1. The dataset is pure synthetic and constructed by a limited number of textual templates.\\n\\n\\\"Though this simplifies reasoning if the language model can capture the pattern, our experiments show that it can still fail to recognize the frame of reference.\\\" Do you mean failure cases indicate the LLM's spatial limitation? I can't agree with that, since this task fails to disentangle the linguistic and spatial abilities of LLMs. \\n- As pointed out by reviewer xDEW, the definition of the terms provided to the LLMs is formal and not intuitive, which might contribute to the classification error. \\n- In most of the cases, at least for humans, the expression can be solved by rules, without really understanding the spatial configuration of the scene.\\n\\n> 2. Inductive bias in textual templates might be leveraged by SG-prompting.\\n\\nThis is verified when comparing the **SG (no template)** and **SG (with template)** columns in the table provided. 
Moreover, **CoT (no template)** and **CoT (with template)** also show the bias introduced by textual templates.\\n\\n> 3. full prompts of different settings\\n\\nThere's a misunderstanding in the prompts and demonstrations. I thought the \\\"example responses\\\" in Listing 2 in Appendix C were output showcases for different settings, but they turn out to be the few-shot demonstrations. I think **the explanations in the CoT setting are unfair, using shortened descriptions, ambiguous terms, and typos. This hinders the rigor of this paper.**\\n- line 834: \\\"The bird is accuracy left of the car. Answer: internal intrinsic.\\\"\\n- line 844: \\\"which makes the back relation based on the observer\\u2019s perspective of the room\\\"\\n\\nBased on the above, I'll decrease my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces a frame of reference (FoR) comprehension task for LLMs. The paper specifically focuses on understanding the perspective of spatial relations. For example, can LLMs distinguish between \\\"a cat is to the right of the car from the car's perspective\\\" vs \\\"a cat is to the left of a car from my perspective\\\". The task is posed as a multi-class classification task. The authors test different prompting strategies for this task, and find that textual descriptions of topological relations (inside vs not), frame of reference and distances (far) can help the models perform better on the task.\\n\\nUnderstanding spatial relationships is an important skill, and the paper further demonstrates LLMs' limited understanding of spatial relationships. Using text-to-image generation to assess the spatial understanding of LLMs is also explored.\\n\\nThe reviewers actively engaged with the authors during rebuttal, but several of the reviewers' concerns remained. Specifically, reviewers are concerned that in this instantiation of the task, it is unclear if it tests the spatial understanding of LLMs or just linguistic understanding. 
Additionally, reviewers believe that further investigation and prompt engineering are required to make the experiments more sound.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers actively engaged with the authors during rebuttal. After the discussion, reviewers were still tending negative and felt several of their concerns remained unaddressed:\\n\\nThe reviewers remain concerned about whether the task setup tests the LLMs' spatial understanding, especially because the inputs are generated from a limited set of templates (xDEW, WWK9). The reviewers also felt that the lack of templates in the dataset construction makes it harder to truly understand the LLMs' capability. Reviewer bEBN also raised concerns about what insights can be drawn from the benchmark to improve LLMs' spatial understanding. The paper would benefit from more careful experimental design and from going beyond prompt engineering to fix LLMs' bias in spatial understanding.\"}", "{\"comment\": \"Thanks to the authors for providing more experimental results and failure examples. These answer some of my questions, but other main concerns remain. It is still not clear to me if the proposed task really evaluates the spatial understanding ability of LLMs or just linguistic analysis in this specific context. Furthermore, it looks like one of the major reasons for the failure of CoT is the inability to determine what is \\\"intrinsic direction\\\", which seems to be a subjective concept that is not well defined in the prompt. Example:\\n>Context: A cow is in front of a container and outside of the container\\n>\\n> CoT: Explanation: front relation is based on the container's intrinsic direction and the cow is not inside the container. Answer: external intrinsic.\\n\\nIn this case, even SG is not certain about this concept, although it makes a correct guess.\\n> SG: Explanation: Topological: cow is outside of the container. 
Distance: cow may be little bit far from the front of the container from the context. Direction: Container which is the relative object doesn't have the direction, but the context front relation is referred from observer's perspective that cow is in front of the container location. Answer: external relative. **However, it could also be interpreted as external intrinsic if we consider the container having a front direction.** Without more specific information, the safer categorization would be external relative.\\n>\\n> label: external relative\\n\\nNonetheless, this paper introduces a novel spatial task, shows bias in LLMs, and highlights the importance of prompt structure in aiding LLMs' comprehension of the task. While I would consider raising the score, my overall stance remains negative regarding the acceptance of the work.\"}", "{\"comment\": \"> Spatial understanding\\n\\n\\nOur claim is that identifying FoR (indeed based on linguistic analysis) helps better spatial understanding as measured in the downstream tasks. \\nOur approach is to analyze the spatial language better and provide the LLMs with explicit knowledge of the FoR so that they can analyze the linguistic surface and the properties of landmarks to infer the FoR and use it for spatial reasoning later. \\nSpatial reasoning can be manifested in tasks such as T2I.\\nIn T2I, placing objects in the correct relative location indicates a better spatial understanding.\\nWe show that the linguistic analysis and the knowledge of FoR extracted from SG prompting provide LLMs with a better spatial understanding when creating the spatial layout for the T2I model.\\n\\n> SG prompting\\n\\nWe want to clarify the confusion about our proposed SG approach and the CoT approach. We would like to remind you that our proposed prompting approach can also be viewed as CoT, since we only specify the important concepts for reasoning to identify the FoR. The main question is how to explain the reasoning steps for identifying FoR. 
The innovation of our approach is to characterize the important spatial concepts involved to help improve FoR comprehension. We use a baseline CoT that does not focus on these explicit concepts, and we want to show that including these concepts (type of spatial relations, topology, distance, and direction, in addition to properties of the relatum) in SG prompting (a variation of CoT) helps the model with better FoR identification.\"}", "{\"comment\": \"We highly appreciate the reviewer's recognition of our novel contribution and the theoretical support we provide in handling frames of reference in spatial language.\\nWe have already attached the supplementary material, including the dataset's textual part; we will publish the code and the visual part when the non-anonymized paper is made available to the public. \\nWe are also working on our anonymous GitHub for the review period. We will update it when we have the link.\\nAgain, we really appreciate your comment. \\nPlease feel free to share any concerns you may have that we can address to improve our rating.\"}", "{\"comment\": \"We would also like to emphasize that, in addition to proposing SG prompting to enhance frame of reference identification, the primary contribution of this paper is the evaluation framework we introduce to assess LLMs' understanding of spatial frames of reference. We demonstrate the significance of defining the FoR concept in spatial reasoning and layout generation by LLMs for downstream tasks, such as text-to-image generation. We sincerely appreciate the reviewers' time and dedication in providing detailed feedback, and we hope they will consider the multifaceted contributions of our research presented in this paper.\"}", "{\"comment\": \"We appreciate your valuable feedback on our work. We want to address each weakness you mentioned and the question below.\\n\\n\\n# Weaknesses\\n\\n## Concern about using the template\\n\\nThank you for raising the issue of the textual template. 
While we agree that it is a restricted template, we clarify the motivation and explain why it is still interesting. \\n\\nGiven the language's ambiguity, we need to add more information, such as perspectives and topology, to create a split of the dataset with unambiguous FoR classes.\\nThis information in the template helps the language model use it for spatial inference/reasoning. \\nThe templates are provided to characterize the essential elements/concepts that should be used for spatial reasoning. \\nThough this simplifies reasoning if the language model can capture the pattern, our experiments show that it can still fail to recognize the frame of reference (see Examples 1, 2, and 3, where CoT fails and is resolved by SG, and Examples 10, 11, 12, and 15 for SG failures). \\n\\nMoreover, some spatial expressions are inherently explicit and without ambiguity. \\nWe do not need to add the perspective and topology information following the template for those cases. In other words, a proportion of our data does not follow the template, and the model must also classify the FoR for those unambiguous cases. \\n\\nOur results show that SG prompting improves the FoR classification significantly when there is no template; see the table and the qualitative example below. 
Also, as you pointed out, if we can make the FoR-related concepts explicit, the models can accurately output the FoR, and the text-to-image generation models can use that information for accurate visualization.\\n\\n| Model | CoT (no template) | SG (no template) | CoT (with template) | SG (with template) |\\n|------------|-------------------|------------------|---------------------|--------------------|\\n| Gemma-9B | 2.58 | 35.51 (\\u2191 32.93) | 72.65 | 73.80 (\\u2191 1.15) |\\n| Llama3-8B | 22.22 | 36.90 (\\u2191 14.68) | 73.64 | 71.07 (\\u2193 2.57) |\\n| Llama3-70B | 19.84 | 44.64 (\\u2191 24.80) | 76.72 | 87.39 (\\u2191 10.67) |\\n| Qwen2 | 58.20 | 84.22 (\\u2191 26.02) | 88.36 | 93.86 (\\u2191 5.50) |\\n| GPT-3.5 | 1.58 | 43.25 (\\u2191 41.67) | 77.64 | 85.21 (\\u2191 7.57) |\\n| GPT-4o | 12.50 | 29.17 (\\u2191 16.67) | 87.73 | 90.74 (\\u2191 3.01) |\\n\\n*Table: The table presents the results of our C-Split experiment on our dataset. The \\u201c\\u2191\\u201d symbol indicates an improvement over the CoT baseline, while the \\u201c\\u2193\\u201d symbol denotes a decrease compared to the CoT baseline. The table is divided into two sections: one for context with templates and another for context without templates. It is important to note that the context without templates is inherently clear and does not require additional information.*\\n\\n**We will include the table in the main paper and examples in the appendix for the qualitative analysis of our experiment**\\n\\nFailure examples are in the following response.\", \"title\": \"Response (1)\"}", "{\"comment\": \"**Bias caused by templates**\\n\\nWe believe there is a misunderstanding regarding the advantage of using explicit templates in a portion of our dataset. The purpose of these templates is to disambiguate the Frame of Reference (FoR) in linguistic expressions. 
We do not think there is significant variety in linguistic utterances for specifying perspective since they typically rely on the relatum, observer, or the speaker's point of view, all of which are addressed in our templates. Therefore, we argue that our templates cover the various ways perspective can be expressed in language. Moreover, we demonstrate that explicitly expressed perspectives help language models more easily recognize the FoR class. \\nIt is important to note that this applies to **only one split of our dataset. We also include other splits where the FoR is implicit.** In cases where the FoR is not explicitly mentioned in the text, we still aim for the models to recognize all possible valid FoRs. This explains the observed bias, which we do not consider a disadvantage. Instead, we hypothesize that explicit perspective information benefits the model, and we seek to leverage this further through improved prompting techniques.\\nTherefore in our approach, we instruct the model to recognize the FoR based on object affordance (e.g., container vs. non-container, possessing intrinsic direction vs. not possessing it) and the types of relationships proposed in the SG prompting framework. We refer to the new table of Qwen as evidence (in addition to all results in our paper) that SG prompting improves accuracy by 26% for non-templated cases compared to 5.5% for templated cases. 
This difference arises because the templated cases were already straightforward; however, even for these cases, encouraging the model to focus on the type of relationship still provided improvements.\\n\\nFinally, please refer to the examples in our previous response, which illustrate the advantage of SG prompting for untemplated text.\\n\\n**We hope our contribution for both dataset synthesis, evaluation and the prompting solution is more clear here and the reviewer is more convinced about the merits and quality of our works.**\"}", "{\"comment\": \"Thanks to the authors for the detailed response. The explanation of four FoR classes setting is clearer and experiments about qwen is convincing to prove the necessity of SG.\\n The question about figure 3 is why you use different generation models to show your result. \\\"If FoR is inherently ambiguous, multiple valid images exist\\\". How do you show this case? I only see a wrong image generated by SD-2.1. \\\"If we can characterize the FoR as much as possible, then we can get one of those valid images\\\". This image is from \\\" Llam3-8B + GLIGEN\\\"? not SD-2.1? Then how can I see the importance of FoR?\\n The experimental results of Qwen-7b are consistent with my understanding of prompt engineering technology, and the improvement in 4-shot and CoT confirms this. I have some doubts about the validity of the experimental results of llama-8b, but it does not affect the proof of the innovation of this work. I did not find the prompt (only example) you used for CoT and SG in the supplementary materials, and I hope you can also supplement it.\"}", "{\"comment\": \"### Failure Examples of SG-prompting\\n\\n9. Context: A horse is in front of a fire hydrant\", \"sg\": \"Explanation: Topological: umbrella is not inside of the fire hydrant. Distance: umbrella may have some distance from the fire hydrant according to the context. 
Direction: The fire hydrant, which is the relative object, has an intrinsic front and back, and the umbrella is positioned at the back of the fire hydrant. Answer: external intrinsic\", \"label\": \"external relative\", \"title\": \"Failure example (2)\"}", "{\"comment\": \"Thank you for the quick follow-up. We provide the additional results from the C-split, which context has topology/perspective templates for clarifying the FoR ambiguity in the language.\\nAgain, these templates are added to create a split dataset with unambiguous FoR classes, given the language's possible ambiguity in FoR.\\nThis information in the template helps the language model to use this information for spatial inference/reasoning. \\n\\nOur results demonstrate that SG Prompting significantly enhances the FoR classification accuracy, particularly in the context without such a template. This is evident in the table below and the qualitative example provided. \\nFurthermore, the results from CoT suggest that the models prefer to categorize the context using linguistic cues rather than considering other spatial relations that we explicitly guide the model to consider during SG prompting.\\nSo, our SG-prompting enhances LLMs' spatial understanding rather than relying too much on linguistic clues.\\nWe also provide some qualitative examples of failure cases in response to the reviewer WWK9 (failure example (1) and failure example (2))\\n\\n| Model | CoT (no template) | SG (no template) | CoT (with template) | SG (with template) |\\n|------------|-------------------|------------------|---------------------|--------------------|\\n| Gemma-9B | 2.58 | 35.51 (\\u2191 32.93) | 72.65 | 73.80 (\\u2191 1.15) |\\n| Llama3-8B | 22.22 | 36.90 (\\u2191 14.68) | 73.64 | 71.07 (\\u2193 2.57) |\\n| Llama3-70B | 19.84 | 44.64 (\\u2191 24.80) | 76.72 | 87.39 (\\u2191 10.67) |\\n| Qwen2 | 58.20 | 84.22 (\\u2191 26.02) | 88.36 | 93.86 (\\u2191 5.50) |\\n| GPT-3.5 | 1.58 | 43.25 (\\u2191 41.67) | 77.64 | 85.21 
(\\u2191 7.57) |\\n| GPT-4o | 12.50 | 29.17 (\\u2191 16.67) | 87.73 | 90.74 (\\u2191 3.01) |\\n\\n*Table: The table presents the results of our C-Split experiment on our dataset. The \\u201c\\u2191\\u201d symbol indicates an improvement over the CoT baseline, while the \\u201c\\u2193\\u201d symbol denotes a decrease compared to the CoT baseline. The table is divided into two sections: context with templates and context without templates. It is important to note that the context without templates is inherently clear and does not require additional information.*\\n\\n**We will include table in the main paper and examples in the appendix for the qualitative analysis of our experiment**\"}", "{\"comment\": \"## Lack of full prompts for different settings\\n\\nWe apologize for not including the full prompt in the main text due to lack of space. \\nAppendix C provides all prompt settings and some explanations of each setting. \\nWe will emphasize this more in the revised version. \\n\\n## Setting for SG-prompting \\n\\nThe SG prompting follows a few-shot setting to help LLMs recognize the response patterns associated with various spatial relations (directional, topology, and distance). Sorry for the confusion. We refer to examples in the appendix in line 290. Some examples are included in Appendix C, and we intend to incorporate additional examples into the revised version of the main paper. \\n\\n## Abnormal results of GPT family\\n\\nAs shown in Figure 4, the GPT4 family mostly responds that the spatial relation originates from the relatum (intrinsic FoR class). The outperformance of GPT-3.5 is due to its bias to External relative and the fact that many ambiguous cases in A-split can fall in the external relative. We explained this in Lines 385-387 for the result of Gemma2-9B, but we will make sure this is well highlighted in the new version. 
\\n\\nOverall, the GPT family, especially GPT-4o, is biased towards assuming intrinsic direction for each object, leading to lower results in both the A-split and the EI/II of the C-split. \\nWe also manually verified this based on the responses from Chain-of-Thought and SG-prompting, which indicates that the models occasionally claim that objects without intrinsic direction, such as trees, possess a front direction while at other times stating otherwise. We clarify this using an example in the appendix.\\n\\n\\n**We hope that all responses address any confusion the reviewer raised. We hope they are convincing about our work and could increase our score. Again, we really appreciate your comment. We will let you know again when we upload the revised version.**\", \"title\": \"Response (2)\"}", "{\"summary\": \"The paper presents the FoREST benchmark, aimed at testing large language models' (LLMs) understanding of frames of reference (FoR) in spatial reasoning. FoR refers to different perspectives (intrinsic, relative, and absolute) used to describe spatial relationships. The benchmark assesses LLMs' ability to identify FoR in ambiguous and clear spatial contexts and perform text-to-image generation accordingly. Results show that LLMs have biases in FoR interpretation, impacting spatial layout accuracy. 
They also introduce Spatial-Guided prompting to improve FoR comprehension and performance in related tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"**Novel Perspective**: Introduces an innovative approach to assessing spatial perception in large models, focusing on frames of reference (FoR).\", \"**Theoretical Support**: Draws on established spatial language literature to support the motivations and foundational concepts of FoR.\", \"**Insightful Analysis**: Offers valuable insights into both FoR identification and text-to-image mapping.\"], \"weaknesses\": \"No dataset or code provided\", \"questions\": \"Could you make the datasets and code available?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new benchmark called FoREST to evaluate LLMs' spatial ability in understanding \\\"Frame of Reference\\\". Additionally, it proposes Spatial-Guided prompting to improve the FoR classification task and layout generation in text-to-image generation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The spatial ability of LLMs is an important yet underexplored research topic. A scientific benchmark would contribute to this area.\\n2. This work conducts various experiments with a range of LLMs and provides in-depth analysis. It also verifies the proposed prompting method in a text-to-image task, adding to its value in real-world applications.\\n3. The paper is well organized and written.\", \"weaknesses\": \"1. The dataset is purely synthetic and constructed from a limited number of textual templates. I have concerns about the FoR classification task given the template \\\"<locatum> <spatial relation> <relatum> <perspective>\\\". It seems hard to disentangle this task from the linguistic and common-sense reasoning of LLMs. 
For example, LLMs are able to determine whether the perspective is intrinsic or relative by analyzing the perspective template, and can analyze the topology template to determine whether the locatum is external or internal. Neither necessitates understanding the underlying spatial configuration under a specific perspective. On the contrary, the text-to-image task indeed requires the model to interpret the spatial configuration and transform the perspective to the camera's.\\n2. Again, since the dataset is synthetic and constructed by textual templates, the inductive bias might be leveraged by SG-prompting. \\n3. Lack of full prompts of different settings, such as few-shot, CoT, SG-prompting, text-to-layout and SG to layout.\", \"questions\": \"1. Is SG-prompting zero-shot or few-shot? In the T2I task, what are the examples mentioned in line 290?\\n2. In table 4, it's abnormal to see GPT-3.5 outperform GPT-4o in A-split. And why do GPT family models do exceptionally well in EI and II C-split, yet relatively badly at ER and IR C-split? It would be interesting to dive into these observations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General comment related to SG vs CoT\", \"comment\": \"We want to clarify the confusion about our proposed SG approach and the CoT approach.\\nWe would like to remind you that our proposed prompting approach can also be called CoT, since we only try to specify the important concepts for reasoning to identify FoR.\\nThe main question is how to explain the reasoning steps for identifying FoR. \\nThe innovation of our approach is to characterize the important spatial concepts involved to help improve FoR comprehension. 
\\nWe design the baseline CoT so that it does not focus on these explicit concepts, and we want to show that including these concepts (type of spatial relations, topology, distance, and direction in addition to properties of the relatum) in SG prompting (a variation of CoT) helps the model achieve better FoR identification.\"}"
] }
9Xt5TgM7Us
Seeing the part and knowing the whole: Object-Centric Learning with Inter-Feature Prediction
[ "Junhong Zou", "Xiangyu Zhu", "Zhen Lei" ]
Humans can naturally decompose scenes into understandable objects, resulting in strong visual comprehension ability. In light of this, Object-Centric Learning (OCL) seeks to explore how to construct object-level representations by encoding the information of objects in the scenes into several object vectors referred to as `slots'. Current OCL models rely on an auto-encoding paradigm that encodes the image feature into slots and reconstructs the images by composing the slots. However, reconstruction objectives alone do not guarantee that each slot exactly corresponds to a holistic object. Existing methods often fail when objects have complex appearances because the reconstruction objective cannot indicate which pixels should be assigned to the same slot. Therefore, additional regularization based on a more general prior is required. For this purpose, we draw on the gestalt ability that humans tend to complete a broken figure and perceive it as a whole, and propose the Predictive Prior, which holds that features belonging to the same object tend to be able to predict each other. We implement this prior as an external loss function, requiring the model to assign features that can predict each other to the same slot, and vice versa. With experiments on multiple datasets, we demonstrate that our model outperforms previous models by a large margin in complex environments where objects have irregular outlines and intense color changes, according to various tasks including object discovery, compositional generation, and visual question answering. Visualization results verify that our model succeeds in discovering objects holistically rather than dividing them into multiple parts, proving that the Predictive Prior gives a more general object definition. Code is available at https://anonymous.4open.science/r/PredictivePrior-32EF.
[ "Object-Centric Learning", "Self-Supervised Learning", "Computer Vision" ]
https://openreview.net/pdf?id=9Xt5TgM7Us
https://openreview.net/forum?id=9Xt5TgM7Us
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tha0a5X9GO", "ezq2viB3yZ", "N6mjNLzt6b", "IXZQDXOwWr", "E4YJ38D4h6" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730493846645, 1730199244248, 1730592244944, 1730566711173, 1731652254353 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2939/Reviewer_rLea" ], [ "ICLR.cc/2025/Conference/Submission2939/Reviewer_KKuA" ], [ "ICLR.cc/2025/Conference/Submission2939/Reviewer_y5Yh" ], [ "ICLR.cc/2025/Conference/Submission2939/Reviewer_V4gp" ], [ "ICLR.cc/2025/Conference/Submission2939/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a new regularization for constructing the holistic object slot in OCL, which is achieved by utilizing the inter-predictability among the features from different parts of the same object.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The writing is good and easy to follow.\", \"weaknesses\": \"There are some weakness and questions that required to be answered clearly:\\n\\n1) For the gestalt ability of our human, are the different parts naturally predicted according to the appearance/structure? Or, human predict the missing parts given the prior that we already have a semantic understanding? Any proof?\\n\\n2) For the inter-predictability among the features, how to deal with the semantic/object occurance in the real world? For example, given the high occurance of the keyboard and mouse, their feature could be with high inter-predictability. How to limit the inter-predictability on the component/part level?\\n\\n3) For the prediction of similarity, relative postion should be used. The utilization of absolute position is wrong, which cannot reveal the structural information among the parts. 
Moreover, it will lead to lots of noise for semantic understanding, since object semantics are position invariant.\\n\\n4) The training of the similarity prediction seems to require supervision, which is unfair to other unsupervised methods.\\n\\n5) Given that the training loss for similarity prediction only focuses on increasing the cosine similarity, is it possible that a trivial solution will appear, which predicts high similarity for any pair of features?\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a predictor that can forecast the image features at a specific position based on another feature. Additionally, it introduces an object-centric learning method that encourages the assignment of image features, which can effectively predict each other using the pre-trained predictor, to the same slot, and vice versa. Experiments conducted on multiple datasets demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is innovative and straightforward to implement.\\n\\n2. Experiments demonstrate that this method outperforms recent object-centric learning approaches across three datasets.\", \"weaknesses\": \"1. The proposed predictor does not align with the title \\\"Seeing the part and knowing the whole.\\\" In reality, the predictor only observes one part and recognizes another. Additionally, the proposed OCL method does not demonstrate the ability to complete the entire object based on the occluded part. Figure 1 in the paper is also misleading. I would appreciate it if the authors could clarify whether their method can actually complete whole objects from parts or not.\\n\\n2. The learnable segmentation M is not depicted in Figure 2. 
I suggest the authors update Figure 2 to include the learnable segmentation M in line 173 around the alpha mask.\\n\\n3. The presentation of the paper could be improved. The figures are not inserted in PDF format, and some expressions are informal (e.g., the term \\u2018clamp(a, b)\\u2019 in line 251).\\n\\n4. Compared methods, such as LSD and DINOSAUR, perform better on complex datasets like CLEVRTEX, MOVi-E, and COCO. However, the datasets selected in this paper are relatively simple, raising concerns about the scalability of this approach to more complex data. I suggest the authors evaluate their method on more complex datasets (CLEVRTEX, MOVi-E, and COCO), or discuss the limitations of applying the proposed method to more complex datasets.\\n\\n5. The rationale for using L1 loss instead of L2 loss as the reconstruction loss is unclear; I would appreciate it if the authors could provide their rationale for choosing L1 loss and include an ablation study comparing different loss functions (e.g., L1 vs L2) in their experiments.\", \"questions\": \"1. According to the attachment, the decoder used for the MOVi-C dataset is transformer-based. How does this type of decoder generate the alpha masks needed to compute the prior loss?\\n\\n2. Is it possible to apply this method to more complex datasets, as well as images with higher resolutions? Achieving good results on more challenging datasets could enhance the soundness of the proposed method.\\n\\n3. The visualization in Figure 3 raises some questions, particularly regarding the results of BOQSA on the PTR dataset, which seem to outperform the visualizations in the original paper. 
Were any techniques implemented to improve the performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces an interesting idea for object-centric learning (OCL) called Predictive Prior, inspired by human perception abilities. Traditional OCL models use an auto-encoding paradigm to create object representations by assigning image features to discrete object \\\"slots\\\" and reconstructing images from these slots. However, these models struggle with complex object appearances due to reliance on color or spatial regularities.\\n\\nThe Predictive Prior approach leverages the principle that features belonging to the same object can predict each other. It trains a prediction network to assess the mutual predictability between features across different spatial locations within an image. This prediction-based relationship is then used to guide object-slot assignments. \\n\\nExperiments on datasets such as MOVi-C, Super-CLEVR, and PTR show that the Predictive Prior-based model outperforms previous OCL methods in object discovery, compositional generation, and visual question answering (VQA).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The work shows strong results across various datasets and baselines.\", \"The idea is interesting and is well analyzed.\", \"Clean writing and figures\"], \"weaknesses\": [\"Missing ablations: there are multiple loss functions used; however, they haven't been ablated\", \"Certain SoTA baselines on DINO for unsupervised segmentation are missing, such as CutLER (https://arxiv.org/pdf/2301.11320)\"], \"questions\": [\"Can the authors compare or discuss results against methods such as CutLER that use DINO features and get good results?\", \"How large a role does pre-trained DINO play in the improvement across baselines? 
What if the authors trained from scratch using the new objective?\", \"Can the authors ablate reconstruction vs. their proposed objective? How does switching off one of them affect final accuracy?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Humans instinctively decompose scenes into objects, enabling strong visual understanding. Object-Centric Learning (OCL) seeks to encode scene information into object vectors called \\u2018slots.\\u2019 Traditional OCL models use an auto-encoding approach, reconstructing images from these slots, but often fail with complex objects, as reconstruction alone doesn\\u2019t ensure accurate object grouping. To improve this, the paper introduces a Predictive Prior inspired by human gestalt perception, where features of the same object can predict each other. This prior is implemented as an external loss, guiding the model to group predictable features into the same slot and separate those that aren't. The paper shows decent results on Super-CLEVR, MOVi-C, etc.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) Overall intuition of the paper makes sense, and representing part and whole of a scene is a pretty important problem as discussed in [1] and in the literature multiple times.\\n2) Results on Super-CLEVR, MOVi-C are decent and show the efficacy of the current method pretty well.\", \"weaknesses\": \"1) Results are missing on real-world datasets like COCO & OpenImages. The current results are on CLEVR and MOVi-C, which are not very representative of real-world results.\\n2) Comparison with diffusion-based approaches like SlotDiffuzr[1], SysBinder[2] is missing. Adding diffusion-based methods will be pretty critical to the paper.\\n3) Computational cost in adding the PREDICTIVE PRIOR? 
It would be good to see the computational cost added by the newer modules introduced in the paper.\\n4) Discussion on [3] should definitely be added in the paper.\", \"references\": \"1) SlotDiffuzr: SlotDiffusion: Object-Centric Generative Modeling with Diffusion Models\\n2) NEURAL SYSTEMATIC BINDER \\n3) How to represent part-whole hierarchies in a neural network.\", \"questions\": \"Overall the paper is good, but the results are missing on large-scale real-world datasets, which is definitely an issue in the current version.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}"
] }
9XprjIqkBI
Genshin: General Shield for Natural Language Processing with Large Language Models
[ "Xiao Peng", "Tao Liu", "Ying Wang" ]
Large language models (LLMs) like ChatGPT, Gemini, or LLaMA have been trending recently, demonstrating considerable advancement and generalizability power in countless domains. However, LLMs create an even bigger black box exacerbating opacity, with interpretability limited to a few approaches. The uncertainty and opacity embedded in LLMs' nature restrict their application in high-stakes domains like financial fraud, phishing, etc. Current approaches mainly rely on traditional textual classification with posterior interpretable algorithms, suffering from attackers who may create versatile adversarial samples to break the system's defense, forcing users to make trade-offs between efficiency and robustness. To address this issue, we propose a novel cascading framework called Genshin (General Shield for Natural Language Processing with Large Language Models), utilizing LLMs as defensive one-time plug-ins. Unlike most applications of LLMs that try to transform text into something new or structural, Genshin uses LLMs to recover text to its original state. Genshin aims to combine the generalizability of the LLM, the discrimination of the median model, and the interpretability of the simple model. Our experiments on the tasks of sentiment analysis and spam detection have shown fatal flaws of the current median models and exhilarating results on LLMs' recovery ability, demonstrating that Genshin is both effective and efficient. In our ablation study, we unearth several intriguing observations. Utilizing the LLM defender, a tool derived from the 4th paradigm, we have reproduced BERT's 15% optimal mask rate results in the 3rd paradigm of NLP. Additionally, when employing the LLM as a potential adversarial tool, attackers are capable of executing effective attacks that are nearly semantically lossless. We conduct detailed case analyses using the SHAP interpreter, which could yield insights for systemic enhancements. 
Lastly, we provide discussions on the architecture of Genshin, underscoring the necessity of each component and outlining the current limitations.
[ "large language model", "textual attack", "interpretable machine learning" ]
https://openreview.net/pdf?id=9XprjIqkBI
https://openreview.net/forum?id=9XprjIqkBI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xpY5WqnIHY", "wWAaykheBn", "jVd5B0hlOc", "Og8DTzEAOS", "EjKJZrvGKN", "4WFBPvJsK4" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1729995121084, 1732771126557, 1730676959019, 1730732930902, 1730257074283, 1730858555208 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3511/Reviewer_CGUr" ], [ "ICLR.cc/2025/Conference/Submission3511/Authors" ], [ "ICLR.cc/2025/Conference/Submission3511/Reviewer_c1su" ], [ "ICLR.cc/2025/Conference/Submission3511/Reviewer_ZZ3G" ], [ "ICLR.cc/2025/Conference/Submission3511/Reviewer_mgqc" ], [ "ICLR.cc/2025/Conference/Submission3511/Reviewer_7iq6" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a defense method using LLMs as a barrier to resist adversarial text attacks, leveraging large language models to recover adversarial samples and make AI systems more secure. Their approach combines the capabilities of medium-sized language models (LMs) and interpretable models (IMs). If needed, the IM can be utilized to explain the predictions of the LM.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"By utilizing large language models (LLMs) as symmetrical recovery tools, the Genshin framework effectively defends various adversarial text attacks while maintaining semantic consistency.\\n\\nThe Genshin framework combines the efficient prediction capability of medium-sized language models (LMs) with the interpretability of interpretable models (IMs), addressing the current limitations of LLMs in transparency. Using interpretable models to explain black-box defense systems is very innovative.\", \"weaknesses\": \"Weakness 1: As an LLM defensive work, there is no discussion of mainstream LLM robustness work after 2023, such as PromptBench[1], etc.\", \"weakness_2\": \"The core method is too simple. It just rewrites the adversarial sample using the LLM. 
It is essentially a prompt adjustment work and has limited innovation.\", \"weakness_3\": \"Moreover, I think interpretable models play a meaningless role in black-box defense, and there is no experiment to discuss whether they can improve defense performance.\", \"weakness_4\": \"Lack of experimental study: they did not discuss newer attack methods but only ran BERT-Attack (a very old method)\\n\\n\\n[1] Zhu K, Zhao Q, Chen H, et al. Promptbench: A unified library for evaluation of large language models[J]. Journal of Machine Learning Research, 2024, 25(254): 1-22.\", \"questions\": \"Q1 The interpretability methods do not seem to provide any additional benefit for defense. Is there any experiment proving that interpretability methods can better defend against adversarial attacks?\\n\\nQ2 There is no comparative experiment; why not compare with existing adversarial defense methods to highlight your own advantages?\\n\\nQ3 If the defense method is completely white-box, are there any attack methods that can bypass the defense? This also needs to be discussed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper proposes Genshin, a cascading framework designed to defend against adversarial textual attacks on language models. It integrates three components: LLMs for one-time text recovery, median language models for analysis, and interpretable models for transparency. 
Genshin is validated on sentiment analysis and spam detection, demonstrating substantial resilience against various levels of textual disturbance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"- Originality\\n\\nThe paper presents a unique use of LLMs as recovery agents, using them as one-time plug-ins in an adversarial context. This approach differs from standard uses of LLMs for transformation or classification.\\n\\n- Significance\\n\\nBy demonstrating a framework that can handle adversarial attacks, Genshin has potential applications in high-stakes domains like spam detection, security, and sentiment analysis, making the work relevant and impactful.\", \"weaknesses\": [\"The reliance on LLMs as recovery tools introduces significant computational cost, which may limit the scalability of Genshin for real-time or resource-constrained applications. LLMs (e.g. Llama, Vicuna, GPT) themselves should be robust enough to the token-level perturbations. I do not see the necessity of using LLMs as an intermediate agent for input recovery and then sending the input to an LM (e.g., BERT or RoBERTa).\", \"The experimental setting is limited in terms of tasks and models. This paper did not justify the necessity of using an LM for inference while LLMs are available, which makes the experimental setting a bit confusing, as only LMs are being evaluated.\"], \"questions\": [\"If an LLM is available in the framework, why is it not directly used for the downstream task? Are LLMs robust against the token-level perturbations used in this paper?\", \"Are there strategies to mitigate the computational demands of the LLM recovery stage, perhaps through a selection mechanism that applies LLMs selectively?\", \"How feasible is the implementation of Genshin in real-time environments? 
Would latency impact its performance in high-frequency applications like fraud detection?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors advocate for a defensive framework, Genshin, which leverages LLMs as one-time recovery tools to restore the attacked text against adversarial textual attacks. Genshin manages information in three tiers of processing: denoising with an LLM to get rid of adversarial noise, analysis with a mid-sized LM, and interpretation by an IM to explain the outputs. In applications like sentiment analysis and spam detection, Genshin restores over 80% of the data under heavy disturbance.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The figure appears clear and well-presented.\", \"weaknesses\": \"1. The paper\\u2019s writing needs substantial improvement. For example, when \\\"IM\\\" first appears, it lacks an explanation (though I understand it stands for \\\"interpretable model\\\"), and similar issues appear throughout the paper. Additionally, the authors need a clear statement of their objectives and a concise summary of the key contributions in both the abstract and introduction. Without this, it is challenging to understand the authors' intentions, as I only understood their goals upon reaching the experiment settings. I strongly suggest the authors revise this version for clarity.\\n\\n2. The novelty of the paper is limited. Essentially, the authors use an LLM to restore perturbed adversarial text to defend against potential attacks for a BERT-like model, relying on a strong assumption that the attack perturbations are at the character level, which the LLM can detect and recover. However, many other adversarial attacks do not rely on character-level perturbations. 
Additionally, this approach lacks any performance guarantee, whereas a certified defense approach is generally more favorable in current research.\", \"questions\": \"Have you considered other adversarial attacks beyond the character level?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Genshin, a framework designed to defend NLP systems against adversarial attacks by leveraging LLMs as one-time recovery tools to revert manipulated texts to their original state. The reverted text is then classified using mid-sized language models, and SHAP is employed for interpretability in case studies. Experiments conducted on three datasets indicate that the proposed approach effectively reverts manipulated texts.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The proposed method is straightforward, and the presentation is clear.\", \"weaknesses\": \"There are important experimental details missing that make the study challenging to reproduce. For instance, the defense prompt and detailed attacker settings are not provided.\", \"some_aspects_of_the_experimental_setup_also_raise_questions\": [\"It\\u2019s unclear why state-of-the-art attack methods were not employed, as the authors instead used three attack strategies involving \\u201crandom replacement.\\u201d\", \"Additionally, it appears the attacks were run against the vanilla LM to find adversarial examples, and the LLM was then used to revert such changes. In a real-world scenario, attackers would engage directly with a system that has built-in defenses. 
As such, the current setup leaves the actual effectiveness of the defense and its application in real-world systems somewhat unclear.\", \"The motivation for using GPT-3.5 as a defense tool, while relying on LMs for the main tasks, is also not fully addressed, particularly considering the high cost of LLM inference. Additionally, the study does not compare Genshin with alternative lightweight defenses, such as those proposed by [1] and [2], which would serve as baselines.\", \"[1] Wang et al. 2021, Natural language adversarial defense through synonym encoding\", \"[2] Jia et al. 2019, Certified Robustness to Adversarial Word Substitutions\"], \"questions\": [\"What are the AAcc and RAcc using state-of-the-art attackers?\", \"What is the RAcc if the attacker is aware of the defense and directly engages with it?\", \"How does the proposed method compare with baselines?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a General Shield for Natural Language Processing with Large Language Models (Genshin), utilizing LLMs as defensive one-time plug-ins. It includes three stages: 1) a denoising stage as a recovery tool to denoise the text, from risky information to recovered information, 2) an analyzing stage (LM) where the LM analyzer can then easily execute information analysis on the recovered information, and 3) an interpreting stage (IM) where the IM interpreter precisely explains the LM\\u2019s outputs. It tested three attacks: char-level disturbance, word-level disturbance, and similarity-based disturbance. On the sentiment analysis and spam detection tasks, it shows great potential over previous BERT-based or RoBERTa-based methods.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"originality: I did not find any originality. I did not find any meaningful contribution in this paper. 
The tools used are all existing, and the proposed method cannot prove its usefulness with the limited evaluation on simple spam detection datasets.\", \"quality\": \"The comparison is unfair (BERT as baselines vs. ChatGPT-3.5).\", \"clarity\": \"This paper is easy to understand.\", \"significance\": \"I did not find any significance in this work because the whole pipeline does not make sense to me at all.\", \"weaknesses\": \"There are so many weaknesses in this paper that I think the authors do not understand what makes a 'standard' paper, such as evaluation, comparison, and motivation. I suggest the authors read more papers published at ICLR to learn how to write a paper.\\n\\nThe name is \\\"General Shield for Natural Language Processing with Large Language Models\\\" but I did not find any evidence to support the claim of \\\"general\\\".\\n\\nThe contribution is not enough to support the claim of a General Shield for Natural Language Processing when you only test on two simple classification problems.\", \"questions\": \"1. What does this claim mean? \\\"Utilizing the LLM defender, a tool derived from the 4th paradigm, we have reproduced BERT\\u2019s 15% optimal mask rate results in the 3rd paradigm of NLP.\\\" Those are totally different targets, and the 15% optimal rate is meaningless.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
9XabBgqFgy
Unveiling the Backbone-Optimizer Coupling Bias in Visual Representation Learning
[ "Siyuan Li", "Juanxi Tian", "Zedong Wang", "Luyuan Zhang", "Zicheng Liu", "Weiyang Jin", "Yang Liu", "Baigui Sun", "Stan Z. Li" ]
This paper delves into the interplay between vision backbones and optimizers, revealing an inter-dependent phenomenon termed backbone-optimizer coupling bias (BOCB). Notably, canonical CNNs, such as VGG and ResNet, exhibit a marked co-dependency with SGD, while recent architectures, including ViTs and ConvNeXt, share a strong coupling with adaptive learning rate optimizers. We further show that strong BOCB may result in extra tuning efforts and poor generalization ability for pre-trained neural networks, substantially limiting their real-world applications. Through in-depth analysis and apples-to-apples comparisons, however, we surprisingly observed that certain types of network architecture could significantly mitigate BOCB, which might serve as practical takeaways for backbone design. We hope this work can inspire the community to rethink the long-held assumptions on backbones and optimizers, consider their interplay in future studies, and contribute to more robust vision systems. The source code and models are publicly available.
[ "Network Design", "Optimization", "Bias", "Backbone", "Transformer", "Benchmark" ]
Reject
https://openreview.net/pdf?id=9XabBgqFgy
https://openreview.net/forum?id=9XabBgqFgy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ypKS4gC9zc", "xi2cIRnfaJ", "uGBVlRs1bR", "sapoA1wv1g", "sYYFf4eQHR", "nUIefDH94Q", "nLypuCPywP", "lq9uNOrOAX", "kXgnKF3Ujl", "jyVGUoDGt6", "jJLvn9icYJ", "j926BUbLJO", "hIUHbmtpv8", "f2nN27Z8UH", "cqNey1jIWd", "a3EvjIL8oz", "Y9JdMhrkeW", "XYq03KrhTZ", "WMk4LnfXkK", "UP4SaLzXpI", "PlnLftNA3v", "IVZKrmqZBY", "F7HfZ1Hry1", "DqdYGzJbJd", "DGJgtKRHHJ", "ASEyXqMtdt", "A8U02CGM5G", "A4hxQSRwSv", "9LqZD9Evjj", "7pYWQCVZN3", "77Mzl0ktwh", "2Ta1xO7IaT", "0jKHiGuB5r" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523493619, 1732636609254, 1732738433038, 1732522707393, 1732480869196, 1732875385390, 1732483156440, 1729717541615, 1732482520027, 1733210209158, 1731177655725, 1734789581786, 1732522719890, 1732522694372, 1732576545247, 1732522223673, 1732493433440, 1732493586194, 1732644417799, 1732811575307, 1732522100874, 1733204798889, 1730615719792, 1732522170506, 1732483516359, 1733208528538, 1732493302677, 1732522304981, 1732522363361, 1732522335915, 1732494036366, 1732739889880, 1732740025598 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2246/Reviewer_oReR" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Reviewer_FcXi" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Reviewer_oReR" ], [ "ICLR.cc/2025/Conference/Submission2246/Area_Chair_MSNn" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Reviewer_FcXi" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Reviewer_JghS" ], [ "ICLR.cc/2025/Conference/Submission2246/Reviewer_JghS" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ], [ "ICLR.cc/2025/Conference/Submission2246/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thank you for the rebuttal.\\n\\nA few more comments/questions:\", \"w1_w2\": \"The related works demonstrate that it's well known that the choice of optimizer details is crucial for (vision) transformers and that there is an interplay between architectural details and the optimizer choice. 
The authors write in l307 that \"well-designed (vision) backbones should exhibit both superior performance and great performance stability across optimizers\" but I am not convinced by this.\n\nFrom my perspective, modern architectures exploit the power of strong optimizers to get good empirical results. Why should we require them to work well with older/less advanced optimizers?\n\nW2.1: The authors write:\n\"While training loss is undoubtedly a crucial indicator, it can be highly sensitive to specific training setups, including regularization techniques and data augmentation strategies. By emphasizing top-1 accuracy, we aim to provide a more objective and comparable measure across diverse experimental configurations.\"\n\nTo me, this is precisely the reason that top-1 accuracy is an oversimplification here. When you change the optimizer, you may need to add/adjust regularization techniques to combat over/underfitting.", "w3": "Lack of deep analysis: The analysis provided is still just observational; it does not go into the actual mathematical details that explain what is observed (e.g., are there gradient spikes, or is the initialization of the network such that adaptive learning rates are crucial, etc.).", "w5": "The authors claim that \"MetaFormer ensures that the optimization landscape is neither too simple (leading to underfitting) nor too complex (leading to overfitting and optimization difficulties). This balance makes the model more amenable to a wide range of optimizers\" yet they present no evidence in this regard as the analysis is only through top-1 accuracy.", "w6": "The authors write \"The design of FFN blocks in models like ConvNeXt and MogaNet introduces additional layers of complexity into the optimization process. These blocks, often implemented as Point-wise Convolutions or inverted bottleneck layers, are susceptible to overfitting without proper regularization. 
The intricate interactions within these blocks create a challenging optimization landscape that fixed learning rate optimizers like SGD find difficult to navigate effectively.\"\n\nSo the layers are susceptible to overfitting and yet they are hard to optimize? Where are the experiments backing these conclusions (train vs. validation losses demonstrating overfitting, etc.)?\"}", "{\"title\": \"Response to Reviewer oReR\u2019s Feedbacks (PART 1/3)\", \"comment\": \"Thanks for your detailed feedback, which is constructive for improving our manuscript. We have made a new revision according to your suggestions, and we would like to clarify some concerns or misunderstandings as follows:\n\n---\n### **(W1-W2)**\n\n> **Modern architectures often leverage advanced optimizers for superior performance, so requiring them to work well with older optimizers may not be necessary.**\n\n**Reply:** We acknowledge the reviewer's perspective and emphasize that our goal is not to mandate that modern architectures should work equally well with all optimizers, including older ones. Instead, our assertion that \"well-designed (vision) backbones should exhibit both superior performance and great performance stability across optimizers\" is aimed at highlighting the importance of robustness and flexibility in backbone design. This robustness ensures that the backbone can be effectively optimized under various conditions, including different optimizers, without significant performance degradation.\n\nModern architectures indeed leverage powerful optimizers to achieve state-of-the-art results. However, our findings reveal a phenomenon we term\u00a0*Backbone-Optimizer Coupling Bias*\u00a0(BOCB), where certain backbones exhibit strong dependencies on specific optimizers (as illustrated in Figure 2). 
This coupling can limit the flexibility and practical applicability of these backbones, especially in real-world scenarios where the choice of optimizer may be constrained by computational resources or other practical considerations.\\n\\nBy advocating for backbones that exhibit stability across optimizers, we aim to promote designs that are more adaptable and less prone to BOCB. This adaptability is crucial for ensuring that vision models can be effectively deployed in diverse environments and tasks where the optimal optimizer may not always be available. Our empirical analysis provides evidence that backbones with weaker BOCB offer greater flexibility and are more user-friendly, even if they may not achieve the absolute best performance with a specific optimizer.\\n\\n> **(W2.1) Top-1 accuracy oversimplifies performance comparison as changing optimizers often require adjusting regularization techniques.**\\n\\n**Reply:** We appreciate the reviewer's insightful comments regarding the use of top-1 accuracy as the primary metric for evaluating the performance of vision backbones with different optimizers. While we acknowledge that top-1 accuracy is a simplification and can be influenced by various training setups, including regularization techniques and data augmentation strategies, we have taken steps to provide a more comprehensive analysis by incorporating additional metrics such as Entropy, $L_{2}$-norm, and the PL exponent alpha (\\u03b1). These metrics offer deeper insights into the intrinsic properties of different network architectures and their interactions with various optimizers, complementing the top-1 accuracy results and addressing the reviewer's concerns. By analyzing the entropy of learned parameters, the scale of the $L_{2}$-norm, and the generalization tendencies quantified by the PL exponent alpha, we provide a more detailed and nuanced understanding of the optimizer-backbone relationship. 
These supplementary analyses serve as a robust foundation for our case studies and recommendations, ensuring that our findings are both comprehensive and insightful. Specifically, the inclusion of these metrics allows us to observe the layer-wise and parameter-wise characteristics of the learned models, offering a more granular and informative perspective on the optimization dynamics and the impact of different optimizers on the parameter space. Therefore, we believe that the additional metrics sufficiently address the reviewer's concerns and provide a more holistic view of the optimizer-backbone interplay without the need to add training loss as a metric.\"}", "{\"comment\": \"Dear Reviewer FcXi,\\n\\nAs the Discussion phase draws to a close and time is running short, we respectfully request that you consider elevating your score if you find our rebuttal and revised submission adequately address your concerns. We also welcome continued dialogue to uphold the standard and integrity of the ICLR review process. Looking forward to your feedback soon!\\n\\nSincerely,\\n\\nAuthors\", \"title\": \"Encouraging Feedbacks\"}", "{\"title\": \"Rebuttal to Reviewer FcXi (PART 1/4)\", \"comment\": \"## Response to Weaknesses\\n\\nWe express our gratitude for your valuable review and constructive feedback. We have adjusted our revision and invite you to go through the general response first, as it may have addressed some of your concerns. 
Then, we respond to your concerns point-by-point as follows.\\n\\n---\\n\\n### **(W1)\\u00a0Soundness and overstated claims in the Introduction.**\\n\\n$\\\\quad$ **Reply:**\\u00a0The introduction is designed to situate the study within the broader context of optimization and architecture design, emphasizing the significance of the research question (as illustrated in Figure 2). While some statements may appear ambitious, they are intended to highlight the potential impact of the findings (as summarized in the take-home messages in the latest revision). The contributions are indeed focused, aiming to provide incremental yet meaningful insights into the interplay between optimizers and backbone architectures. The limitations are acknowledged, and the paper strives to offer a nuanced understanding rather than definitive conclusions. The introduction sets the stage for a comprehensive exploration of the topic, which is detailed in the subsequent sections.\\n\\n- **(W1.1)\\u00a0The results suggest that the backbone does not necessarily depend on a specific optimizer but rather certain types.**\\n\\n$\\\\quad$ **Reply:** We agree that Category (b) optimizers (e.g., Adam variants) are more likely to achieve better performance in general, while Category (a) covers SGD-like optimizers and Category (d) covers AdaGrad-like ones. However, our main concern is whether a given backbone relies on certain optimizers to achieve its full performance, e.g., ViT and ConvNeXt variants can only achieve state-of-the-art performances using the Category (b) optimizers while resulting in worse results than ResNet-50 when using the Category (a) and Category (d) optimizers. 
Therefore, we mainly analyze and explain the BOCB phenomenon from the view of network designs in Section 4.\\n\\n$\\\\quad$ Meanwhile, as for the superior properties of Adam-like optimizers, i.e., Category (b), we have discussed them in Section 2.2 and verified them in Table 1 and Section 3.2, which indicate that these optimizers exhibit consistent effectiveness across various backbones. Similarly, the AdaGrad-like optimizers yield bad results because they do not apply the estimated first or second moments to ensure a robust estimation of gradients and other statistics. However, our manuscript does not claim universal superiority but highlights their consistent performance, which is a valuable insight for practitioners. It only summarizes the superior optimizers (as shown in Table A4 of the revision, which provides a comprehensive view of optimizer behavior). Our paper's findings are supported by empirical data, which demonstrates the general effectiveness of Adam-like optimizers across different scenarios.\\n\\n- **(W1.2) No significant differences in the variances in Figures 3 and 4.**\\n\\n$\\\\quad$ **Reply:**\\u00a0The variances in Figures 3 and 4 reveal consistent trends and offer a nuanced understanding of the BOCB properties and hyper-parameter sensitivity of backbones under varying network designs. Our study does not assert absolute superiority but rather presents relative performance, which is a valid observation for empirical studies in practical scenarios. As for Figure 3, we can easily find three groups, i.e., the stable group (VGG-13, ResNet-50/101, EfficientNet-B0, Swin-T, ConvFormer-S12, AttnFormer-S12), the unstable group with modest average performances (AlexNet, DenseNet-121, MobileNet.V2, RepVGG-A1, DeiT-S, MLP-Mixer), and the unstable group with high upper-bound performances (ConvNeXt, ConvNeXt.V2, MogaNet-S, UniRepLKNet-T, TransNeXt-T, IdentityFormer-S12, PoolFormerV2-S12, and CAFormer-S12). 
Similarly, in Figure 4, we can classify the sensitivity of optimizer hyper-parameters of various backbones into several groups based on the variances and means (red line). These observations are consistent and provide a foundation for further research into the behavior of different backbones (as shown by the case studies in Section 4). Therefore, we believe that the analysis of the relative differences between violin plots and box plots is supportive of our findings.\"}", "{\"title\": \"Respectful Request for Reconsideration - Updates on Manuscript Improvements\", \"comment\": \"Dear Reviewer JghS,\\n\\nWe deeply appreciate your thoughtful feedback on our work exploring Backbone-Optimizer Coupling Bias (BOCB). Through comprehensive revisions and clarifications, we have diligently addressed each of your concerns. Most encouragingly, **Reviewer FcXi has recognized the value of our contribution**, noting that our revised manuscript is **\\\"significantly improved\\\"** and specifically praised the **updated Figures and the \\\"takeaway summarization parts\\\"**. We believe this external recognition underscores the meaningful contribution this work makes to the field.\\n\\nAs you noted, this research tackles \\\"a less discussed aspect about the interplay of different optimizers and architectures.\\\" During the rebuttal phase, we have further enhanced both the theoretical grounding and practical guidance of our findings through clearer takeaway messages and optimization recommendations, as acknowledged in subsequent reviews. \\n\\nGiven the **improved clarity** of our contributions and the **positive comments** from your fellow reviewers in recent discussions, we sincerely hope that you could go through our responses and the revision and consider adjusting your rating accordingly if you are satisfied. We hope this work can **benefit the broader community** by providing valuable insights into the critical relationship between vision backbone architectures and optimizers. 
Our sincere hope is that this work can reach and benefit more researchers and practitioners in the community. Therefore, **your rating is particularly important** for us. Should you find our responses and revisions satisfactory, we would greatly appreciate your consideration in adjusting your rating accordingly.\\n\\nThank you again for your role in helping us strengthen this research. We look forward to your response.\\n\\nBest regards,\\n\\nAuthors of Submission 2246\"}", "{\"title\": \"Rebuttal to Reviewer FcXi (PART 3/4)\", \"comment\": \"### **(W3) Applicability to larger datasets for more reliable analysis.**\\n\\n**Reply:**\\u00a0Thanks for your detailed and thoughtful suggestions. As you mentioned, our main benchmark results are conducted on smaller datasets (like CIFAR-100 and COCO) with a simple classification task, which indeed limits the reliability of the findings. However, we have three reasons to support these experimental settings. Firstly, we would like to clarify that the landscape of BOCB and our findings are consistent across both CIFAR-100 and ImageNet, as discussed in Section 4.2. Since the coupling bias with optimizers for a specific backbone is an intrinsic property of the network design, it will not be significantly affected by task scenarios. Secondly, we hypothesize that the BOCB phenomenon could be more severe when the dataset is smaller, while commonly used visual datasets are on a small scale (e.g., 10k to 100k training samples). So, we conduct numerous benchmarks with the standard CIFAR-100 dataset and practical training settings (e.g., the DeiT and RSB A2 setups for better and robust results) to provide a comprehensive understanding of BOCB. Then, we verify the generalization of these findings on large-scale datasets. Thirdly, the one-hot image classification with vision backbones is the most fundamental and robust deep learning task. 
It can be the ideal research object to study the interaction between backbones and optimizers, as it introduces fewer confounding variables or hyper-parameters than a complex deep learning algorithm. Therefore, we believe the current version of experimental setups and benchmarks might be the ideal solution for a research laboratory to study a foundational problem like BOCB.\\n\\nMeanwhile, we acknowledge this limitation and provide additional large-scale experiments on ImageNet-1k to validate the BOCB phenomenon and support our findings. We report the results of optimizers that can be used as BOCB indicators and the top three optimal optimizers (Adan, LAMB, AdamW). As shown in the following table, the results are consistent with those observed on CIFAR-100, where ResNet-50 (R-50) and ConvFormer-S12 (CF-S12) have weak BOCB while DeiT-S and ConvNeXt-T (CNX-T) show poor BOCB properties (i.e., large values of std and range). Comparing ConvNeXt-T and ConvNeXt.V2-T, we also verify that reducing the optimization bottleneck in the FFN (using GRN) could alleviate BOCB to some extent.\\n\\n| Optimizer | R-50 | DeiT-S | CNX-T | CNXV2-T | CF-S12 |\\n|---|:---:|:---:|:---:|:---:|:---:|\\n| AdamW | 79.9 | 80.4 | 82.1 | 82.3 | 81.6 |\\n| LAMB | 79.8 | 80.2 | 82.2 | 82.3 | 81.5 |\\n| Adan | 79.9 | 80.8 | 82.6 | 82.8 | 81.8 |\\n| SGD | 78.8 | 75.4 | 71.3 | 76.8 | 79.7 |\\n| AdaBound | 75.4 | 73.0 | 72.4 | 77.1 | 79.6 |\\n| LARS | 79.7 | 73.2 | 75.9 | 79.6 | 79.9 |\\n| RMSProp | 78.0 | 78.0 | 79.6 | 80.2 | 80.4 |\\n| AdaDelta | 74.9 | 55.0 | 73.5 | 77.9 | 78.5 |\\n| Std/Range | 1.9/5.0 | 7.9/25.8 | 4.4/11.3 | 2.3/6.0 | 1.1/3.3 |\\n\\n### **(W4) Evidence for \\u201cstrong coupling can potentially lead to better performance and generalization\\u201d in L322.**\\n\\n**Reply:**\\u00a0Thanks for pointing out this critical assertion. 
It might have been logically vague, so we revise it as \u201cWhile classical CNNs with weaker coupling offer more user-friendliness, modern DNNs with stronger coupling potentially lead to better performance and generalization.\u201d Since modern DNNs with the MetaFormer macro design can usually yield strong BOCB, we explain the potential superiority of these MetaFormer models from two aspects. As for the better performance, the parameter efficiency and performance upper bounds of MetaFormer models (e.g., ViT and ConvNeXt) are superior to those of classical CNNs (e.g., ResNet variants) because the MetaFormer\u2019s block-wise macro design enables explicit modeling of the token axis and channel axis. As for better generalization, modern DNNs like Transformer and ConvNeXt can benefit from various self-supervised pre-training (e.g., contrastive learning like BYOL and DINOv2, and masked image modeling like MAE and A2MIM [1]) and multi-modality pre-training (e.g., multi-modality alignment like CLIP and visual question answering like LLaVA [2]), which makes them more flexible and generalizable to pre-training [1] and new task scenarios [3] than classical CNNs like ResNet variants. The modern DNNs are also more likely to learn robust and well-generalized features than classical CNNs because of the macro designs (e.g., intrinsic properties of ViTs [4]) and token mixing operators (e.g., self-attention and depth-wise convolution learn more robust features than classical convolutions [5]).\n\n### Reference\n\n[1] Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN. In ICML, 2023.\n\n[2] Visual Instruction Tuning. In NeurIPS, 2023.\n\n[3] ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy. In ICML, 2024.\n\n[4] Intriguing Properties of Vision Transformers. In NeurIPS, 2021.\n\n[5] Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs. 
In CVPR, 2022.\"}", "{\"summary\": \"This paper introduces the concept of Backbone-Optimizer Coupling Bias (BOCB) -- i.e., strong dependencies between neural network backbones and specific optimizers -- which affects performance and generalization. To demonstrate this, a comprehensive benchmark is provided, evaluating various vision backbones and popular optimizers across different tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"I believe the problem statement of the paper is promising. Even though it is known that some modern architectures need specific optimizers to achieve high performance (e.g., ViTs use AdamW by default these days), there are only a few comprehensive studies on the relationship between architectures and optimizers.\", \"This paper provides results using a wide range of optimizers and architectures, and it reports the average performance over three runs.\", \"This paper not only reports the phenomenon but also tries to reveal the underlying causes.\"], \"weaknesses\": [\"One of the major weaknesses is soundness. I am not fully convinced by some statements in the introduction, as they seem somewhat overstated. As a result, the contributions are somewhat limited.\", \"The results suggest that the backbone is not necessarily dependent on a specific optimizer, but rather that certain types of optimizers tend to be generally effective. For example, Table 1, which is one of the main results, seems to simply show that Adam-like optimizers are generally effective. AdaGrad-like optimizers consistently achieve poor results.\", \"Likewise, I don\\u2019t believe we could derive strong and consistent results from Figures 3 and 4. 
In these figures, I don\\u2019t find significant differences in the variances, except in a few cases.\", \"In Figure 5, even though AdamW and LAMB show favorable robustness, this may simply indicate that they are inherently robust optimizers, and we cannot strongly conclude that Adam-like optimizers are robust since other optimizers do not achieve significantly good robustness. Also, I am not convinced that Adagrad-like optimizers have heavy BOCB (L371), as RMSProp achieves good BOCB even though it yields poor performance.\", \"Clarifying the optimizer and backbone design recommendation rules might strengthen the paper. As I mentioned before, the takeaway from the results would be choosing a good optimizer (e.g., AdamW) and a modern backbone, which is a straightforward conclusion.\", \"The paper does not provide a comprehensive analysis of the architectural components introduced in Section 2 and Figure 1. For example, I would expect a thorough analysis of the influence of architectural choices (e.g., block designs and isotropic vs. hierarchical structures). The discussions in Section 4.1 are useful, but more in-depth analysis would improve the paper.\", \"I understand conducting experiments on ImageNet might not be easy, but I am not fully convinced that the conclusions would hold on larger datasets. This is because the performance of ViT families highly depends on the training datasets. It is acceptable if we analyze just the behaviors and properties of ViTs on smaller datasets like CIFAR, but the main results depend on the classification accuracy and variances. For example, the problem mentioned in L266 might be resolved in large data regimes.\", \"In L322, the paper claims that \\u201cwhile weaker coupling offers more user-friendliness, stronger coupling can potentially lead to better performance and generalization.\\u201d However, I couldn\\u2019t find strong evidence. 
I am not convinced that strong coupling is a direct reason for better performance and generalization.\", \"Similarly, in L354, \\u201cstage-wise hierarchical and attention mechanisms can more effectively navigate complex optimization landscapes, unlocking superior performance and generalization capabilities.\\u201d However, I feel this claim is somewhat contrary to the observation that modern architectures, including attention mechanisms, have BOCB, as it implies that they require specific optimizers to navigate optimization landscapes. Some research, e.g., Park, Namuk, and Songkuk Kim. \\\"How do vision transformers work?\\\"\\u00a0arXiv preprint arXiv:2202.06709\\u00a0(2022), claims that ViTs have many non-convex points that lead to poor optimization properties.\", \"In summary, I believe the questions this paper raises are meaningful. However, some statements lack strong evidence, and the novelty is limited as the takeaways are straightforward. Therefore, I lean toward rejecting this manuscript.\", \"Minor: I couldn\\u2019t find Table 3 for a while. Improving the table layout would enhance readability. Also, a more comprehensive analysis of training dynamics would strengthen the paper. 
It seems that the transfer learning experiments are among the key results, but they are in the appendix.\"], \"questions\": \"Please refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer FcXi (PART 2/4)\", \"comment\": \"### **(W1) Soundness and overstated claims in the Introduction.**\\n\\n* **(W1.3) Robustness of AdamW and LAMB in Figure 5 could not indicate the AdaGrad-like optimizers have heavy BOCB.**\\n\\n$\\\\quad$ **Reply:**\\u00a0We appreciate the insightful comments and would like to clarify our findings regarding the robustness of optimizers and the BOCB phenomenon.\\n\\n$\\\\quad$ **Regarding Figure 5 and Optimizer Robustness:** We agree that AdamW and LAMB exhibit favorable robustness in Figure 5, which could be attributed to their inherent design. However, our analysis aims to highlight that these optimizers also demonstrate a higher degree of robustness across various backbones, which is crucial for practical deployment. While inherent robustness is significant, our findings suggest that the interaction between the optimizer and the backbone architecture also plays a crucial role. For instance, AdamW and LAMB maintain robust performance across diverse backbones, indicating they are less prone to BOCB, a valuable characteristic for diverse deployment environments.\\n\\n$\\\\quad$ **Regarding AdaGrad-like Optimizers and BOCB:** While RMSProp does show some robustness in BOCB, its overall performance is poor compared to other optimizers. Our assertion of the Category (b) optimizers (i.e., AdaGrad-like optimizers) having heavy BOCB is based on their overall performance and robustness across multiple backbones and tasks. RMSProp's poor performance in accuracy underscores the challenges associated with AdaGrad-like optimizers in complex optimization landscapes. 
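As a concrete reference for how these indicators are computed: the std and range used as BOCB proxies throughout this thread reduce to a few lines of code. The sketch below is illustrative only (written for this discussion rather than taken from our released code); using the population standard deviation, it reproduces the 1.9/5.0 entry for ResNet-50 in the ImageNet-1k table posted earlier in this thread.

```python
import numpy as np

# Top-1 accuracies of ResNet-50 on ImageNet-1k under the eight reported
# optimizers (AdamW, LAMB, Adan, SGD, AdaBound, LARS, RMSProp, AdaDelta),
# taken from the table earlier in this thread.
r50_top1 = np.array([79.9, 79.8, 79.9, 78.8, 75.4, 79.7, 78.0, 74.9])

def bocb_indicators(acc):
    """Std and range of accuracy across optimizers: larger values
    suggest stronger backbone-optimizer coupling bias (BOCB)."""
    acc = np.asarray(acc, dtype=float)
    # np.ndarray.std defaults to the population standard deviation
    return float(acc.std()), float(acc.max() - acc.min())

std, rng = bocb_indicators(r50_top1)
print(f"Std/Range = {std:.1f}/{rng:.1f}")  # -> Std/Range = 1.9/5.0
```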
Our broader conclusion is based on comprehensive metrics, which we believe is a fair assessment given the complexity of modern vision tasks.\\n\\n* **(W1.4) Clarifying the optimizer and backbone design recommendation rules might strengthen the paper.**\\n\\n$\\\\quad$ **Reply:**\\u00a0Thanks for your suggestion. In the latest revision, we have provided take-home messages in Section 4.1 and recommendations of optimal optimizers with a ranking list in Section 4.2 and Appendix D.5. With the BOCB benchmark and empirical evidence, our manuscript provides a straightforward takeaway for how to choose modern network macro designs (e.g., cases 1, 2, and 3 with takeaways) and optimizers (e.g., case 4 and the recommendation of useful optimizers). This paper also discusses the nuances of optimizer-backbone interactions, adding depth to the recommendation, which makes the takeaways practical guidelines that can help practitioners design efficient and unbiased backbones in new scenarios. Therefore, we believe that our studies provide a comprehensive understanding of the interactions between optimizers and backbones, which is a valuable contribution to the ICLR community.\\n\\n### **(W2)\\u00a0In-depth and comprehensive analysis of architectural components would improve the paper.**\\n\\n**Reply:**\\u00a0Thanks for the constructive suggestion. The paper provides a preliminary analysis of architectural components in Section 2 with Figure 1 and Table A1 in the latest revision. While a more in-depth analysis would be beneficial, the current discussion establishes a foundation for understanding the influence of architectural choices. Meanwhile, since this manuscript focuses on the interaction between backbones and optimizers, we believe the current analysis of backbones in the main text is adequate for the scope of this study, considering the 10-page limitation of the conference.
The paper's findings are supported by empirical benchmarks, which reflect the landscape of the influence of architectural choices with optimizers. Currently, we intend to explain the reasons for BOCB from the views of macro designs to provide a solid starting point, which are more general components of networks.\\nAlso, we acknowledged this point as a limitation and would extend more in our future work. In the Appendix, we have also provided technical details of network architectures in Appendix A and visualized the layer-wise parameter landscape of various backbones with several metrics in Appendix D.\"}", "{\"title\": \"Official Comments for Serious Concerns Regarding the Irresponsibility of Reviewer JghS: A Call for Fair and Responsible Evaluation\", \"comment\": \"Dear (Senior) Area Chairs,\\n\\nWe are writing to express our serious concern regarding the review process of our submission, specifically regarding Reviewer JghS's non-engagement throughout the entire rebuttal stage.\\n\\nWe feel compelled to bring to your attention that despite our comprehensive responses to each initial concern and the substantial improvements acknowledged by other reviewers, Reviewer JghS maintained **complete silence throughout the entire author-reviewer discussion period**, only to provide **a brief, sweeping comment at the very end of the rebuttal phase**.\\n\\nThe **ICLR 2025 Reviewer Guidelines (https://iclr.cc/Conferences/2025/ReviewerGuide)** clearly outline several key responsibilities that we believe have not been met in this case:\\n\\n1. **Active engagement** during the discussion phase\\n2. Maintaining **openness to changing initial recommendations** based on author responses\\n3. Providing **constructive, thorough, and timely** feedback\\n\\nIn particular, Reviewer JghS's **final-hour comment** makes broad claims about analysis depth without providing any specific examples or engaging with our detailed responses and revisions. 
This type of feedback, delivered at the last possible moment without any prior engagement, **undermines the responsible attitude and collaborative nature** of peer review that ICLR strives to maintain. \\n\\nWe respectfully request that ACs consider these **concerning circumstances** when evaluating our manuscript. The **stark contrast** between Reviewer JghS's conclusions and that of other engaged reviewers (such as Reviewer FcXi) raises **serious questions** about the fairness of this particular review. We believe our submission deserves evaluation based on **thorough engagement** with our responses and revisions, rather than isolated, last-minute comments that **completely ignore** the discussion phase.\\n\\nWe remain confident in the significant contributions of our paper and its value to the ICLR community. Once again, we appreciate your attention to this matter and **trust in your professionalism and commitment** to maintaining the high quality of ICLR 2025 review process.\\n\\nRespectfully,\\n\\nAuthors of Submission #2246\"}", "{\"summary\": \"The paper studies the interaction between popular optimizers and vision backbone architectures. 
The experiments are conducted over the product of 16 architectures and 20 optimizers on CIFAR-100, ImageNet-1K and COCO.\\n\\nThe authors find that there are notable patterns characterising which combinations of optimizers + architectures work well together, where some architectures are more sensitive to the choice of optimizer -- which they refer to as \\u201cBackbone-Optimizer Coupling Bias\\u201d (BOCB).\\nFrom this perspective, the authors thoroughly discuss the various popular vision architectures, based on the design of their components as well as their overall architecture.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The motivation of the paper is clear; it's important to understand the relationship between vision backbones and optimizers.\", \"The authors have conducted extensive experiments across a large range of optimizers and vision backbones.\", \"The experiments reveal many interesting observations about how different architectures perform under different optimizers.\", \"The authors are releasing the benchmarking results and code, which could allow for further analysis by the community.\"], \"weaknesses\": [\"There is no discussion of or reference to related works studying optimizers for vision backbones and/or transformers. A brief search finds a couple of relevant papers:\", \"Scaling Vision Transformers, https://arxiv.org/pdf/2106.04560 which discusses e.g. how to adapt Adafactor for ViTs\", \"How to Fine-Tune Vision Models with SGD, https://arxiv.org/pdf/2211.09359 which studies SGD vs AdamW for finetuning.\", \"Since most vision backbones are transformer-based, other transformer optimizer literature is also relevant -- such as https://arxiv.org/pdf/2002.04745\", \"In L 197 the authors write \\\"Generally speaking, it is assumed that backbones and optimizers should be unbiased and generalized components that can be combined freely without significant interdependence\\\".
But I don't believe anyone thinks this is the case -- it's well known that optimizer details very often need to be adjusted for new architectures. Furthermore, there are other factors such as model size and overfitting that come into play:\", \"i) a new improved optimizer applied to an overparameterized architecture may cause it to overfit and hence hurt performance. Why only look at top-1 accuracy but not also training loss?\", \"ii) An architecture may be designed taking an improved optimizer for granted, so why should we expect it to work well with plain old SGD?\", \"iii) A lot of gains can be had from tuning hparams. The authors use the NNI toolbox to pick hparams -- but why not also look at the hparams the authors themselves have proposed for each architecture?\", \"All of the analysis is of the form \\\"this group of optimizers is working well/poorly for this method or group of methods\\\" but does not really go deeper to actually explain why they perform so differently. E.g. in L260 they just tell us that SGD + DeiT-S performs poorly, but why? Is it consistent with other findings in the literature? etc\", \"Some claims made in the paper do not seem to be clearly justified. E.g.\", \"in L376 they write \\\"The trajectory of vision backbone macro design has significantly sculpted the optimization landscape, progressing through distinct phases that reflect the intricate relationship between network architectural complexity and training challenges (as illustrated in Figure 1)\\\".
I am not sure what exactly the authors are trying to claim here but surely Figure 1 does not illustrate it.\", \"in L417 they write: \\\"This innovative macro design [of MetaFormer] refines the optimization landscape by harmonizing with\", \"optimizers, leading to reduced BOCB and enhanced performance.\\\" How does it \\\"refine the optimization landscape by harmonizing with optimizer\\\"?\", \"in L445 they write \\\"AttenFormer effectively mitigates BOCB due to its MetaFormer architecture, which incorporates balanced structural designs and residual scaling across stages, enhancing optimization and generalization.\\\" How is the \\\"balanced structural design and residual scaling\\\" \\\"enhancing optimization and generalization\\\" here?\", \"In L456 they write \\\"BOCB in CNNs is often linked to the design of FFN blocks, which are pivotal in models like ConvNeXt\\\" without pointing to any section/table justifying the claim.\"], \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper investigates the interaction between vision backbone architectures and various optimizers, introducing the concept of Backbone-Optimizer Coupling Bias to explain how certain combinations of optimizers and architectures perform better than others. Through a comprehensive benchmark across popular vision backbones and various optimizers the authors aim to uncover patterns that characterize effective optimizer-architecture pairings. 
They conduct experiments on well-known datasets and provide a detailed analysis of how different architectures exhibit sensitivity to specific optimizers, revealing important observations about the optimization landscape and performance trade-offs.\\n\\nThe reviewers acknowledge the authors' efforts to improve their manuscript, including adding updated analysis and visualizations, but significant concerns remain: (1) lack of depth in the analyses and inability to substantiate the BOCB claim with clearer evidence, (2) lack of significant insights into optimization-architecture interplay, (3) overstating contributions, lack of theoretical depth, and limited experimental support for several claims. While the study contributes an extensive empirical analysis across a range of datasets, the findings are seen as somewhat expected or insufficiently justified.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers acknowledge the authors' efforts to improve their manuscript, including adding updated analysis and visualizations, but significant concerns remain.\"}", "{\"comment\": \"Dear oReR,\\n\\nAs the Discussion phase draws to a close and time is running short, we respectfully request that you consider elevating your score if you find our rebuttal and revised submission adequately address your concerns. We also welcome continued dialogue to uphold the standard and integrity of the ICLR review process. Thanks again for your efforts, and looking forward to the discussion!\\n\\nBest regards,\\n\\nAuthors\", \"title\": \"Encouraging Discussion\"}", "{\"comment\": \"Dear Reviewer JghS,\\n\\nAs the Discussion phase draws to a close and time is running short, we respectfully request that you consider elevating your score if you find our rebuttal and revised submission adequately address your concerns. We also welcome continued dialogue to uphold the standard and integrity of the ICLR review process. 
Thanks again, and looking forward to your feedback!\\n\\nBest regards,\\n\\nAuthors\", \"title\": \"Encouraging Discussion\"}", "{\"title\": \"RE: Rebuttal to Reviewer FcXi\", \"comment\": \"Thank you for your response. I am glad to see the updated manuscript, as it is significantly improved. In particular, I enjoyed reading the takeaway summarization parts. Also, the updated Figure 2 looks good to me. However, on the one hand, some concerns still remain -- e.g., W1 and W6 -- because the results and takeaways haven't changed. On the other hand, I acknowledge that some claims are more supported compared to the initial manuscript. Initially, my stance was leaning towards rejection, but now, due to the clarification of the claims within the paper, I find myself on the fence. I'd like to increase the score to 6.\"}", "{\"title\": \"Rebuttal to Reviewer oReR (PART 3/6)\", \"comment\": \"### **(W3) Lack of Deep Analysis:**\\n\\n**Reply:** We appreciate the reviewer's insightful feedback and acknowledge the importance of a detailed analysis of optimizer performance across different vision backbones. To address this, we provide a comprehensive explanation of the underlying reasons for the observed performance differences, using DeiT-S and ConvNeXt as exemplars. We also clarify where in the paper these issues are thoroughly analyzed and discuss the broader implications for future research.\\n\\n- **Section 2: Roadmaps of Vision Backbones and Optimizers**\\n\\n In Section 2, the paper categorizes vision backbones and optimizers based on their macro design and optimization strategies. This categorization serves as a foundation for understanding the interplay between network architectures and optimizers. The taxonomy of vision backbone architectures highlights the evolution from hierarchical to isotropic designs, and the intra-block micro design discusses the transition from homogeneous to heterogeneous structures.
This sets the stage for explaining why certain optimizers are more suitable for specific architectures.\\n\\n- **Section 3: Backbone-Optimizer Coupling Bias (BOCB)**\\n\\n Section 3 presents the empirical findings of the backbone-optimizer benchmark, revealing the phenomenon of BOCB. The paper observes that classical CNNs like VGG and ResNet exhibit a marked co-dependency with SGD, while modern backbones like Vision Transformers (ViTs) and ConvNeXt perform better with adaptive learning rate optimizers like AdamW. This observation is supported by extensive experiments on CIFAR-100, ImageNet-1K, and COCO datasets.\\n\\n- **Section 4: Where does the BOCB come from?**\\n\\n Section 4 provides a deeper analysis of the factors contributing to BOCB. It explores the origins of BOCB from two perspectives: macro design and token mixer trade-offs.\\n\\n1. **Macro Design and Token Mixer Trade-off**:\\n - **Foundational Backbones**: Early CNNs like AlexNet and VGG established a fundamental paradigm in computer vision. These architectures featured a straightforward design of stacked convolutional and pooling layers, which was effective but set the stage for later alterations of the optimization landscape.\\n - **Classical Backbone Advancements**: The introduction of ResNet marked a pivotal shift towards stage-wise hierarchical designs, significantly enhancing feature extraction and representation learning. ResNet-50, in particular, exhibited strong compatibility with SGD optimizers and relatively low BOCB compared to its contemporaries.\\n - **Modern Backbone Evolution**: The transition to modern DNN backbones introduced simplified block-wise designs (such as MetaNeXt for ConvNeXt variants) or complex block-wise heterogeneous structures (such as MogaNet and UniRepLKNet), increasing the optimization challenge and the degree of BOCB due to their sophisticated feature extraction mechanisms.\\n\\n2.
**Practical Scenarios: Pre-training and Transfer Learning**:\\n - The paper extends its analysis to practical tasks such as object detection and pose estimation on COCO. It observes that optimizers like AdamW, which exhibited a reliable peak in performance during pre-training, sustain their superiority in transfer learning scenarios. This suggests that the choice of optimizer during the pre-training phase can significantly influence the transfer learning outcomes.\\n\\n**Take-Home Message**\\n\\nThe paper's take-home message is that the interplay between backbone designs and optimizer selections significantly impacts the performance and adaptability of vision models. Understanding this interplay, as highlighted in Sections 2-4, is crucial for designing future vision backbones and selecting appropriate optimizers. The empirical findings and deeper analysis provide actionable insights for mitigating BOCB and enhancing training efficiency and performance in computer vision applications.\"}", "{\"title\": \"Rebuttal to Reviewer JghS (PART 2/3)\", \"comment\": \"### **(W3) Empirical reasoning and impact on accuracy is not convincing enough.**\\n\\n**Reply:** Thanks for the constructive suggestion. On the one hand, since this study aims to uncover the complex interactions between backbones and optimizers, the one-hot image classification task is the most fundamental and robust deep learning task for investigating backbone networks and optimizers. It is an ideal research object with few confounding variables or hyper-parameters. We also provided large-scale experiments on ImageNet-1k and transfer learning experiments on COCO to verify our findings in Section 3. On the other hand, the paper investigates various aspects such as performance stability, hyper-parameter robustness, and parameter patterns with well-studied metrics to provide a comprehensive understanding of BOCB rather than only focusing on the simple accuracy metric.
The empirical findings are intended to stimulate further research into the underlying mechanisms and theoretical explanations of BOCB.\\n\\n### **(W4) Section 4 is like reverse-reasoning from previous sections, where some theoretical or in-depth analysis is expected.**\\n\\n**Reply:** Thanks for your constructive comment. We would like to clarify two aspects. As for the writing structure of this manuscript, we aim to define the BOCB phenomenon and show the landscape of BOCB with standard benchmarks and analysis on CIFAR-100 in Section 3 and further provide empirical explanations with several cases in Section 4. Therefore, it looks like attribution analysis and reverse-reasoning of the definition and phenomenon of BOCB in Section 3. In the latest revision, we summarize the in-depth findings as takeaways in Section 4 to explain potential causes of BOCB and propose useful network design tips and recommendations for useful optimizers (in Appendix D.5).\\n\\nAs for theoretical analysis, it might not be suitable for the studied BOCB problem. Firstly, although the optimizers are studied from theoretical perspectives, the network design is a large research topic, and it is not easy to provide theoretical formulations. Our studies first unveil the coupling bias of various backbones and existing optimizers with empirical benchmarks and visualizations, which contain many different cases that need to be analyzed. Due to the page limitation of the conference, it is better to explain case-by-case and provide takeaways in Section 4. Secondly, our studies serve as a starting point for deeper theoretical and experimental investigations of the BOCB problem. We encourage the community to explore these insights further and develop more rigorous theoretical frameworks to explain the observed phenomenon. 
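As one example of such a theoretically grounded tool, the power-law (PL) exponent alpha fitted to the tail of a layer's weight eigenvalue spectrum can be estimated with a Hill-type maximum-likelihood fit. The sketch below is our own simplified illustration (the function names are ours, and production tools additionally fit the tail cutoff x_min rather than fixing a tail fraction):

```python
import numpy as np

def hill_alpha(samples, x_min):
    """Maximum-likelihood (Hill) estimate of the exponent alpha of a
    power-law density p(x) ~ x^{-alpha} on the tail x >= x_min."""
    tail = np.asarray(samples, dtype=float)
    tail = tail[tail >= x_min]
    return 1.0 + len(tail) / float(np.sum(np.log(tail / x_min)))

def weight_pl_alpha(weight, tail_fraction=0.5):
    """PL exponent of the eigenvalue spectrum of W^T W.  A smaller
    alpha (heavier tail) is read as a signature of more strongly
    correlated, harder-to-optimize layers.  Simplified sketch: the
    tail cutoff is fixed instead of being fitted."""
    w = np.asarray(weight, dtype=float)
    evals = np.sort(np.linalg.eigvalsh(w.T @ w / w.shape[0]))
    tail_start = evals[int(len(evals) * (1 - tail_fraction))]
    return hill_alpha(evals, tail_start)
```

On synthetic Pareto data with a known exponent, `hill_alpha` recovers the exponent closely; applied layer-by-layer to a trained backbone, the resulting alpha values give the kind of spectrum-based diagnostic referred to above.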
Analysis tools like PL Exponent Alpha also have in-depth theoretical support.\"}", "{\"title\": \"Rebuttal to Reviewer JghS (PART 3/3)\", \"comment\": \"## Response to Questions\\n\\n---\\n\\n### **(Q1) How does each of the specific update steps in Section 2.2 affect the performance of different architectures?**\\n\\nAs discussed in Section 2.2 and Section 4, it is important to understand how each specific update in Step 2 and Step 3 in Algorithm 1 affects the performance and BOCB properties of various architectures. Here, we explain these for the four steps in Algorithm 1:\\n\\n* **Step 1: Gradient Computation.**\\n\\n As is well known, computing gradients via the backpropagation algorithm is fundamental for effective DNN optimization and is handled automatically by deep learning libraries like PyTorch and TensorFlow.\\n\\n* **Step 2: Gradient Estimation.**\\n\\n - **Impact:**\\u00a0How gradients are estimated varies across optimizers. Techniques like momentum smooth the gradient estimates, which can be particularly beneficial for architectures with complex loss landscapes. Different architectures may have varying sensitivities to outliers and noise in the gradients.\\n\\n - **Example:**\\u00a0Momentum-based methods like SGD with momentum are well-suited for classical CNNs with hierarchical structures, as they help navigate the loss landscape more smoothly. Meanwhile, modern DNNs like Vision Transformers (ViTs) with complex attention mechanisms may require more precise gradient estimates to avoid optimization instabilities, since they are more challenging to optimize than classical CNNs like ResNet, as explained in [1].\\n\\n* **Step 3: Learning Rate Calculation.**\\n\\n - **Impact:**\\u00a0Whether to use adaptive learning rates to adjust the parameter-wise learning rate can be crucial for network optimization.
As for modern DNNs with a heterogeneous block-wise design of token mixing and channel mixing, optimizers with adaptive learning rates are crucial for adjusting the learning rates of parameters in different blocks with varying gradient scales [2].\\n\\n - **Example:**\\u00a0As discussed in Section 2.2, optimizers of Categories (2) and (3) are effective for modern architectures using the macro design of token mixing and channel mixing, especially ViTs or DNNs with self-attention blocks. As explained in [2], the self-attention block is a high-order operation with a high Lipschitz constant and high Hessian condition numbers, which is heterogeneous to classical convolutions or the FFN block. Therefore, the learning rates of different types of blocks should be dynamically adjusted with special mechanisms, which might be the cause of the backbone-optimizer coupling bias. Without adaptive learning rates (e.g., using SGD instead of Adam), it is hard to suit the different parts of modern networks, which leads to poor results.\\n\\n* **Step 4: Parameter Update.**\\n\\n - **Impact:**\\u00a0The parameter update step, including regularization techniques like weight decay, can significantly affect the convergence and generalization of different architectures.\\n - **Example:**\\u00a0Weight decay is particularly important for preventing overfitting in deep networks like ResNet, while modern architectures like ConvNeXt.V2 may benefit from more sophisticated regularization techniques.\\n\\nWhile these thoughts are based on empirical observations and the background of previous works, they provide a foundation for future theoretical and experimental investigations into the specific effects of each update step on different architectures.
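To make the contrast between Step 2 (momentum-based gradient estimation) and Steps 3-4 (parameter-wise adaptive learning rates with decoupled weight decay) concrete, the following minimal NumPy sketch of single-step updates (illustrative code written for this reply, not our benchmark implementation) shows why one shared global learning rate struggles when gradient scales differ across blocks, while an AdamW-style update does not:

```python
import numpy as np

def sgd_momentum_step(p, g, buf, lr=0.1, momentum=0.9):
    """Step 2 flavor: smooth the gradient estimate with momentum;
    every parameter shares one global learning rate."""
    buf = momentum * buf + g
    return p - lr * buf, buf

def adamw_step(p, g, m, v, t, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.05):
    """Steps 2-4 flavor: momentum-smoothed gradient (m), a
    parameter-wise adaptive learning rate from the second moment (v),
    and decoupled weight decay applied directly to the parameters."""
    m = betas[0] * m + (1 - betas[0]) * g
    v = betas[1] * v + (1 - betas[1]) * g ** 2
    m_hat = m / (1 - betas[0] ** t)  # bias correction
    v_hat = v / (1 - betas[1] ** t)
    p = p - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * p)
    return p, m, v

# Parameters whose gradients differ by orders of magnitude, mimicking
# heterogeneous blocks (e.g., attention vs. FFN) inside one network.
p = np.zeros(3)
g = np.array([1e-3, 1.0, 1e3])

p_sgd, _ = sgd_momentum_step(p, g, np.zeros(3))
p_adamw, _, _ = adamw_step(p, g, np.zeros(3), np.zeros(3), t=1)
print(p_sgd)    # update magnitude scales with |g|: tiny vs. huge steps
print(p_adamw)  # roughly equal-magnitude steps across parameters
```

The SGD update is proportional to the raw gradient, so the three parameters receive steps spanning six orders of magnitude, whereas the AdamW-style update normalizes each step by its own second-moment estimate, which is exactly the parameter-wise adjustment discussed in Step 3.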
The empirical findings of our studies highlight the need for a more nuanced understanding of these interactions, which can guide the development of more effective optimization strategies and network designs.\\n\\n### Reference\\n\\n[1] How Do Vision Transformers Work? In ICLR, 2022.\\n\\n[2] Understanding Optimization of Deep Learning via Jacobian Matrix and Lipschitz Constant. In arXiv, 2023.\\n\\n---\\n\\nIn conclusion, thanks again for your constructive feedback, and we have considered your valuable comments for the revision and future work. We believe that the issues raised by the reviewers will be addressed in responses and the revision, and we hope you can reconsider our submission. If you are satisfied with our response and effort, please consider updating your score. If you need more explanation, please feel free to contact us!\"}", "{\"title\": \"Response to Reviewer FcXi\\u2019s Feedback\", \"comment\": \"We would like to express our sincere appreciation for your thoughtful feedback and acknowledgment of the improvements in the revised manuscript. We are pleased to hear that you found the takeaway summarization parts and the updated Figure 2 to be valuable. We also appreciate your willingness to increase the score to 6, reflecting the progress made in addressing the concerns raised. We would like to address the remaining concerns, mainly related to W1 and W6, as follows.\\n\\n### **Response to W1: Soundness of Claims and Objectivity of Results.**\\n\\nThank you for your recognition of the improvements in the clarity of our claims. We understand that in the initial manuscript, there were concerns about the objectivity of some of our statements regarding the results. Specifically, some claims may have appeared overly subjective, potentially overstating the generalizability of the findings. In response to this, we have made significant adjustments in the revision, particularly in how we present the relationship between backbone architectures and optimizers. 
The claims in the updated version are now more cautious and nuanced, and we have carefully avoided overstating the implications of our results.\\n\\nWe would like to emphasize that while the results themselves have not changed, the revised claims reflect a more balanced and empirically grounded interpretation. We have consciously weakened the language where necessary to ensure that the conclusions are stated with the appropriate level of caution, highlighting the trends observed rather than asserting definitive conclusions. This revision ensures that our findings are presented objectively and in a manner that is more aligned with the data. We hope that this addresses your concern, and we believe it resolves the issue of subjectivity without altering the core results.\\n\\n### **Response to W6: Takeaways and Novelty.**\\n\\nRegarding your comments on the novelty of the takeaways, we understand the concern that the takeaways might not appear to be groundbreaking at first glance. However, we would like to emphasize that the main contribution of our paper lies in identifying and thoroughly analyzing the Backbone-Optimizer Coupling Bias (BOCB)\\u2014a critical and previously underexplored issue in vision model design. While the takeaways themselves may seem straightforward, they are the direct result of highlighting a new and significant phenomenon in the community: the interaction between backbone architectures and optimizers, which has been largely overlooked until now.\\n\\nTo further clarify the novelty of our work, we would like to point out that the appendix contains additional, more detailed takeaways that are novel and practically useful. In Appendix Table A4 and Appendix D.5, we have provided a ranking of optimizers and a series of practical guidelines based on the BOCB benchmark. 
These insights are designed to help practitioners make informed decisions when selecting optimizers for different backbone architectures, offering actionable recommendations grounded in our empirical analysis. We intentionally placed these detailed takeaways in the appendix to ensure the main text remained focused on the core narrative without overwhelming the reader with excessive detail.\n\nBy presenting these more direct and actionable takeaways in the appendix, we provide a comprehensive set of practical tools for the community to engage with the findings. This strategic organization allows us to maintain the clarity and focus of the main manuscript while still offering the depth of insight expected from such a nuanced analysis. We believe this approach, which balances clarity in the main text with richness in the appendix, makes the paper more accessible to a wide audience, from researchers to practitioners.\n\n### **Clarifying the Broader Impact of BOCB.**\n\nWe agree with your suggestion that a broader community understanding of BOCB is crucial. Our work aims not only to provide new insights into optimizer-backbone interactions but also to highlight an issue that we believe has been largely ignored in the current literature. By systematically analyzing the Backbone-Optimizer Coupling Bias, we hope to foster further discussion and exploration in the community, leading to more effective and optimized vision models. This paper\u2019s focus on BOCB offers a new perspective that could be explored in future research, helping to refine how we design and optimize deep learning models.\n\n---\n\nWe believe these clarifications would address your concerns and strengthen the technical rigor of the manuscript! We respectfully hope that this work can be seen by more researchers and practitioners in the community. Thus, your rating is particularly valuable to us. 
We would also welcome a further increase in your rating, and we would be happy to provide more information based on your feedback or further questions. Thanks again for your efforts; we look forward to your feedback!\"}", "{\"title\": \"Encouraging Final Check and Feedback\", \"comment\": \"Dear Esteemed Reviewers,\n\nWe hope this message finds you well. We are writing to express our profound gratitude for the invaluable feedback and thoughtful discussions we have engaged in over the past few weeks. Your insights have been pivotal in significantly elevating the quality of our manuscript, and the improvements we have made are substantial. We believe the current version of the manuscript now stands as a testament to the collaborative effort and dedication we have all invested.\n\nWe understand that we are currently unable to submit further revisions, but we want to assure you that we remain fully committed to considering any additional suggestions or concerns you may raise. Should you find the revised manuscript satisfactory, we kindly request that you consider increasing the score accordingly. We are also eager to continue our dialogue and are open to further discussions to address any lingering doubts or questions you might have.\n\nMoreover, we wish to emphasize that we are willing to make further refinements to the manuscript based on any subsequent discussions. Your continued engagement and feedback are of utmost importance to us, and we are dedicated to ensuring that our work meets the highest standards of quality and relevance.\n\nYour participation and the time and effort you have invested in reviewing our work are deeply appreciated. We are hopeful that this revised manuscript will gain the recognition it deserves within the community and contribute meaningfully to the field.\n\nThank you once again for your unwavering support and constructive feedback. 
We look forward to the possibility of further collaboration and discussion, and we remain committed to refining our work to meet your expectations.\n\nWarmest regards,\n\nAuthors\"}", "{\"title\": \"Rebuttal to Reviewer oReR (PART 1/6)\", \"comment\": \"## Response to Weakness\n---\n\n### **(W1)\u00a0Lack of Related Work Discussion**\n\n**Reply:** We appreciate the reviewers' suggestions and acknowledge the importance of referencing related works on the optimization challenge for Transformers. We have incorporated these references into our revised manuscript (e.g., Introduction) to provide a more comprehensive context for our study, and have added a section of related research to the Appendix. There are two aspects of relevant studies:\n\n**(1) Task-specific analysis of optimization challenges of Transformers.**\nExisting works have investigated the optimization challenges of Transformers with different types of optimizers (e.g., SGD and Adam) in various task-specific scenarios. As for NLP tasks, [1, 2] analyzed and explained why Transformers are hard and unstable to train from the view of Pre-LN vs. Post-LN [1] and the view of inductive bias [2]. As for Vision Transformers, [3] demonstrates that AdaFactor, with appropriate scaling, can train Transformers both efficiently and effectively at large parameter scales, while [4] shows that SGD requires a frozen embedding layer and warm-up strategies to achieve competitive performance on Transformers (similar findings are also reported in MoCo.V3).\n\n**(2) Improvements of network macro designs.**\nRegarding the macro design of Transformer variants, modifications to normalization layers and residual connections can mitigate the difficulty of training heterogeneous blocks of self-attention and FFN. These techniques are discussed in Section 2 and Appendix A. 
Specifically, as for the normalization in Transformers, the block-wise design with normalization layers, such as Pre-Norm vs Post-Norm [5] and DeepNorm [6], has been maintained in modern Transformers. Meanwhile, improvements to the residual branch with initialization (e.g., ReZero [7]) and adaptive layer-wise scaling tricks (e.g., GradInit [8] and LayerScale [9]) are also useful for improving the training stability of Transformers and making SGD more compatible with Transformer training. However, even with such adjustments, optimizers like AdamW still tend to be more effective for Transformers, as they better handle the sparse gradient updates and the large parameter space typical of these models [10].\n\n### Reference\n\n[1] Understanding the Difficulty of Training Transformers. In EMNLP, 2020.\n\n[2] Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent. In EMNLP, 2021.\n\n[3] Scaling Vision Transformers. In CVPR, 2022.\n\n[4] How to Fine-Tune Vision Models with SGD. In arXiv, 2022.\n\n[5] On Layer Normalization in the Transformer Architecture. In ICML, 2020.\n\n[6] Scaling Transformers to 1,000 Layers. In arXiv, 2022.\n\n[7] ReZero is All You Need: Fast Convergence at Large Depth. In UAI, 2021.\n\n[8] GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training. In NeurIPS, 2021.\n\n[9] Going deeper with image transformers. In ICCV, 2021.\n\n[10] Why Transformers Need Adam: A Hessian Perspective. In NeurIPS, 2024.\"}
I would still like more in-depth analyses: if the authors dug into the observations and into why they may arise, with another level of experiments (visualizations, some indicators) to validate them -- that would help make the paper convincing. Also, even the proposed BOCB bias is not so clear from the current experiments. Unfortunately, I have decided to keep the current score.\"}
Still, I am not confident about it for a study paper, for which I would expect some theoretical induction or in-depth analysis to make it more convincing.\"], \"questions\": \"What are the authors' thoughts about each of the specific update steps in section 2.2 affecting the performance of different architectures?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to Reviewer oReR (PART 2/6)\", \"comment\": \"### **(W2.1) Assumptions and Claims:**\n\n**Reply:** We appreciate the reviewer's comment and acknowledge that the assumption of complete independence between backbones and optimizers is indeed a simplification. The statement in Line 197 reflects a common starting point in discussions about model components, where it is often implicitly assumed that well-established backbones and optimizers should be broadly applicable without significant tuning. However, as the reviewer rightly points out, this assumption is not entirely accurate, and our work aims to address this gap by empirically exploring the interplay between vision backbones and optimizers.\n\nOur study, as detailed in the paper, reveals that certain optimizers are indeed more effective for specific network architectures, particularly as architectures evolve from classical CNNs to modern DNNs. For instance, classical CNNs like VGG and ResNet exhibit a marked co-dependency with SGD, while modern backbones such as Vision Transformers and ConvNeXt perform better with adaptive learning rate optimizers like AdamW. This observation underscores the need for careful selection of optimizers based on the architectural characteristics of the backbone.\n\nMoreover, our analysis extends beyond just optimizer selection to include the robustness of hyper-parameters and the layer-wise patterns of backbone parameters. 
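To make "layer-wise patterns of backbone parameters" concrete, here is a minimal sketch of one way such patterns can be quantified (the helper names, toy weights, and the entropy summary below are our illustrative assumptions, not the exact metrics used in the paper):

```python
import math

def layer_l2_norms(named_params):
    """Per-layer L2 norms of parameter tensors (given as flat float lists)."""
    return {name: math.sqrt(sum(w * w for w in ws))
            for name, ws in named_params.items()}

def norm_entropy(norms):
    """Shannon entropy of the normalized layer-norm distribution.

    High entropy: norms are spread evenly across layers (homogeneous).
    Low entropy: a few layers dominate (heterogeneous parameter space).
    """
    total = sum(norms.values())
    probs = [n / total for n in norms.values() if n > 0]
    return -sum(p * math.log(p) for p in probs)

# Toy example: an evenly scaled network vs. one with a dominant layer.
homogeneous = {"layer1": [1.0, 1.0], "layer2": [1.0, 1.0], "layer3": [1.0, 1.0]}
skewed = {"layer1": [10.0, 10.0], "layer2": [0.1], "layer3": [0.1]}

h_hom = norm_entropy(layer_l2_norms(homogeneous))
h_skew = norm_entropy(layer_l2_norms(skewed))
assert h_hom > h_skew  # uniform layer norms maximize entropy
```

Under this toy summary, a backbone whose entropy stays high across layers would correspond to the "stable parameter patterns" discussed here.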
We find that well-designed methods, such as those with robust hyper-parameter settings and stable parameter patterns, are less susceptible to BOCB. This suggests that while model size and overfitting are indeed important factors, the choice of optimizer and its interaction with the backbone architecture play a crucial role in determining overall performance.\\n\\n### **(W2.2) There are other factors, such as model size and overfitting, that come into play:**\\n\\n(i)\\u00a0**Overfitting and Training Loss:**\\n\\n$\\\\quad$ Our primary focus on top-1 accuracy is driven by its role as a clear and widely accepted metric for evaluating the performance of different backbone-optimizer combinations. While training loss is undoubtedly a crucial indicator, it can be highly sensitive to specific training setups, including regularization techniques and data augmentation strategies. By emphasizing top-1 accuracy, we aim to provide a more objective and comparable measure across diverse experimental configurations. \\n\\n(ii)\\u00a0**Optimizer-Specific Architectures:**\\n\\n$\\\\quad$ Our study is designed to explore the general interplay between backbones and optimizers rather than focusing on architectural designs that may be tailored for specific optimizers. Although it is true that some modern architectures are developed with particular optimizers in mind, our objective is to uncover broader insights into the relationship between these components. By assessing the performance of various architectures across a range of optimizers, including plain SGD, we seek to illuminate the inherent biases and limitations of both the architectures and the optimizers.\\n\\n(iii)\\u00a0**Hyper-parameter Tuning:**\\n\\n$\\\\quad$ We leveraged the NNI (Neural Network Intelligence) toolbox for hyperparameter tuning to ensure a fair and systematic comparison across various optimizers and backbones. 
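Schematically, such a tuning loop can be sketched as follows (a toy random-search stand-in; the objective function and its peak location are invented for illustration, and this is not our actual NNI configuration):

```python
import math
import random

def evaluate(lr, weight_decay):
    """Stand-in for a real training run that returns top-1 accuracy.

    Hypothetical objective: peaks near lr=1e-3, weight_decay=0.05."""
    return (80.0
            - 5.0 * abs(math.log10(lr) + 3.0)
            - 50.0 * abs(weight_decay - 0.05))

def random_search(n_trials=50, seed=0):
    """Log-uniform search over learning rate, uniform over weight decay."""
    rng = random.Random(seed)
    best_acc, best_cfg = -float("inf"), None
    for _ in range(n_trials):
        lr = 10 ** rng.uniform(-5, -1)   # log-uniform in [1e-5, 1e-1]
        wd = rng.uniform(0.0, 0.2)
        acc = evaluate(lr, wd)
        if acc > best_acc:
            best_acc, best_cfg = acc, {"lr": lr, "weight_decay": wd}
    return best_acc, best_cfg

best_acc, best_cfg = random_search()
```

A real run replaces `evaluate` with an actual training job; the manual step described above then narrows the search ranges around the returned `best_cfg`.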
The NNI toolbox provides a robust framework for hyperparameter search, enabling us to explore a wide range of configurations efficiently. However, we recognize that domain-specific knowledge plays a crucial role in identifying optimal hyperparameters. To enhance the effectiveness of our tuning process, we also conducted manual validation and optimization on top of the automated search provided by NNI. This dual approach allowed us to fine-tune the hyperparameters based on expert insights, leading to more refined and contextually appropriate settings.\"}", "{\"title\": \"Rebuttal to Reviewer FcXi (PART 4/4)\", \"comment\": \"### **(W5) The claim in L354 is somewhat contrary to the observation of modern architecture in some research.**\n\n**Reply:**\u00a0The claim in L354 that stage-wise hierarchical and attention mechanisms can navigate complex optimization landscapes is indeed speculative and is made in comparison with vanilla ViTs. As for the aspect of complex optimization landscapes [6], the macro design of MetaFormer, i.e., the hierarchical stage-wise design and the heterogeneous block-wise design of token mixing and channel mixing with LayerInit or ResScale [7], can alleviate the optimization drawbacks of the vanilla Transformer (isotropic stage-wise design with self-attention mechanism) by introducing inductive bias in the stage-wise design and alleviating the heterogeneity of the attention block and the FFN block [7]. Actually, this claim is presented as a potential direction for future studies rather than a definitive conclusion. We acknowledge the complexity of optimization landscapes and suggest that further research is needed to understand these mechanisms fully. As for performance and generalization capacities, modern DNNs with the macro design of MetaFormer can achieve better performance and generalization abilities, with the flexibility to migrate to new scenarios, as we explained in (W4). 
Generally speaking, this claim is not definitive but rather a hypothesis supported by empirical findings. Our studies suggest that further research is needed to understand the implications of coupling strength fully.\n\n### Reference\n\n[6] How Do Vision Transformers Work? In ICLR, 2022.\n\n[7] Normformer: Improved Transformer Pretraining with Extra Normalization. In arXiv, 2021.\n\n### **(W6) Straightforward takeaways lack novelty.**\n\n**Reply:** As we mentioned in responses to (W1), we have summarized take-home messages for network design guidance and recommended optimizers. Meanwhile, this paper aims to raise the Backbone-Optimizer Coupling Bias problem that has been overlooked for years since Transformers and ViTs came out, which might provide a solid foundation for future research. Therefore, we conducted comprehensive benchmarks and empirical analysis to reach the takeaways and findings that demonstrate the BOCB phenomenon. We believe these are actually the novelty and contributions of our manuscript, rather than proposing a new backbone instance or a specific analysis of network design. Meanwhile, the improvement of the macro design for backbone architectures is incremental but significant (e.g., the MetaFormer design is the most useful version, combining a vast range of Mixer networks). Our manuscript could also strengthen the existing macro design and offer valuable insights for practitioners and researchers alike.\n\n### **(W7) Minor issues: The readability of Table 3 and training dynamics can be enhanced.**\n\n**Reply:**\u00a0Thanks for your suggestion. The minor issue regarding Table 3 will be addressed by improving the table layout to enhance readability. Meanwhile, the paper will consider including a more comprehensive analysis of training dynamics, as suggested, to strengthen the overall presentation and depth of the study. 
The transfer learning experiments, currently in Appendix B.2, will be highlighted in the main text to ensure they are given the attention they deserve. The paper's analysis provides a comprehensive understanding of the interactions between optimizers and backbones, which is a valuable contribution to the field.\\n\\n---\\n\\nOverall, thanks again for your constructive feedback, and we have considered your valuable comments for the revision and future work. If you are satisfied with our response and effort, please consider updating your score. If you need any clarification, please feel free to contact us. We respectfully believe that this work is attuned to the ICLR community, and we hope that more researchers can see our work in the community. We are more than pleased and looking forward to hearing back from you!\"}", "{\"title\": \"Response to Reviewer JghS's Late-Stage Comments and Non-Engagement\", \"comment\": \"Dear Reviewer JghS,\\n\\nWe appreciate you taking the time to provide a final comment. However, we must respectfully express our significant concerns about the review process in this case. While other reviewers, particularly Reviewer FcXi, have **engaged actively** throughout the discussion period and acknowledged the **substantial improvements** in our revised manuscript (\\\"significantly improved... 
enjoyed reading the takeaway summarization parts\\\"), we note with disappointment that **this is your first engagement** since the initial review, despite our detailed responses to each of your concerns.\\n\\n### About our revisions:\\nOur revised manuscript has incorporated substantial improvements, including:\\n\\n- Enhanced theoretical foundations and empirical evidence supporting the findings;\\n- Clearer presentation of takeaway messages and optimizer recommendations in practice;\\n- More rigorous analysis of architecture components and their interaction with optimizers;\\n- Additional experiments validating our conclusions;\\n\\n### About the ICLR 2025 Reviewer Guidelines:\\nThe **ICLR 2025 Reviewer Guidelines** explicitly emphasize that reviewers should **\\\"engage in discussion\\\"** and be **\\\"actively engaged during this phase,\\\"** maintaining **\\\"a spirit of openness to changing their initial recommendation.\\\"** Furthermore, reviewers are expected to provide **\\\"constructive, thorough and timely\\\"** comments. In light of these explicit requirements, the **complete absence** of any discussion or feedback during the entire rebuttal period has prevented us from addressing the potential concerns you may have had about our revisions. **We strongly encourage Reviewer JghS to review the ICLR 2025 Reviewer Guide (https://iclr.cc/Conferences/2025/ReviewerGuide) to better understand the expectations and responsibilities of ICLR reviewers.**\\n\\nWe find it concerning that such a broad judgment of analysis depth is made without any specific examples or substantive feedback, particularly after **remaining silent throughout the entire reviewer-author discussion period**. Making such sweeping claims without engaging with our detailed responses or providing concrete suggestions for improvement does not align with the review standards that ICLR expects. 
At this **very late stage**, suggesting the need for additional depth without having participated in any previous discussion or provided specific guidance is **neither constructive nor scientifically sound** within the conference timeline.\n\nWe remain confident in the significant contributions of this work and its value to the ICLR community, as evidenced by the positive comments from other reviewers who have engaged thoroughly with our revisions and responses.\n\nBest regards,\n\nAuthors\"}", "{\"title\": \"Rebuttal to Reviewer JghS (PART 1/3)\", \"comment\": \"## Response to Weaknesses\n\nWe first express our gratitude for your valuable and constructive reviews, and invite you to go through the general response. Then, we respond to your concerns point-by-point as follows.\n\n---\n\n### **(W1) Lack of significant contributions as a conference paper.**\n\n**Reply:** We understand your concern about whether our work, which does not center on proposing new techniques, is significant enough to serve as a conference paper. After revision, we believe that our paper makes a substantial contribution to the community of network design and optimization in several aspects:\n\n$\\quad$ **(1) Novel Phenomenon Identification.** We have identified and characterized the Backbone-Optimizer Coupling Bias (BOCB) phenomenon, which, to our knowledge, has not been previously explored in the literature. As illustrated in Figure 2 of the latest revision, this phenomenon has been overlooked for years since Transformers and ViTs emerged and became widely adopted. 
Meanwhile, this phenomenon has significant implications for the design and training of visual networks, affecting both pre-training and transfer learning to various scenarios.\\n\\n$\\\\quad$ **(2) Empirical Breadth.** As empirical studies with benchmarks, our study is one of the most comprehensive empirical analyses to date, evaluating 20 different vision backbones against 20 optimizers across CIFAR-100 and verifying our findings with typical backbones and 20 optimizers on ImageNet-1k and COCO. This breadth of analysis provides valuable insights into the generalizability of BOCB across different model architectures and optimization algorithms.\\n\\n$\\\\quad$ **(3) Practical Relevance.** The insights gained from our study can guide practitioners in selecting appropriate optimizers for their specific model architectures, potentially improving performance and reducing development time. As summarized in the latest revision, we provided several take-home messages for network design and recommended optimal optimizers after considering several aspects (detailed in Appendix D.5). This practical relevance is crucial for the advancement of the field of computer vision and deep learning optimization.\\n\\n### **(W2) Hand-waving claims about BOCB in Section 4. More rigorous derivations and inductive bias will greatly improve.**\\n\\n**Reply:** Thanks for the insightful comment. To address the concern of hand-waving claims, we have revised this section to provide a more systematic analysis and foundations. We formalized the concept of inductive bias in vision backbones by analyzing the architectural components of CNNs and ViTs. 
Specifically, we linked the local connectivity and hierarchical feature extraction in CNNs to the homogeneous parameter updates assumed by SGD, while the global token-mixing and isotropic designs in ViTs were shown to create optimization landscapes better suited for adaptive optimizers like AdamW, which can dynamically adjust learning rates for heterogeneous parameter groups. To substantiate these claims, we introduced a mathematical characterization of layer-wise parameter distributions using PL exponents and entropy metrics, demonstrating that modern backbones with more heterogeneous parameter spaces intrinsically require adaptive optimization strategies, thereby providing a theoretical basis for BOCB.\\n\\nMeanwhile, we conducted new ablation studies to validate these findings. By isolating key architectural components such as token mixers, attention mechanisms, and block structures, we examined their individual contributions to optimization dynamics. For instance, we found that replacing attention-based token mixers in ViTs with simpler operations reduces BOCB but compromises model capacity for long-range feature interactions. These findings were further supported by experiments measuring the robustness of hyper-parameter settings across backbones and optimizers, revealing consistent patterns that align with our theoretical framework. These revisions collectively provide a deeper and more rigorous explanation of BOCB, addressing the reviewer's concerns and strengthening the manuscript's contributions to understanding the interplay between backbones and optimizers.\"}", "{\"title\": \"Rebuttal to Reviewer oReR (PART 4/6)\", \"comment\": \"### **(W4) Clarifying the trajectory of vision backbone macro design and its impact on the optimization landscape.**\\n\\n**Reply:** We appreciate the reviewer's feedback and acknowledge the need for a more transparent explanation of the claim made in Line 376. 
The intention of this statement is to highlight the evolving nature of vision backbone architectures and their implications on the optimization landscape. We understand that the clarity of this connection may not have been sufficiently conveyed through Figure 1 alone. To address this, we provide a more detailed explanation below.\\n\\n**Clarification of the Claim**\\n\\nThe claim in Line 376 refers to the progressive changes in the macro design of vision backbones and how these changes have influenced the optimization landscape over time. The trajectory from simpler to more complex architectures has introduced new challenges in training, which are reflected in the evolving optimization strategies required to achieve state-of-the-art performance.\\n\\n**Figure 2: Evolution of vision backbone architectures and optimizers**\\n\\nFigure 2 chronicles the development of vision backbone architectures and their corresponding optimizers, illustrating the interplay between network complexity and optimization strategies. The timeline is divided into three main phases: Primary CNNs (2012-2014), Classical CNNs (2015-2018), and Modern DNNs (2019-2024). Early models like AlexNet and VGG, characterized by simple, isotropic architectures with stacked convolutional and pooling layers, facilitated effective training with basic optimizers like SGD. The introduction of ResNet marked a significant advancement with residual connections and hierarchical designs, maintaining a balance between performance and optimization complexity, still favoring SGD but with increased challenges. Modern architectures, exemplified by ConvNeXt and ViT, feature complex block-wise designs such as MetaFormer blocks and self-attention mechanisms, necessitating more sophisticated optimizers like AdamW due to the heightened optimization landscape complexity. 
This evolution underscores the critical interplay between network architecture design and optimization strategies, highlighting the need for increasingly sophisticated optimizers as network complexity escalates.\"}", "{\"title\": \"Rebuttal to Reviewer oReR (PART 6/6)\", \"comment\": \"### **(W6) BOCB in CNNs and the design of FFN blocks**\\n\\n**Reply:** We appreciate the reviewer's insightful observation regarding the link between the Backbone-Optimizer Coupling Bias (BOCB) in Convolutional Neural Networks (CNNs) and the design of Feed-Forward Network (FFN) blocks. To clarify this point, we provide a refined analysis of the experimental results and visualizations that support our claim.\\n\\n**Experimental Results and Analysis**\\n\\n**Table 1: Top-1 Accuracy on CIFAR-100**\\n\\nIn Table 1 of the main paper, we present the top-1 accuracy results for various vision backbones paired with different optimizers on the CIFAR-100 dataset. Notably, backbones such as ConvNeXt-T and MogaNet-S, which incorporate complex FFN blocks, exhibit significant performance variations when paired with different optimizers. For instance, while ConvNeXt-T achieves high accuracy with optimizers like AdamW and LAMB, its performance drops notably with SGD and LARS. This variability underscores the sensitivity of these models to the choice of optimizer, indicative of a strong BOCB.\\n\\n**Table 2: Top-1 Accuracy on ImageNet-1K**\\n\\nExtending our analysis to the ImageNet-1K dataset (Table 2), we observe similar trends. ConvNeXt-T and MogaNet-S, when trained with SGD, exhibit a marked decrease in performance compared to their results with adaptive learning rate optimizers. 
This further reinforces the notion that the design of FFN blocks in these models introduces complexity into the optimization landscape, necessitating more robust optimization strategies.\n\n**Ridge Plot Visualizations**\n\nTo further elucidate the reasons behind this sensitivity, we provide ridge plots of the $L_2$-norm parameter patterns for these models (Figure A3 in Appendix D). These plots reveal distinct layer-wise patterns in the learned parameters, which can be attributed to the specific design of the FFN blocks.\n\n**Figure A3: Ridge Plot of $L_2$-norm of Learned Parameters on CIFAR-100**\n\n- **ConvNeXt-T (Figure A3(k)):**\u00a0The $L_2$-norm distribution shows significant variability across layers, particularly in the FFN blocks. This variability suggests that SGD struggles to maintain stable updates across these layers, leading to suboptimal performance.\n- **MogaNet-S (Figure A3(l)):**\u00a0Similar to ConvNeXt-T, MogaNet-S exhibits high $L_2$-norm variability, especially in the layers associated with complex token-mixing operations. This complexity exacerbates the BOCB, making adaptive optimizers like AdamW more suitable.\n\n**Explanation of BOCB in CNNs with FFN Blocks**\n\nThe design of FFN blocks in models like ConvNeXt and MogaNet introduces additional layers of complexity into the optimization process. These blocks, often implemented as Point-wise Convolutions or inverted bottleneck layers, are susceptible to overfitting without proper regularization. The intricate interactions within these blocks create a challenging optimization landscape that fixed learning rate optimizers like SGD find difficult to navigate effectively.\n\nIn contrast, adaptive learning rate optimizers, such as AdamW, can dynamically adjust the learning rates based on the historical statistics of gradients. 
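To make this contrast concrete, here are toy single-scalar versions of the two update rules (illustrative sketches only, not the paper's training code; the hyperparameter values are arbitrary defaults):

```python
def sgd_step(w, g, lr=0.1, momentum=0.9, buf=0.0):
    """SGD with momentum: one global learning-rate scale for every parameter."""
    buf = momentum * buf + g
    return w - lr * buf, buf

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.05):
    """AdamW: per-parameter scale lr / (sqrt(v_hat) + eps), decoupled decay."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)   # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)   # bias-corrected second moment
    w = w - lr * (m_hat / (v_hat ** 0.5 + eps) + wd * w)
    return w, m, v

# With gradients of very different magnitudes (as across heterogeneous FFN
# layers), SGD's step size varies by orders of magnitude, while AdamW's
# normalized step stays roughly constant.
w_small, _ = sgd_step(1.0, 1e-3)              # tiny update
w_large, _ = sgd_step(1.0, 1e+1)              # huge update
a_small, _, _ = adamw_step(1.0, 1e-3, 0.0, 0.0, 1)
a_large, _, _ = adamw_step(1.0, 1e+1, 0.0, 0.0, 1)
```

In this sketch, the AdamW updates for the two gradient scales are nearly identical in magnitude, while the SGD updates differ by four orders of magnitude.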
This adaptability allows them to better handle the complex interactions within the FFN blocks, leading to more stable and effective updates.\\n\\nOur detailed analysis, supported by empirical results and visualizations, demonstrates the significant impact of FFN block design on the BOCB in CNNs. The complexity introduced by these blocks necessitates the use of robust optimization strategies, such as adaptive learning rate optimizers, to achieve optimal performance. This insight underscores the importance of considering both the architectural design and the optimization strategy when developing vision backbones.\"}", "{\"title\": \"Rebuttal to Reviewer oReR (PART 5/6)\", \"comment\": \"### **(W5) Harmonizing macro design with optimizers to refine the optimization landscape**\\n\\n**Reply:** We appreciate the reviewer's insightful question regarding the statement: \\\"This innovative macro design [of MetaFormer] refines the optimization landscape by harmonizing with optimizers, leading to reduced BOCB and enhanced performance.\\\" To clarify this point, we provide a detailed explanation of how the macro design of MetaFormer achieves this harmonization and refinement.\\n\\n**Understanding BOCB and Its Implications**\\n\\n$\\\\quad$ The Backbone-Optimizer Coupling Bias (BOCB) phenomenon refers to the observed dependency between the design of vision backbones and the choice of optimizers. A strong BOCB indicates that a particular backbone architecture performs significantly better with specific optimizers, while a weak BOCB suggests that the backbone can achieve optimal performance across a broader range of optimizers. 
The latter scenario is generally more desirable as it enhances the flexibility and practicality of the backbone in various deployment scenarios.\\n\\n**Macro Design and Optimization Landscape**\\n\\n$\\\\quad$ The macro design of a vision backbone encompasses the overall architectural layout, including stage-wise and block-wise structures, as well as the choice of core operators (e.g., convolutions, self-attention). The optimization landscape refers to the set of challenges and complexities that the optimizer must navigate to train the model effectively.\\n\\n**MetaFormer's Innovative Design**\\n\\n$\\\\quad$ MetaFormer introduces a balanced and versatile macro design that incorporates both stage-wise and block-wise heterogeneity. This design is characterized by:\\n\\n1. **ResScale**: A layer-wise initialization trick that stabilizes the training of deep models, reducing the risk of optimization issues such as vanishing gradients.\\n2. **Flexible Token Mixers**: The ability to integrate various token mixers (e.g., identity, pooling, attention, convolution) within the same framework, allowing for a more adaptable and robust optimization process.\\n\\n**Harmonizing with Optimizers**\\n\\n$\\\\quad$ The harmonization between the MetaFormer architecture and optimizers is achieved through several key design principles:\\n\\n1. **Balanced Complexity**: By balancing the complexity of the token mixers and the overall architecture, MetaFormer ensures that the optimization landscape is neither too simple (leading to underfitting) nor too complex (leading to overfitting and optimization difficulties). This balance makes the model more amenable to a wide range of optimizers.\\n2. **Robustness to Hyperparameters**: The design of MetaFormer, particularly the use of ResScale and flexible token mixers, enhances the robustness of the model to variations in optimizer hyperparameters. 
This robustness reduces the sensitivity of the model to the specific settings of the optimizer, thereby minimizing BOCB.\n3. **Efficient Optimization**: The hierarchical and block-wise heterogeneous design of MetaFormer facilitates more efficient gradient propagation and parameter updates. This efficiency allows optimizers to navigate the optimization landscape more effectively, leading to faster convergence and better performance.\n\n**Empirical Validation**\n\n$\\quad$ Our empirical experiments, as detailed in the manuscript, demonstrate that MetaFormer backbones exhibit reduced BOCB compared to other architectures. For instance, MetaFormer variants like ConvFormer show consistent performance across a variety of optimizers, indicating a weaker BOCB. This consistency is a direct result of the harmonized macro design that refines the optimization landscape.\"}", "{\"title\": \"General Response\", \"comment\": \"We would like to extend our sincere thanks to Reviewers oReR, JghS, and FcXi for their insightful comments and constructive feedback. In response to their suggestions, we have made significant revisions to the manuscript to enhance its clarity, empirical robustness, and the depth of its contributions. The key points of revision are highlighted in $\\color{Brown}brown$ color.\n\n**1. Clarifying the Research Problem and Contributions**\n\n$\\quad$ This study addresses a crucial yet under-explored phenomenon in visual representation learning, termed Backbone-Optimizer Coupling Bias (BOCB). This phenomenon has been overlooked for years since Transformers came out and became widely used in various scenarios (as illustrated in Figure 2 in the revision). BOCB reflects the inherent dependencies between specific vision backbones, such as CNNs and ViTs, and the choice of optimizers, significantly affecting training stability and model generalization. 
Through a comprehensive empirical framework, we evaluated 20 vision backbones against 20 optimizers across multiple datasets (e.g., CIFAR-100 and ImageNet-1k), uncovering patterns of architectural and optimization interdependence. Key findings include the strong coupling of classical CNNs with SGD optimizers and the pronounced reliance of modern architectures like ViTs on adaptive optimizers such as AdamW. These insights not only deepen the theoretical understanding of backbone-optimizer interplay but also provide actionable guidelines for designing architectures and selecting optimizers that mitigate BOCB, ensuring more robust and generalizable vision models.\\n\\n**2. Summary of Revision Updates**\\n\\n$\\\\quad$ In this revision, we refined our analysis of how architectural choices in Vision Backbones interact with optimizers to influence BOCB. By systematically evaluating architectural refinements, we demonstrated how BOCB can be mitigated while preserving competitive performance. This work elucidates the mechanisms underlying BOCB and offers practical guidelines for designing resilient vision models. Additionally, we provide a deeper analysis of how BOCB manifests across architectures, emphasizing its implications for model generalization and robustness.\\n\\n$\\\\quad$ To further strengthen the manuscript, we developed a refined methodology for detecting and characterizing BOCB. This includes a detailed evaluation of optimizers, with a ranking of 20 widely used methods based on their effectiveness. Our analysis shows that certain optimizers, such as AdamW, are more robust against BOCB across diverse architectures. These findings, supported by extensive empirical evidence, offer generalizable recommendations for practitioners and researchers.\\n\\n$\\\\quad$ Lastly, we expanded our experimental scope to include additional datasets and architectures, enhancing the consistency and generalizability of our conclusions. 
Experiments on ImageNet, alongside CIFAR-100, confirm that BOCB is a pervasive issue in vision models and that our proposed solutions are effective across different scales and complexities of data. These revisions significantly advance the manuscript's contributions, providing a comprehensive framework for understanding and mitigating BOCB while addressing the reviewers' concerns.\n\nSincerely,\n\nAuthors\"}", "{\"title\": \"Response to Reviewer oReR\u2019s Feedbacks (PART 2/3)\", \"comment\": \"### **(W3)**\n> **The analysis lacks deep mathematical insights, merely observing phenomena.**\n\n**Reply:** Thank you for your suggestion; we have also provided deeper insights into the underlying mechanisms in the revision. However, we have to clarify that this work primarily focuses on empirical observations to study the interplay between vision backbones and optimizers.\n\n**(1) Empirical Control and Focus on Training Results:**\n\nOur experiments are designed with a rigorous control of variables to ensure the reliability of our observations. We meticulously benchmarked 20 representative backbones against 20 optimizers on mainstream vision datasets (CIFAR-100, ImageNet, and COCO). To isolate the effects of different optimizers on various backbones, we focus on the training results rather than the training process. These controls allow us to observe the phenomenon of BOCB directly by calculating the standard deviation and range of performance metrics to measure the risk of BOCB.\n\n**(2) Layer-wise Parameter Analysis:**\n\nAs for the concern about lacking deep mathematical analysis, we believe that a thorough layer-wise analysis of the learned parameters offers a more direct explanation of the cause of BOCB. We visualized the layer-wise patterns of learned parameters using ridge plots and calculated the PL exponent alpha metrics, providing insights into the parameter space and optimization complexity. 
Analysis metrics, like entropy and $L_2$-norm of parameters, offer a quantitative understanding of how different network layers influence the optimization process. For example, we found that the trivial parameter patterns of FFN modules might be the direct cause of the poor results of ConvNeXt-T and MogaNet-T, as shown in Figure 6 (e)-(g) in the latest revision. Similarly, the PL exponent alpha measures the fitting quality of models to a certain task, with smaller alpha values indicating better fitting [1]. This analysis helps us understand the intrinsic properties of different network architectures and their interactions with various optimizers.\n\n**(3) Transfer Learning for Parameter Initializations:**\n\nWe have already conducted transfer learning experiments on COCO to further explore the impact of different parameter initializations on the BOCB issue. As discussed in Sec. 4.2, these experiments involved pre-training models with different optimizers and evaluating their performance on downstream tasks such as object detection. The results consistently showed that optimizers like AdamW, which exhibited robust performance during pre-training, maintained their superiority in transfer learning scenarios. This suggests that the choice of optimizer during the pre-training phase significantly influences the transfer learning outcomes, thereby providing insights into the impact of model initialization on BOCB.\n\n---\n\n### **(W5)**\n> **The authors assert that MetaFormer balances optimization complexity, making it adaptable to various optimizers, but provide no evidence beyond top-1 accuracy, lacking deeper analysis.**\n\n**Reply:** We appreciate your insightful comment. However, there might be some misunderstanding. 
We delve into the details of our analysis to provide a more nuanced understanding of MetaFormer's optimization landscape.\\n\\n**Layer-wise Parameter Analysis and Figure 7:**\\n\\nFigure 7 presents a detailed analysis of the PL exponent alpha metrics for various backbones and optimizers on CIFAR-100. The PL exponent alpha [1] measures the fitting quality of models to a specific task. Smaller alpha values indicate better fitting, suggesting that the model is neither underfitting nor overfitting. For MetaFormer, we observe that the alpha values are consistently within a moderate range across different optimizers, indicating a well-balanced optimization landscape. For instance, the alpha values for MetaFormer (e.g., ConvFormer-S12) are generally between 2 and 4, which is indicative of a balanced optimization landscape. In contrast, other architectures like DeiT-S and MLP-Mixer-S exhibit more extreme alpha values, either too low (indicating underfitting) or too high (indicating overfitting). This variability highlights the robustness of MetaFormer's design in maintaining a balanced optimization landscape.\\n\\n**Top-1 Accuracy Tables and Hyper-parameter Robustness:**\\n\\nThe benchmarking tables provide empirical evidence to show that MetaFormer variants consistently achieve high top-1 accuracy across a range of optimizers, suggesting that MetaFormer is amenable to a wide range of optimizers. Meanwhile, our analysis of hyper-parameter robustness, as discussed in Section 3.2, measures the variation of optimal learning rates and weight decays across different optimizers. MetaFormer macro design demonstrates a relatively low variation in optimal hyper-parameters, indicating its robustness to different optimizers. 
This robustness further supports our claim that MetaFormer's optimization landscape is balanced and amenable to a wide range of optimizers.\"}", "{\"title\": \"Response to Reviewer oReR\u2019s Feedbacks (PART 3/3)\", \"comment\": \"### **(W6)**\n> **The authors claim that FFN blocks in models like ConvNeXt and MogaNet are prone to overfitting and difficult to optimize but lack empirical evidence, such as train vs. validation losses, to support these conclusions.**\n\n**Reply:** We appreciate the reviewer's insightful question regarding the susceptibility of FFN blocks to overfitting and the challenges they pose in optimization. To clarify, our statement about the FFN blocks in models like ConvNeXt and MogaNet being susceptible to overfitting while being hard to optimize is based on a combination of empirical observations and theoretical understanding of these architectures.\n\n**Empirical Evidence and Analysis**\n\n* **(1) Training Dynamics and Overfitting**:\n\n - **ConvNeXt and MogaNet**: These models, particularly their FFN blocks, exhibit complex interactions that can lead to overfitting if not properly regularized. This is evident from the training dynamics observed during our experiments. For instance, when training ConvNeXt-T with SGD, we noticed that the model tends to overfit quickly, as indicated by a significant gap between training and validation losses. This overfitting behavior is less pronounced when using adaptive optimizers like AdamW, which can better navigate the intricate landscape of these blocks.\n- **Figure 6 (e) and (f)**: The ridge plots of the\u00a0$L_2$-norm of learned parameters for ConvNeXt-T and MogaNet-S (Figure 6 (e) and (f)) show higher variability in the parameter magnitudes across layers when trained with SGD compared to AdamW. 
This variability suggests that SGD struggles to maintain stable updates, potentially leading to overfitting.\n\n* **(2) Optimization Challenges**:\n\n - **Fixed Learning Rate Optimizers**: As mentioned, fixed learning rate optimizers like SGD find it difficult to navigate the complex optimization landscape created by the FFN blocks. This is supported by the performance metrics in Table 1, where ConvNeXt-T and MogaNet-S achieve significantly lower accuracy when paired with SGD compared to adaptive optimizers.\n\n - **Figure 7**: The PL exponent alpha metrics (Figure 7) further illustrate this point. For ConvNeXt-T, the alpha values are higher (indicating potential overfitting) when trained with SGD, whereas they are lower and more stable with AdamW.\n\n### Reference\n[1] Martin, C. H., et al. Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning.\u00a0JMLR, 22(165), 2021, 1-73.\n\n---\n\nOverall, we appreciate your help in improving our manuscript and believe that the current version of BOCB has the potential to spark wide investigation in the community. We sincerely hope that you can go through our responses and the revision and consider adjusting your rating accordingly if you are satisfied. We are also pleased to improve our paper according to your additional constructive comments if you have more concerns or questions about the current manuscript. Looking forward to your feedback soon!\n\nBest regards,\n\nAuthors\"}" ] }
9XXBsLWMF3
PRUC & Play: Probabilistic Residual User Clustering for Recommender Systems
[ "Wenyuan Wang", "Yusong Zhao", "Zihao Xu", "Hengyi Wang", "Shreya Venugopal", "Desmond Lobo", "Chengzhi Mao", "Qi Xu", "Zhigang Hua", "Yan Xie", "Bo Long", "Shuang Yang", "Hao Wang" ]
Modern recommender systems are typically based on deep learning (DL) models, where a dense encoder learns representations of users and items. As a result, these systems often suffer from the black-box nature and computational complexity of the underlying models, making it difficult to systematically interpret their outputs and enhance their recommendation capabilities. To address this problem, we propose *Probabilistic Residual User Clustering (PRUC)*, a causal Bayesian recommendation model based on user clustering. Specifically, we address this problem by (1) dividing users into clusters in an unsupervised manner and identifying causal confounders that influence latent variables, (2) developing sub-models for each confounder given the observable variables, and (3) generating recommendations by aggregating the rating residuals under each confounder using do-calculus. Experiments demonstrate that our *plug-and-play* PRUC is compatible with various base DL recommender systems, significantly improving their performance while automatically discovering meaningful user clusters.
[ "Recommendation System", "Causal Inference", "Bayesian Deep Learning" ]
Reject
https://openreview.net/pdf?id=9XXBsLWMF3
https://openreview.net/forum?id=9XXBsLWMF3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z1D6l4JTV1", "x6vNMNykpm", "sxq4Cf38Pl", "sWTopudIhv", "orSlc2J3B0", "iIZvFNQyNm", "hLsaB2HvtW", "gnLjdObVwY", "YBTbTjq797", "UXxMeHsIFz", "SIUVcLydTZ", "P62RZVdU70", "L2b47oQUTJ", "JRqkGNdRCw", "J5pzmSaWRs", "EIRdAJooFL", "Cn9GOnkqHF", "5gL52lHMwO", "0ptzGZVdwS" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1730709684609, 1730152877349, 1732764687582, 1733118015458, 1737523567131, 1732764273811, 1730639759424, 1732763983872, 1732764061503, 1730717986760, 1732764856515, 1732764311005, 1732764439337, 1733115858841, 1732764595592, 1733116793343, 1732764165773, 1733119333146, 1734764049046 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3282/Reviewer_oZTX" ], [ "ICLR.cc/2025/Conference/Submission3282/Reviewer_wNzu" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Reviewer_6N4v" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Reviewer_ny3m" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ "ICLR.cc/2025/Conference/Submission3282/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3282/Area_Chair_7XGa" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents Probabilistic Residual User Clustering (PRUC) as a solution for challenges in modern recommender systems relying on deep learning models. These systems are often complex and lack transparency, making it difficult to understand and improve recommendations. PRUC automatically categorizes users into clusters, identifies causal influences on hidden variables, and constructs specialized models for each cluster based on causal reasoning. By combining rating residuals based on causal factors using do-calculus, PRUC enhances recommendation quality. Experimental results indicate that PRUC can seamlessly enhance various deep learning recommender systems, leading to performance improvements and the discovery of meaningful user groupings in an automated fashion.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors designed a plug-and-play PRUC to enhance the performance of existing DL-based recommenders, capable of discovering meaningful user clusters and improving the interpretability of recommendation results.\\n\\n2. The authors simulated cold-start scenarios to test the PRUC method, validating its corresponding performance.\", \"weaknesses\": \"1. The experimental description in the paper is unclear. Why did the authors choose to conduct tests in cold-start scenarios rather than performance tests in full-shot scenarios? What are the specific meanings of source domain and target domain in the experimental section? Will the base models be pre-trained on the source domain?\\n\\n2. The lack of a more direct case study to demonstrate the role of user clustering makes it difficult to understand the motivation behind the paper.\\n\\n3. 
The selection of base models is limited, lacking some common collaborative filtering (CF) and GNN-based recommendation system methods, such as NCF [1] and LightGCN [2].\\n\\n[1] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In WWW. 173\\u2013182.\\n\\n[2] He, Xiangnan, et al. \\\"Lightgcn: Simplifying and powering graph convolution network for recommendation.\\\" Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 2020.\", \"questions\": \"Please refer to the issues mentioned in the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a Bayesian recommendation model to enhance cross-domain recommendations through a plug-and-play approach called Probabilistic Residual User Clustering (PRUC). PRUC creates user clusters based on latent confounders, increasing the model's interpretability.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Strong theoretical foundation\\n2. Addresses a problem with extensive real-world, especially industrial, applications\", \"weaknesses\": \"W1. Lacks comparison with previous state-of-the-art approaches.\\n\\nW2. Experiments are limited in scope, focusing only on user and item settings. For instance, while conducted on a reliable dataset, they do not extend to more complex datasets encompassing diverse domains (e.g., age, demographics) beyond geographical location.\", \"questions\": \"Q1. How does PRUC adapt to an online training setting where item popularity constantly changes?\\n\\nQ2. Are there any ablation studies examining the effect of the number of domains (M) on PRUC's performance? How does this hyperparameter impact the degree of model quality improvement PRUC provides?\\n\\nQ3. 
What are the system overheads of deploying PRUC compared to previous state-of-the-art methods? For example, can the authors provide an analysis of PRUC's impact on metrics such as Area Under Curve (AUC) to illustrate any potential system overheads from adopting this approach?\\n\\nQ4. Many modern recommendation systems treat recommendation as a binary classification problem (e.g., whether to recommend an item to a user). Could the authors explain how PRUC could enhance model quality if the recommendation task is framed in this binary classification context?\\n\\nQ5. In Table 3, where CKL is used as the base model, PRUC does not appear to improve overall model quality across different domains. Could the authors provide further explanation on this outcome?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [2/3]\", \"comment\": \"### Table B. 
Compare with NCF and LightGCN on Different User Clusters\\n\\n| Data | Cluster | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------ | ------- | ------------------ | --------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | 1 | NCF (Base Model) | 0.0090 | 0.0010 | 0.0019 | 0.0005 | 0.0005 |\\n| | | PRUC w/o Causality | 0.0232 | 0.0027 | 0.0035 | 0.0013 | 0.0014 |\\n| | | PRUC (Full) | **0.1581** | **0.0176** | **0.0476** | **0.0122** | **0.0093** |\\n| | 2 | NCF (Base Model) | 0.0165 | 0.0019 | 0.0032 | 0.0010 | 0.0010 |\\n| | | PRUC w/o Causality | **0.1603** | **0.0192** | **0.0366** | **0.0115** | **0.0102** |\\n| | | PRUC (Full) | 0.1062 | 0.0130 | 0.0280 | 0.0084 | 0.0069 |\\n| | 3 | NCF (Base Model) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |\\n| | | PRUC w/o Causality | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |\\n| | | PRUC (Full) | - | - | - | - | - |\\n\\n| Data | Cluster | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------ | ------- | -------------------- | ---------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | 1 | lightGCN (Base Model)| 0.0106 | 0.0014 | 0.0018 | 0.0008 | 0.0007 |\\n| | | PRUC w/o Causality | 0.0264 | 0.0029 | 0.0024 | 0.0012 | 0.0016 |\\n| | | PRUC (Full) | **0.1265** | **0.0137** | **0.0397** | **0.0097** | **0.0072** |\\n| | 2 | lightGCN (Base Model)| 0.0246 | 0.0028 | 0.0078 | 0.0019 | 0.0015 |\\n| | | PRUC w/o Causality | **0.1524** | **0.0183** | **0.0504** | **0.0129** | **0.0097** |\\n| | | PRUC (Full) | 0.0844 | 0.0101 | 0.0244 | 0.0067 | 0.0054 |\\n| | 3 | lightGCN (Base Model)| **0.0084** | **0.0008** | **0.0004** | **0.0003** | **0.0004** |\\n| | | PRUC w/o Causality | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |\\n| | | PRUC (Full) | - | - | - | - | - |\\n\\n\\n\\nTable A shows the performance of our PRUC with NCF and 
LightGCN as the base recommenders, tested with the same metrics as in the paper. We can see that our PRUC, even without the causality component (i.e., \u201cPRUC w/o Causality\u201d), can enhance the performance of different base models, and our full PRUC (i.e., \u201cPRUC (Full)\u201d) can further improve the results when using NCF as the base recommender.\nTable B shows the performance of PRUC compared with NCF and LightGCN as the base models on different user clusters from our selected data split. We can see that our PRUC, even without the causality component (i.e., \u201cPRUC w/o Causality\u201d), can enhance the performance of the base model consistently across clusters, and full PRUC (i.e., \u201cPRUC (Full)\u201d) can further improve the results in many cases. \n\n\n\n**Q2: \"Experiments are limited in scope, focusing only on user and item settings. For instance, while conducted on a reliable dataset, they do not extend to more complex datasets encompassing diverse domains (e.g., age, demographics) beyond geographical location.\"**\n\nYes, we agree that more complex datasets should encompass diverse domains beyond geographical location. Note that PRUC is a plug-and-play model capable of enhancing *any* deep learning recommender. It can be seamlessly adapted to other datasets through automatic user clustering, incorporating various domains such as age and demographic information.\n\n\n\n**Q3: \"How does PRUC adapt to an online training setting where item popularity constantly changes?\"**\nThis is a good question. PRUC can be adapted to datasets where item ratings are constantly changing. For example, in the online training setting, given an unseen user, it can be assigned to one of the three clusters, and the corresponding submodel will then learn the residual rating of this user, which stands for the difference between the ground truth (GT) and the base model's prediction. 
By following our update rules, PRUC can effectively learn and adapt to these changes, ensuring accurate and dynamic recommendations.\"}", "{\"title\": \"Sincerely Looking Forward to Further Discussion\", \"comment\": \"Dear Reviewer 6N4v,\\n\\nThank you for your review and engagement during the discussion period.\\n\\nIn response to your suggestions, we have:\\n\\n- Comprehensively refined the PDF. \\n\\n- Expanded the experimental details to provide greater clarity. \\n\\n- Included an additional appendix section investigating the relationship between user clusters and items.\\n\\nWith the ICLR Discussion period concluding soon (Dec. 2nd (AOE) for reviewers and Dec. 3rd (AOE) for authors), we kindly request your feedback on whether our responses address your concerns or if there are additional questions or suggestions you would like us to address.\\n\\nThank you once again for your time!\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [2/3]\", \"comment\": \"**Q3: \\\"The selection of base models is limited, lacking some common collaborative filtering (CF) and GNN-based recommendation system methods, such as NCF and LightGCN.\\\"**\\n\\nThank you for mentioning this. Following your suggestion, we have conducted extensive experiments on NCF and LightGCN models with one of our data splits (France, Italy, India $\\\\rightarrow$ Japan, Mexico). Table C and Table D below show the results:\\n\\n### Table C. 
Compare with NCF and LightGCN on Average\\nOn average of all clusters, the comparisons between PRUC and base models are as follow:\\n| Data | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------- | ------------------ | --------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | NCF (Base Model) | 0.0131 | 0.0015 | 0.0026 | 0.0008 | 0.0008 |\\n| | PRUC w/o Causality | 0.1056 | 0.0126 | 0.0235 | 0.0074 | 0.0067 |\\n| | PRUC (Full) | **0.1137** | **0.0137** | **0.0309** | **0.0090** | **0.0073** |\\n\\n\\n| Data | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------- | ------------------ | --------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | lightGCN (Base Model) | 0.0182 | 0.0021 | 0.0050 | 0.0014 | 0.0011 |\\n| | PRUC w/o Causality | **0.0940** | **0.0112** | **0.0289** | **0.0076** | **0.0059** |\\n| | PRUC (Full) | 0.0905 | 0.0106 | 0.0266 | 0.0072 | 0.0056 |\\n\\n\\n### Table D. 
Compare with NCF and LightGCN on Different User Clusters\\n\\n| Data | Cluster | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------ | ------- | ------------------ | --------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | 1 | NCF (Base Model) | 0.0090 | 0.0010 | 0.0019 | 0.0005 | 0.0005 |\\n| | | PRUC w/o Causality | 0.0232 | 0.0027 | 0.0035 | 0.0013 | 0.0014 |\\n| | | PRUC (Full) | **0.1581** | **0.0176** | **0.0476** | **0.0122** | **0.0093** |\\n| | 2 | NCF (Base Model) | 0.0165 | 0.0019 | 0.0032 | 0.0010 | 0.0010 |\\n| | | PRUC w/o Causality | **0.1603** | **0.0192** | **0.0366** | **0.0115** | **0.0102** |\\n| | | PRUC (Full) | 0.1062 | 0.0130 | 0.0280 | 0.0084 | 0.0069 |\\n| | 3 | NCF (Base Model) | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |\\n| | | PRUC w/o Causality | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |\\n| | | PRUC (Full) | - | - | - | - | - |\\n\\n| Data | Cluster | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------ | ------- | -------------------- | ---------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | 1 | lightGCN (Base Model)| 0.0106 | 0.0014 | 0.0018 | 0.0008 | 0.0007 |\\n| | | PRUC w/o Causality | 0.0264 | 0.0029 | 0.0024 | 0.0012 | 0.0016 |\\n| | | PRUC (Full) | **0.1265** | **0.0137** | **0.0397** | **0.0097** | **0.0072** |\\n| | 2 | lightGCN (Base Model)| 0.0246 | 0.0028 | 0.0078 | 0.0019 | 0.0015 |\\n| | | PRUC w/o Causality | **0.1524** | **0.0183** | **0.0504** | **0.0129** | **0.0097** |\\n| | | PRUC (Full) | 0.0844 | 0.0101 | 0.0244 | 0.0067 | 0.0054 |\\n| | 3 | lightGCN (Base Model)| **0.0084** | **0.0008** | **0.0004** | **0.0003** | **0.0004** |\\n| | | PRUC w/o Causality | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |\\n| | | PRUC (Full) | - | - | - | - | - |\"}", "{\"summary\": \"This paper presents Probabilistic Residual User 
Clustering (PRUC), a novel causal Bayesian model for recommendation systems that enhances interpretability and mitigates biases in deep learning-based recommenders. PRUC adopts a plug-and-play approach, making it compatible with various existing DL recommendation models. The experimental results demonstrate that PRUC consistently enhances the performance of multiple base DL recommender systems by addressing biases and uncovering latent user groupings.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThis paper introduces an innovative recommendation approach that uniquely applies causal inference to deep learning recommenders, addressing interpretability and bias correction.\\n\\n2.\\tThis paper addresses critical limitations in existing DL-based recommenders, particularly around transparency, interpretability, and domain shift adaptability, marking a significant advancement for recommender systems research.\", \"weaknesses\": \"1.\\tThe introduction is too brief, especially in describing the proposed method. Expanding this section would clarify the paper\\u2019s contributions and provide a more comprehensive overview of the approach.\\n\\n2.\\tThe \\\"Problem Setting and Notations\\\" subsection is disorganized and unclear, making it challenging for readers unfamiliar with this work to understand.\\n\\n3.\\tExperimental setup details are critical to the paper, yet they are insufficiently described, and additional information could not be found in the appendix.\\n\\n4.\\tThe paper lacks an in-depth analysis of the experimental results. Although many results are presented, the analysis remains overly simplistic, with no detailed examination in the appendix either.\", \"questions\": \"1.\\tCould you expand the introduction to provide a clearer and more comprehensive overview of PRUC? 
More details on how the proposed method addresses interpretability and bias correction would help readers understand its unique contributions early on.\\n\\n2.\\tCould you revise the Problem Setting and Notations subsection for better organization and clarity? A clear and well-structured presentation of the notations and problem setup would help readers understand your approach more effectively. \\n\\n3.\\tThe paper lacks sufficient detail on the experimental setup, which is critical for evaluating the model's effectiveness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [1/2]\", \"comment\": \"Dear Reviewer ny3m,\\n\\nThank you for your constructive and encouraging comments. We are glad that you found our method ``\\\"solid\\\"``, our experiments ``\\\"extensive\\\"``, providing ``\\\"robust validation of the model's effectiveness\\\"`` and demonstrating ``\\\"impact across various evaluation metrics and base recommender systems\\\"``. Below, we address your questions one by one. \\n\\n\\n**Q1: \\\"The presentation could benefit from refinement for clarity and conciseness. For instance, Equation 11 seems self-evident and might not require such an extensive derivation, as it could detract from the focus on more critical aspects.\\\"**\\n\\nThanks for your suggestion. We included the detailed derivation to ensure completeness and to cater to a broader audience, particularly those less familiar with the underlying concepts.\\n\\nHowever, we agree that this equation might appear self-evident to some readers. We will focus on summarizing derivations such as that of Equation 11 to improve clarity and conciseness.\\n\\n**Q2: \\\"The motivation for employing confounders or clustering to enhance interpretability lacks clarity. 
A more robust explanation of why these techniques specifically contribute to interpretability and how they address issues in recommendation systems would strengthen the paper.\\\"**\\n\\nThank you for your suggestion. Our current dataset comprises highly diverse user groups, posing a significant challenge for a single trained model to effectively accommodate such variability. To address this, our proposed method identifies distinct user clusters through advanced clustering techniques. For each identified cluster, we develop a specialized submodel designed to predict the residuals specific to that cluster. This cluster-specific approach allows our method to capture the unique characteristics of different user groups, leading to substantial performance improvements. Comparative analysis across clusters further validates the effectiveness of our approach.\\n\\n**Q3: \\\"The complexity and scalability of the approach are not thoroughly addressed. Providing insights into the computational demands of the clustering process and the scalability of the model when applied to large datasets would enhance its practical relevance.\\\"**\\n\\nThank you for your suggestion. For the clustering process, we utilize the Gaussian Mixture Model (GMM) algorithm to segment users, as detailed in our paper. Regarding scalability, the complexity of our method, PRUC, scales approximately linearly with the dataset size $N$, making it both efficient and practical for application to large datasets. 
This ensures that our approach remains computationally feasible even as the data size grows, thereby enhancing its usability in real-world scenarios.\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [2/2]\", \"comment\": \"**Q4. The paper would benefit from a more detailed evaluation of the quality of clustering and the confounder identification process.\\nAdditional metrics or analyses could validate the interpretability and meaningfulness of the clusters and confounders.**\\n\\nThank you for your suggestion. In our paper, we address the issue of cold start in recommender systems, as existing methods tend to perform poorly in this setting. Specifically, we identify two primary reasons for this: \\n\\n1. **User Diversity and Heterogeneity:** Users exhibit highly diverse and heterogeneous behavior, making it challenging to generalize effectively. \\n2. **Impact of Sparse Features:** In the cold start and multi-domain setting, models are prone to being influenced by sparse features, which negatively affects their performance. \\n\\nTo address these challenges, we propose our method, which automatically clusters users into meaningful groups and identifies causal confounders that influence latent variables. This approach enhances model performance by mitigating the underlying issues.\\n\\n**Additional Metrics and Analyses**\\n\\nTables A and B below present the average ratings and proportions of items being top-rated within the three inferred clusters, respectively.\\n\\n### Table A. 
Average Score\\n\\n\\n| Cluster | ASUS VS197T-P 18.5 (B00B2HH7GK) | BLU R1 HD ArmorFlex Case + Screen Protector \\u2013 White/Gold (B01GIRTG7G) | Kingston DT-Micro USB Flash 32GB, Nero (B009CMN3V0) |\\n|----------|----------------------------------|------------------------------------------------------------------------|---------------------------------------------------|\\n| cluster1 | 0.5325 | -0.0028 | 0.0312 |\\n| cluster2 | 0.0156 | 0.2922 | 0.0109 |\\n| cluster3 | 0.0429 | 0.0281 | 0.0873 |\\n\\n---\\n\\n### Table B. Proportion\\n\\n| Cluster | ASUS VS197T-P 18.5 (B00B2HH7GK) | BLU R1 HD ArmorFlex Case + Screen Protector \\u2013 White/Gold (B01GIRTG7G) | Kingston DT-Micro USB Flash 32GB, Nero (B009CMN3V0) |\\n|----------|----------------------------------|------------------------------------------------------------------------|---------------------------------------------------|\\n| cluster1 | 78.50% | 0 | 0 |\\n| cluster2 | 0 | 39.55% | 0 |\\n| cluster3 | 0 | 0 | 7.57% |\\n\\nThe tables demonstrate that PRUC effectively identifies meaningful user clusters, with each cluster exhibiting a distinct preference for one of the three electronic products.\\n\\nWe will include the above discussion in the camera-ready version of the paper.\"}", "{\"summary\": \"The paper proposes Probabilistic Residual User 
Extensive experiments provide robust validation of the model's effectiveness, demonstrating its impact across various evaluation metrics and base recommender systems.\", \"weaknesses\": \"1. The presentation could benefit from refinement for clarity and conciseness.\\\\\nFor instance, Equation 11 seems self-evident and might not require such an extensive derivation, as it could detract from the focus on more critical aspects.\\n2. The motivation for employing confounders or clustering to enhance interpretability lacks clarity.\\\\\nA more robust explanation of why these techniques specifically contribute to interpretability and how they address issues in recommendation systems would strengthen the paper.\\n3. The complexity and scalability of the approach are not thoroughly addressed.\\\\\nProviding insights into the computational demands of the clustering process and the scalability of the model when applied to large datasets would enhance its practical relevance.\\n4. The paper would benefit from a more detailed evaluation of the quality of clustering and the confounder identification process.\\\\\nAdditional metrics or analyses could validate the interpretability and meaningfulness of the clusters and confounders.\", \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [3/3]\", \"comment\": \"**Q4: \\\"Are there any ablation studies examining the effect of the number of domains (M) on PRUC's performance? How does this hyperparameter impact the degree of model quality improvement PRUC provides?\\\"**\\n\\nThank you for your great advice! We have conducted some preliminary experiments, which demonstrate that dividing users into a larger number of domains is also reasonable. 
However, to balance the complexity of the method with the model performance, we decided to set the number of domains (M) to three.\\n\\n**Q5: \\\"What are the system overheads of deploying PRUC compared to previous state-of-the-art methods? For example, can the authors provide an analysis of PRUC's impact on metrics such as Area Under Curve (AUC) to illustrate any potential system overheads from adopting this approach?\\\"**\\n\\nThanks for your great suggestion. We have added the AUC metric to evaluate the effectiveness of our methods, as shown in Table C below.\\n\\nTable C\\n| Cluster | AUC (Proposed Method) | AUC (PRUC w/o Causality) | AUC (Base Model) |\\n|---------|------------------------|--------------------------|------------------|\\n| 0 | 0.5830 | 0.4409 | 0.3696 |\\n| 1 | 0.4867 | 0.5265 | 0.4737 |\\n| 2 | - | 0.5498 | 0.4421 |\\n| **Weighted AUC@300** | **0.5690** | **0.5078** | **0.4290** |\\n\\nThe results show that PRUC performs well in terms of AUC score across all the user clusters. Also note that PRUC introduces minimal system overhead to augment the base models. \\n\\n**Q6: \\\"Many modern recommendation systems treat recommendation as a binary classification problem (e.g., whether to recommend an item to a user). Could the authors explain how PRUC could enhance model quality if the recommendation task is framed in this binary classification context?\\\"**\\n\\nThank you for your helpful suggestion. PRUC can indeed be adapted to binary classification tasks. We elaborate on this adaptation as follows: \\n\\n**Modifying Equation (6):** \\n The original equation, $R_{ij} - \\\\mathbf{u}_i^\\\\top \\\\mathbf{v}_j - \\\\mathbf{s}_m^\\\\top \\\\mathbf{w}_R$, can be reformulated to fit a logistic regression model. By applying a logistic function, our method can effectively handle binary classification tasks, where the target values are probabilities for binary outcomes. 
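To make the adaptation concrete, here is a minimal sketch of this logistic variant (a hypothetical illustration, not the authors' implementation; all factor values are made up):

```python
import numpy as np

def sigmoid(x):
    # Logistic function mapping a raw score to a probability.
    return 1.0 / (1.0 + np.exp(-x))

def predict_positive_prob(u_i, v_j, s_m, w_R):
    # Instead of fitting the raw residual R_ij - u_i^T v_j - s_m^T w_R,
    # squash the combined score through the logistic function so the
    # model outputs P(user interacts with item) for the binary task.
    score = u_i @ v_j + s_m @ w_R
    return sigmoid(score)

# Illustrative (not learned) latent factors.
u_i = np.array([0.5, -0.2])   # user embedding
v_j = np.array([0.3, 0.8])    # item embedding
s_m = np.array([1.0, 0.0])    # domain/confounder embedding
w_R = np.array([0.1, -0.4])   # residual weights

p = predict_positive_prob(u_i, v_j, s_m, w_R)
assert 0.0 < p < 1.0  # a valid probability for the binary decision
```

Training would then minimize a binary cross-entropy loss over observed interactions rather than the squared residual used for ratings.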
\\n\\nWe will incorporate this variant of PRUC and its implementation in the camera-ready version of the paper.\\n\\n**Q7: \\\"In Table 3, where CKL is used as the base model, PRUC does not appear to improve overall model quality across different domains. Could the authors provide further explanation on this outcome?\\\"**\\n\\nThank you for pointing this out. While there is a performance decrease in some clusters, we find that PRUC improves the overall performance. Moreover, the number of users in the clusters where the performance decreases is very limited. Additionally, we can achieve relatively high performance in these clusters by tuning parameters to trade off the performance in all clusters. \\n\\nWe will include this discussion in the camera-ready version of the paper. \\n\\n\\n\\n[1] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In WWW. 173\\u2013182.\\n\\n[2] He, Xiangnan, et al. \\\"Lightgcn: Simplifying and powering graph convolution network for recommendation.\\\" Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval. 2020.\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [3/3]\", \"comment\": \"Table C shows the performance of our PRUC with NCF and LightGCN as the base recommender, tested with the same metrics as in the paper. We can see that our PRUC, even without the causality component (i.e., \\u201cPRUC w/o Causality\\u201d), can enhance the performance of different base models, and our full PRUC (i.e., \\u201cPRUC (Full)\\u201d) can further improve the results when using NCF as the base recommender.\\nTable D shows the performance of PRUC compared with NCF and LightGCN as the base model on different user clusters from our selected data split. 
We can see that our PRUC, even without the causality component (i.e., \\u201cPRUC w/o Causality\\u201d), can enhance the performance of the base model consistently across clusters, and full PRUC (i.e., \\u201cPRUC (Full)\\u201d) can further improve the results in many cases. \\n\\n\\n[1] Lam, Xuan Nhat, et al. \\\"Addressing cold-start problem in recommendation systems.\\\" Proceedings of the 2nd international conference on Ubiquitous information management and communication. 2008.\\n\\n[2] Wei, Yinwei, et al. \\\"Contrastive learning for cold-start recommendation.\\\" Proceedings of the 29th ACM International Conference on Multimedia. 2021.\\n\\n[3] Wei, Jian, et al. \\\"Collaborative filtering and deep learning based recommendation system for cold start items.\\\" Expert Systems with Applications 69 (2017): 29-39.\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments\", \"comment\": \"Dear Reviewer 6N4v,\\n\\nThank you for your encouraging and constructive comments. We are glad that you found our method ``\\\"innovative\\\"`` and ``\\\"addressing interpretability and bias correction\\\"``, the problem we address ``\\\"critical\\\"``, and our paper ``\\\"a significant advancement for recommender systems research\\\"``. Below, we address your questions one by one. \\n\\n**Q1: \\\"The introduction is too brief, especially in describing the proposed method. Expanding this section would clarify the paper\\u2019s contributions and provide a more comprehensive overview of the approach.\\\"**\\n\\nWe apologize for any confusion. We appreciate your suggestion and have followed it by expanding the introduction, particularly the description of the proposed method. In response, we have significantly revised this section to provide a clearer and more comprehensive overview of the approach. 
The updated introduction highlights the key contributions of our method, ensuring that readers can easily understand its significance and how it advances the field. We hope that this revision addresses your problem effectively.\\n\\n**Q2: \\\"The 'Problem Setting and Notations' subsection is disorganized and unclear, making it challenging for readers unfamiliar with this work to understand.\\\"**\\n\\nThank you for highlighting the problem with our \\\"Problem Setting and Notations\\\" subsection. We fully acknowledge that clarity is crucial, especially for readers who may not be immediately familiar with our research context.\\n\\nIn light of your valuable feedback, we have comprehensively restructured the subsection to:\\n\\n1. Introduce notations and definitions more systematically.\\n2. Provide clearer explanations of each key variable and parameter.\\n3. Include a concise table of symbols and their meanings to enhance readability.\\n4. Reorganize the content to follow a more logical flow, guiding readers step-by-step through the problem setting.\\n5. Add brief explanatory remarks to contextualize why certain notations and settings are important for understanding our method.\\n\\nThese revisions aim to make the problem setting more accessible and transparent, helping readers better understand the fundamental concepts and theoretical framework underlying our research.\\n\\nWe appreciate your feedback in improving our paper's clarity and hope these changes effectively address your concerns.\\n\\n**Q3: \\\"Experimental setup details are critical to the paper, yet they are insufficiently described, and additional information could not be found in the appendix.\\\"**\\n\\nThank you for your suggestion. 
We have incorporated detailed descriptions of the experiments to further clarify our experimental setup and ensure transparency.\\n\\nFor example, in Section 3.1, we clarify our experimental settings as follows:\\n\\n**Source and Target Domain.** Regarding the source/target domain definition, the source domain contains countries where item ratings from all the users within it are used for base model training (finetuning), while the target domain contains countries whose users' ratings are used for both the training and testing sets. Specifically, for each user in the target domain, only one of their ratings is used for training while the rest are left for testing. Note that the base models were indeed finetuned on the source domain data. \\n\\nThank you again for your valuable feedback. \\n\\n**Q4: \\\"The paper lacks an in-depth analysis of the experimental results. Although many results are presented, the analysis remains overly simplistic, with no detailed examination in the appendix either.\\\"**\\n\\nThank you for your suggestion. To address this concern, we have expanded the analysis of our experimental results in the appendix. Specifically, we performed an in-depth investigation into the relationship between user clusters and items. For each user, we identified the item with the highest rating, recorded its rating, and visualized the results. This visualization clearly demonstrates the distinct preferences exhibited by different user clusters, providing valuable insights into how user clustering enhances the performance of our method. We believe this additional analysis significantly strengthens the interpretation and comprehensiveness of our experimental findings.\"}", "{\"title\": \"Sincerely Looking Forward to Further Discussion\", \"comment\": \"Dear Reviewer ny3m,\\n\\nThank you for your review and engagement during the discussion period.\\n\\nIn response to your suggestions, we have: \\n\\n- Revised the PDF for clarity and conciseness to improve readability. 
\\n\\n- Conducted a comprehensive theoretical analysis of the clustering method, focusing on its motivation, complexity, scalability, and role.\\n\\n- Performed additional experiments, as shown in **(Rebuttal) Table A** and **(Rebuttal) Table B**, which demonstrate that different clusters exhibit preferences for different items. \\n\\nWith the ICLR Discussion period concluding soon (Dec. 2nd (AOE) for reviewers and Dec. 3rd (AOE) for authors), we kindly request your feedback on whether our responses address your concerns or if there are additional questions or suggestions you would like us to address. \\n\\nThank you once again for your time! \\n\\nBest, \\n\\nThe Authors\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [1/3]\", \"comment\": \"Dear Reviewer wNzu,\\n\\nThank you for your encouraging and constructive comments. We are glad that you found our theory ``\\\"strong\\\"``, and that our method ``\\\"addresses a problem with extensive real-world, especially industrial, applications\\\"``. Below, we address your questions one by one. \\n\\n**Q1: \\\"Lacks comparison with previous state-of-the-art approaches.\\\"**\\n\\nThank you for your suggestion. We have added additional experiments to comprehensively demonstrate the effectiveness of our method. Specifically, we have conducted extensive experiments on NCF [1] and LightGCN [2] models with one of our data splits (France, Italy, India $\\\\rightarrow$ Japan, Mexico). Table A and Table B below show the results:\\n\\n### Table A. 
Compare with NCF and LightGCN on Average\\nOn average of all clusters, the comparisons between PRUC and base models are as follows:\\n| Data | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------- | ------------------ | --------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | NCF (Base Model) | 0.0131 | 0.0015 | 0.0026 | 0.0008 | 0.0008 |\\n| | PRUC w/o Causality | 0.1056 | 0.0126 | 0.0235 | 0.0074 | 0.0067 |\\n| | PRUC (Full) | **0.1137** | **0.0137** | **0.0309** | **0.0090** | **0.0073** |\\n\\n\\n| Data | Method | Recall@20 | F1@20 | MAP@20 | NDCG@20 | Precision@20 |\\n| ------------------------------------- | ------------------ | --------- | ------ | ------ | ------- | ------------ |\\n| France, Italy, India \\u2192 Japan, Mexico | lightGCN (Base Model) | 0.0182 | 0.0021 | 0.0050 | 0.0014 | 0.0011 |\\n| | PRUC w/o Causality | **0.0940** | **0.0112** | **0.0289** | **0.0076** | **0.0059** |\\n| | PRUC (Full) | 0.0905 | 0.0106 | 0.0266 | 0.0072 | 0.0056 |\"}", "{\"title\": \"Sincerely Looking Forward to Further Discussion\", \"comment\": \"Dear Reviewer oZTX,\\n\\nThank you for your review and engagement during the discussion period.\\n\\nIn response to your suggestions, we have:\\n\\n- Elaborated on the cold-start problem setup, clarified the meanings of the source domain and target domain, and provided additional details on the training process.\\n\\n- Explained the motivation behind user clustering in detail.\\n\\n- Conducted additional experiments with base models (**NCF** and **LightGCN**) to demonstrate the effectiveness of PRUC, as shown in **(Rebuttal) Table C** and **(Rebuttal) Table D**.\\n\\nWith the ICLR Discussion period concluding soon (Dec. 2nd (AOE) for reviewers and Dec. 
3rd (AOE) for authors), we kindly request your feedback on whether our responses address your concerns or if there are additional questions or suggestions you would like us to address.\\n\\nThank you once again for your time!\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Thank You for the Encouraging and Constructive Comments [1/3]\", \"comment\": \"Dear Reviewer oZTX,\\n\\nThank you for your constructive and encouraging comments. We are glad that you found our method to ``\\\"enhance the performance of existing DL-based recommenders, capable of discovering meaningful user clusters and improving the interpretability of recommendation results\\\"``, and PRUC's experiments ``\\\"validating its corresponding performance\\\"``. Below, we address your questions one by one. \\n\\n\\n\\n\\n**Q1: \\\"The experimental description in the paper is unclear. Why did the authors choose to conduct tests in cold-start scenarios rather than performance tests in full-shot scenarios? What are the specific meanings of source domain and target domain in the experimental section? Will the base models be pre-trained on the source domain?\\\"**\\n\\n\\nThis is a good question. We want to clarify the following points:\\n\\n**Cold-Start Settings.** Our work focuses on cold-start recommendation systems, an important problem in this field [1-3]. Specifically, we focus on multi-domain and diverse user scenarios. Existing methods perform poorly in this case because the users are very heterogeneous and model performance is easily affected by spurious features under the multi-domain, cold-start setting. 
In contrast, our method solves those problems by automatically dividing users into clusters and identifying causal confounders that influence latent variables, developing sub-models for each confounder given observable variables, and generating recommendations by aggregating the rating residuals under each confounder using do-calculus.\\n\\n**Source and Target Domain.** Regarding the source/target domain definition, the source domain contains countries where item ratings from all the users within it are used for base model training (finetuning), while the target domain contains countries whose users' ratings are used for both the training and testing sets. Specifically, for each user in the target domain, only one of their ratings is used for training while the rest are left for testing. Note that the base models were indeed finetuned on the source domain data. \\n\\n**Q2: \\\"The lack of a more direct case study to demonstrate the role of user clustering makes it difficult to understand the motivation behind the paper.\\\"**\\n\\nThank you for mentioning this. In a heterogeneous dataset containing different types of users, it is difficult to train one single model to address all of those users properly. For example, a dataset may contain users who usually purchase electronics and users who usually buy food, which makes it challenging for a single model to handle all the recommendations. \\n\\nTo address this problem, our method automatically divides users into clusters, so that the users buying electronics and the users buying food are separated into different clusters. Further, we assign a certain level of model capacity to different clusters; i.e., a smaller model is trained on each user cluster and used to predict the rating residuals and make recommendations based on the ratings. In those scenarios, significant performance improvement can be observed. 
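As a rough illustration of this cluster-then-specialize idea, the sketch below groups synthetic user embeddings with a Gaussian mixture (the clustering algorithm the rebuttal mentions) and fits a small residual model per cluster; the data, the `Ridge` submodels, and all names are hypothetical stand-ins, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical user embeddings and the residuals a base recommender leaves over.
user_emb = rng.normal(size=(300, 8))
residuals = rng.normal(size=300)

# Step 1: group heterogeneous users into clusters with a GMM.
gmm = GaussianMixture(n_components=3, random_state=0).fit(user_emb)
cluster_ids = gmm.predict(user_emb)

# Step 2: train one small submodel per (non-empty) cluster to predict residuals.
submodels = {}
for c in range(3):
    mask = cluster_ids == c
    if mask.any():
        submodels[c] = Ridge().fit(user_emb[mask], residuals[mask])

# Step 3: correct the base model's score with the cluster-specific residual.
def corrected_score(base_score, user_vec):
    c = int(gmm.predict(user_vec.reshape(1, -1))[0])
    if c not in submodels:
        return base_score  # fall back to the base prediction
    return base_score + float(submodels[c].predict(user_vec.reshape(1, -1))[0])

score = corrected_score(3.5, user_emb[0])
```

Assigning a separate lightweight submodel per cluster is what lets the electronics-buying and food-buying groups each get capacity tailored to their residual pattern.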
As shown in Table 3 and Table 4 of the paper, the prediction results of our method outperform single base models, such as CDL and DLRM, on different user clusters most of the time across various metrics.\"}", "{\"title\": \"Sincerely Looking Forward to Further Discussion\", \"comment\": \"Dear Reviewer wNzu,\\n\\nThank you for your review and engagement during the discussion period.\\n\\nIn response to your suggestions, we have:\\n\\n- Added base models and conducted experiments, as shown in **(Rebuttal) Table A** and **(Rebuttal) Table B**; included experiments for calculating the AUC metric, as shown in **(Rebuttal) Table C**. All experiments validate the effectiveness of our method. \\n\\n- Elaborated on the scalability of our method. \\n\\n- Explained the selection of the number of domains (M). \\n\\n- Clarified that PRUC is adaptable to binary classification tasks.\\n\\nWith the ICLR Discussion period concluding soon (Dec. 2nd (AOE) for reviewers and Dec. 3rd (AOE) for authors), we kindly request your feedback on whether our responses address your concerns or if there are additional questions or suggestions you would like us to address.\\n\\nThank you once again for your time!\\n\\nBest,\\n\\nThe Authors\"}", "{\"metareview\": \"This paper introduces Probabilistic Residual User Clustering (PRUC), a method designed to address the complexity and lack of transparency in modern deep learning-based recommender systems. These systems often pose challenges in understanding and optimizing recommendations. PRUC tackles this by automatically clustering users, uncovering causal relationships among latent variables, and creating specialized models for each cluster using causal reasoning. By leveraging do-calculus to integrate rating residuals influenced by causal factors, PRUC significantly improves recommendation quality. 
Experimental results show that PRUC can be seamlessly integrated into various deep-learning recommender systems, yielding performance gains and enabling the automated identification of meaningful user clusters.\", \"the_reviewers_propose_the_following_strengths_of_the_paper\": [\"The proposed idea is interesting and can solve real-world problems\", \"Extensive experiments are conducted to demonstrate the effectiveness of the proposed model\", \"However, many negative points are also proposed by the reviewers:\", \"The paper writing can be further improved.\", \"The motivation for employing confounders can be further clarified.\", \"Lacks comparison with previous state-of-the-art approaches.\"], \"additional_comments_on_reviewer_discussion\": \"In the rebuttal period, the authors have addressed many concerns of the reviewers. However, some concerns can not be easily alleviated (such as the motivation of the paper). Considering that all the reviewers give negative scores on the paper, I tend to reject the paper.\"}" ] }
9XETcRsufZ
Mixture of Parrots: Experts improve memorization more than reasoning
[ "Samy Jelassi", "Clara Mohri", "David Brandfonbrener", "Alex Gu", "Nikhil Vyas", "Nikhil Anand", "David Alvarez-Melis", "Yuanzhi Li", "Sham M. Kakade", "eran malach" ]
The Mixture-of-Experts (MoE) architecture enables a significant increase in the total number of model parameters with minimal computational overhead. However, it is not clear what performance tradeoffs, if any, exist between MoEs and standard dense transformers. In this paper, we show that as we increase the number of experts (while fixing the number of active parameters), the memorization performance consistently increases while the reasoning capabilities saturate. We begin by analyzing the theoretical limitations of MoEs at reasoning. We prove that there exist graph problems that cannot be solved by any number of experts of a certain width; however, the same task can be easily solved by a dense model with a slightly larger width. On the other hand, we find that on memory-intensive tasks, MoEs can effectively leverage a small number of active parameters with a large number of experts to memorize the data. We empirically validate these findings on synthetic graph problems and memory-intensive closed book retrieval tasks. Lastly, we pre-train a series of MoEs and dense transformers and evaluate them on commonly used benchmarks in math and natural language. We find that increasing the number of experts helps solve knowledge-intensive tasks, but fails to yield the same benefits for reasoning tasks.
[ "Mixture of Experts", "memorization", "reasoning" ]
Accept (Poster)
https://openreview.net/pdf?id=9XETcRsufZ
https://openreview.net/forum?id=9XETcRsufZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "hPFVDYXULk", "fFPurw9tiz", "SxEquurQFK", "SMfBmycunH", "J5b29WPulW", "GAe72DNMvy" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "meta_review", "decision" ], "note_created": [ 1730268378089, 1730700304357, 1730643041750, 1731368221706, 1734768976823, 1737524091709 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10917/Reviewer_toS5" ], [ "ICLR.cc/2025/Conference/Submission10917/Reviewer_7TRg" ], [ "ICLR.cc/2025/Conference/Submission10917/Reviewer_A95z" ], [ "ICLR.cc/2025/Conference/Submission10917/Reviewer_HxYz" ], [ "ICLR.cc/2025/Conference/Submission10917/Area_Chair_ja9y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes to study the performance tradeoffs between MoE models and standard dense transformers. More specifically, the paper proposes a theoretical analysis of MoE models at reasoning that validates the fact that MoEs can better handle knowledge-intensive tasks but are inferior to traditional dense transformers on those that need generalization ability for reasoning. Both synthetic datasets and real-world dataset verify the conclusion of this paper.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The paper is well-written and easy to follow, the problem this paper focuses is important to the community.\\n2. Theoretical analysis is sound.\\n3. The experiments are solid, the authors use synthetic data to provide a straightforward illustration and then extend to real-world datasets. The experiments support the claims of this paper sufficiently.\", \"weaknesses\": \"1. The proposed claim serves as a general conclusion. Have the authors tried on other reasoning tasks? For example, logical reasoning and code generation tasks are widely used to validate the effectiveness of reasoning.\\n2. 
Does the $k$ in top-k routing (k=2 in the main paper) influence the conclusion of this paper? And have the authors tried different routing mechanisms?\\n3. The analysis focuses on depth-1; however, in the synthetic data experiments, the authors use $L=12$ transformer layers to verify. Why did the authors not choose a 1-layer transformer to verify the results on synthetic data?\\n4. Are there MoE configurations that can theoretically perform better than dense transformers?\\n5. I'm curious whether the MoE performance on memory tasks and reasoning tasks is robust to the initialization or distribution of parameters.\", \"questions\": \"I am curious to hear the authors' thoughts regarding the reasoning ability of MoEs, and whether there exist possible future directions to mitigate the performance gap between MoEs and dense transformers? This is quite an important direction for both the development of MoEs and LLM reasoning.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores the performance of the Mixture-of-Experts (MoE) architecture in tasks requiring different capabilities, specifically memorization and reasoning. It examines how increasing the number of experts in MoEs affects performance, particularly comparing MoEs to dense transformers. The authors find that while MoEs improve memorization substantially, they fall short in reasoning tasks compared to dense transformers. The paper includes both theoretical and empirical analyses across synthetic graph problems, closed-book retrieval tasks, and benchmarks in math and natural language, ultimately concluding that MoEs do well in memory-intensive tasks but struggle with reasoning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
Interesting: interesting question to differentiate the performance of MoEs and dense transformers across task types, particularly highlighting the unique strengths of MoEs vs MLPs in a task-oriented way\\n2. Clarity: The paper is clear and well-organized\\n3. Glad generative tasks like GSM8K and MATH were included.\\n4. Solid motivation of MoEs in the literature in the intro.\", \"weaknesses\": \"Scientific Weakness\\n1. Insufficient Empirical Validation: The paper would benefit from a broader set of empirical evaluations beyond the synthetic tasks used. For example, adding MMLU subtasks (e.g., Abstract Algebra and College Mathematics are good starts. Under philosophy there is a formal logic task too. Big-Bench has an Induction task and Identify Theorem) and OlympiadBench [1] could provide richer insights into MoEs\\u2019 effectiveness on real-world tasks. This would also address the issue of over-reliance on benchmarks like GSM8K and MATH, which may be too narrow. Additional experiments at a larger model scale would also help validate the generalizability of the findings.\\n2. Limited Real-World Applicability: Many of the toy experiments and theoretical results are not clearly connected to practical outcomes. This connection should be made direct and explicit early in the paper. The assumptions made, especially regarding data distribution, need clearer justification to demonstrate their relevance in real-world applications. Without this, it\\u2019s difficult to assess if these experiments are accurate models for practical settings. A toy example that actually encompasses reasoning is needed for me to raise my score. Something that is real human reasoning and thus doesn\\u2019t have a deterministic algorithm that solves it (like random graphs or arithmetic).\\n3. 
Alternative Approaches Left Unexplored: The paper does not examine whether MoEs would perform better if used with experts trained via supervised fine-tuning (SFT) on domain-specific data (e.g., separate experts for medical, math, coding, etc.). This approach may be more practical and representative of real-world applications, as it allows fine-tuning the routing based on relevant domain experts. The authors might believe this is a seemingly irrelevant point but part of the reviewing process is to identify if the authors solved the right problem. I am suggesting the paper did not solve the right problem and at least an argument of why their paper matters in practice compared to this suggested approach is important. Having multiple experts fine-tuned (or continually pre-trained) on a high quantity of expert knowledge, combining them in an MoE and then fine-tuning the router is likely a more realistic setting. If not, the authors should explain why not and more clearly why their results are relevant in practice. I\\u2019d like to know why the authors think they solved \\u201cthe right problem\\u201d, see my question 7. \\n4. Ambiguity in Memorization Definition: The authors equate a large train-test accuracy gap with \\u201cmemorization,\\u201d but this is problematic. Memorization requires the exact reproduction of a training target, whereas the LLM evaluations typically do not evaluate entire reasoning sequences, and train errors are not zero. The mathematical benchmarks used only check the final answer and for the MATH dataset use solvers like SymPy to report \\u201creasoning accuracy\\u201d. This distinction in terminology affects the clarity and interpretation of the findings, and I would recommend revisiting and clarifying this.\\n5. Question 1 in the question section.\\n6. An intuitive explanation of your results is needed; see my question 8.\\n7. Given my reservations, I think the title is overstated and requires a more scientific name and thus revision. 
I recommend removing the word parrot from it and changing it in favor of something scientific.\\n\\nWilling to raise my score if the authors address the weaknesses well, ideally with experiments. It\\u2019s already sorted in order of importance/urgency. \\n\\nWriting Weakness\\n1. Abstract Needs Broader Implications: The final sentence of the abstract is vague about the implications of the work. Explicitly stating the broader significance of the findings would help.\\n2. Lack of Connection to Main Contribution in Abstract: Phrases such as \\u201cwe pre-train a series of MoEs and dense transformers and evaluate them on commonly used benchmarks in math and natural language\\u201d do not contribute to the clarity of the abstract. It would improve the paper if this evaluation connected back to the main contribution and quantified the practical significance of the results, which would help contextualize their importance.\\n3. Overemphasis on Model Scale: The claim that language models\\u2019 capabilities stem from simply scaling model size (i.e., parameter count) is misleading. The quality, diversity, and alignment of data to task objectives (aligning with the \\u201cno free lunch\\u201d theorem) are equally critical in model performance. Thus, this paper underplays the impact of data on performance, which should be revised in the introduction.\", \"questions\": \"1. Fairness in Model Width Comparison: Why don\\u2019t the authors match the width of the MoE to that of the dense model? A wider MoE model could potentially invalidate some of the theoretical findings and should be considered for comparison.\\n2. Practical Value of Theoretical Results: How does the theoretical analysis, particularly regarding graph problems, inform real-world applications? Are the assumptions made realistic? This should be stated explicitly.\\n3. Definition of Memorization: What is the precise definition of memorization used here? 
Typically, memorization implies reproducing the exact sequence encountered during training. Is this the criterion used? Or at least per-token accuracy? \\n3.1. Why do the authors think a large generalization gap is a good definition of memorization? Memorization should be a statement only about the training performance. Once the test error is included, we are instead thinking about lack of generalization, which is **not** the same as memorizing.\\n4. Random Graph as a Reasoning Task: Why does the shortest path in random graphs qualify as reasoning? Tasks that can be solved by a deterministic algorithm do not generally require reasoning but rather algorithmic execution.\\n5. Reverse Results in Practice: If the no free lunch theorem implies the necessity of specialized experts, then intuitively, MoEs should make sense for diverse tasks. Why, then, do the results show MoEs underperforming dense models in reasoning tasks? (note, the random graph problem has a deterministic algorithm and therefore does not count as real human reasoning. Coming up with the algorithm for that would have been the reasoning)\\n6. Memorization in Synthetic Tasks: For the phone book memorization task, how many epochs were used to achieve memorization? In other fields like vision, 100 epochs are often required for complete memorization. What was the exact train accuracy?\\n7. Limitations in Problem Framing: Why not model MoEs with SFTed experts for specific domains (e.g., medical, math, translation) and then fine-tune the router? This approach seems more relevant in practice than the current setup, as it could allow routing to be specialized by domain without imposing limitations on MoEs\\u2019 capacity to compete with dense MLPs.\\n8. Scaling Challenges in Reasoning: Why does scaling MoEs fail to match dense models in reasoning tasks? Can the theoretical results be explained intuitively? My intuition is that reasoning requires a lot of fact knowledge (e.g., theorem proving). 
Thus, shouldn\\u2019t \\u201cmemorizing\\u201d models be able to retrieve facts better and reason about mathematics better? Perhaps this implies the benchmarks chosen weren\\u2019t good and that OlympiadBench and the aforementioned MMLU benchmarks should have been chosen.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the difference between how performance scales with the number of experts on reasoning vs memorization tasks. Specifically, they show through theoretical results, synthetic datasets and common benchmarks that reasoning tasks scale more with the number of active parameters whereas memorization tasks scale more with the number of total parameters.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Tradeoffs between dense and sparse models are a very relevant problem as it\\u2019s a differentiator between many existing foundation models.\\n\\nThe authors clearly demonstrate an interesting finding that scaling experts improves knowledge/memorization tasks but has a lesser benefit on reasoning tasks (worse than dense activation-matched alternatives). \\n\\nThe results are very convincing, as this trend is demonstrated on theoretical tasks, synthetic tasks designed for reasoning/memorization and on popular benchmarks\\n- Experiments in each section seem reasonable to back up the claim!\\n\\nGraphs are very clear and the overall story is very easy to follow.\", \"weaknesses\": \"More elaboration on what is meant by \\u201cthis suggests that MoEs do have a bias towards learning functions that memorize the training data\\u201d when talking about figure 6a would be helpful:\\n- Wouldn\\u2019t this bias show up simply as lower perplexity? Why does the perplexity-accuracy relationship show this?\\n\\nMany tasks people care about require a mix of knowledge and reasoning. 
It would strengthen the study to include more targeted synthetic or real benchmarks designed to explicitly test the scaling properties for these tasks.\", \"questions\": \"Why did you choose different architectures for the synthetic and non-synthetic tasks?\\n\\nDid you consider looking at tasks that are a mix of both knowledge and reasoning (e.g., solving crossword puzzles)? It would be interesting to see more tradeoffs in these mixed tasks between scaling depth and number of experts.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper explores the effectiveness of Mixture-of-Experts (MoE) architectures compared to standard dense Transformers, focusing on their abilities in memorization and reasoning tasks. Using both theoretical analysis and extensive experiments, the authors demonstrate that MoEs excel in tasks requiring high memorization capacity but fall short in reasoning tasks compared to dense models. Through synthetic tasks, such as shortest path and phone-book memorization, and real-world benchmarks in natural language and mathematics, the authors establish that while increasing the number of experts in MoEs enhances memorization, it does not lead to equivalent improvements in reasoning tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Theoretical Foundation: The paper offers theoretical analysis, including formal proofs and establishing clear upper and lower bounds on MoEs' capabilities for both memorization and reasoning tasks. 
Utilizes communication complexity theory to highlight fundamental differences in parameter scaling and capabilities between MoEs and dense Transformers.\", \"empirical_validation\": \"Extensive experiments on both synthetic tasks (e.g., shortest path, phone-book memorization) and real-world benchmarks (natural language and mathematical reasoning tasks) support the theoretical claims.\\n\\nProvides insights that increasing the number of experts in MoEs enhances memorization performance but does not significantly improve reasoning abilities. Offers valuable guidance for model selection and scaling strategies, suggesting MoEs for memory-intensive tasks and dense models for reasoning tasks.\", \"weaknesses\": \"Limited Scale in Experiments: The largest models evaluated contain about 2.1 billion parameters, which is small compared to state-of-the-art models with tens or hundreds of billions of parameters. Uncertainty remains about how these results generalize to larger scales and whether larger MoEs might exhibit different performance characteristics.\", \"limited_exploration_of_moe_variants_and_routing_strategies\": \"Focuses mainly on standard MoE architectures with top-2 token-choice routing. Does not explore other routing mechanisms or architectural variations that might influence performance on reasoning tasks.\", \"diversify_reasoning_tasks\": \"The division of tasks into \\\"memorization\\\" and \\\"reasoning\\\" is somewhat binary. Many real-world tasks require a combination of both, which the paper does not deeply explore. It would be useful to include a broader range of reasoning tasks, such as logical reasoning, commonsense reasoning, multi-hop reasoning, and tasks from varied domains like code understanding. 
This could help analyze tasks that combine memorization and reasoning to reflect real-world complexities.\", \"ablation_studies\": \"There is a lack of ablation studies on hyperparameters such as depth, width, and number of experts, which could provide more insights into the factors influencing performance.\", \"questions\": \"Generality of Findings: Could the authors discuss how their findings might generalize to larger models or different types of tasks, especially those that blend memorization and reasoning?\", \"role_of_chain_of_thought_techniques\": \"Considering that Chain of Thought prompting has been shown to enhance reasoning capabilities in language models by encouraging step-by-step reasoning, have the authors considered integrating CoT techniques into MoE architectures? Chain of Thought techniques enable models to break down complex reasoning tasks into intermediate steps, which could help MoEs overcome the reasoning bottleneck identified in the paper. Exploring this integration might reveal new insights into how MoEs can be adapted or extended to better handle reasoning-intensive tasks. It would be interesting to investigate whether CoT could mitigate the limitations of MoEs on reasoning tasks and potentially improve their performance to match or exceed that of dense transformers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"(a) Summary of scientific Claims and Findings:\\nThe paper investigates performance tradeoffs between Mixture-of-Experts (MoE) and dense transformer architectures. 
The key findings are:\\n- As the number of experts increases (while fixing active parameters), memorization/recall performance improves while reasoning capabilities saturate\\n- The authors prove theoretical limitations showing certain graph problems cannot be solved by MoEs of a given width, while being solvable by slightly wider dense models\\n- MoEs can effectively leverage a small number of active parameters with many experts for memorization/recall tasks\\n\\n(b) Strengths:\\n1. Comprehensive analysis combining theoretical foundations, synthetic experiments, and real-world evaluations\\n2. Clear demonstration of architecture-specific tradeoffs with practical implications for model selection\\n3. Extensive experimental validation across multiple task types and benchmarks\\n4. Well-structured presentation with clear graphs and coherent narrative\\n5. Important and timely contribution given the growing use of MoE architectures\\n\\n(c) Weaknesses:\\n1. Scale limitations - experiments only conducted up to 2.1B parameters, leaving uncertainty about generalization to larger scales\\n2. Limited exploration of MoE variants and routing strategies beyond standard top-2 token choice\\n3. Initial framing of results in terms of \\\"memorization\\\" rather than more precise concepts like generalization gaps and recall performance\\n4. Some synthetic reasoning tasks (like random graphs) may not fully capture real-world reasoning requirements\\n5. Could benefit from more analysis of tasks combining both recall and reasoning aspects\\n\\n(d) Reasons for Accept:\\n1. Strong theoretical foundation complemented by comprehensive empirical validation\\n2. Clear and important insights about architectural tradeoffs that can guide practical model design decisions\\n3. Well-executed study with thorough experimental design and clear presentation\\n4. Authors demonstrated willingness to clarify and refine terminology (e.g., shifting from \\\"memorization\\\" to \\\"recall\\\")\\n5. 
Additional MMLU experiments during rebuttal further validated main claims\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised several key concerns. HxYz questioned the scalability of findings to larger models and suggested exploring Chain of Thought techniques. 7TRg focused on terminology issues around \\\"memorization\\\" and requested MMLU evaluations. A95z asked for clarification on perplexity-accuracy relationships and suggested exploring mixed knowledge-reasoning tasks. ToS5 inquired about routing mechanisms and depth choices.\\nThe authors responded constructively to these concerns. They acknowledged scale limitations while emphasizing reproducible research, conducted new MMLU experiments that supported their main findings, and agreed to revise terminology to be more precise about memorization vs recall. They also provided clear rationales for their methodological choices regarding routing mechanisms and model depth.\\nIn weighing the final decision, the authors' thorough responses and additional experiments effectively addressed the main concerns. While some limitations remain regarding scale and routing variations, they don't detract from the paper's core contributions. The authors' willingness to refine terminology and provide additional empirical validation, combined with their strong theoretical foundation and initial experimental work, supports the decision to accept the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
9XEBFywIW7
Spread them Apart: Towards Robust Watermarking of Generated Content
[ "Mikhail Pautov", "Danil Ivanov", "Andrey V. Galichin", "Oleg Rogov", "Ivan Oseledets" ]
Generative models that can produce realistic images have improved significantly in recent years. The quality of the generated content has increased drastically, so sometimes it is very difficult to distinguish between the real images and the generated ones. Such an improvement comes at a price of ethical concerns about the usage of the generative models: the users of generative models can improperly claim ownership of the generated content protected by a license. In this paper, we propose an approach to embed watermarks into the generated content to allow future detection of the generated content and identification of the user who generated it. The watermark is embedded during the inference of the model, so the proposed approach does not require the retraining of the latter. We prove that watermarks embedded are guaranteed to be robust against additive perturbations of a bounded magnitude. We apply our method to watermark diffusion models and show that it matches state-of-the-art watermarking schemes in terms of robustness to different types of synthetic watermark removal attacks.
[ "data watermarking", "ai safety", "generative content" ]
https://openreview.net/pdf?id=9XEBFywIW7
https://openreview.net/forum?id=9XEBFywIW7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yg3aWXPA61", "x72utu5YpQ", "syNsuZ6WA2", "og6ahMNiCe", "kVMzUgw0jA", "dSZKCIpXyE", "Y724yaX4Ks", "TpgGodnxaw", "R2ucXsdwuG", "PDZBHkLCzb", "PBJnCIzlML", "P2Q3okX2vZ", "KQUBz95K74", "HPAOWi4cqf", "GG7QNDl1tL", "E9LsEtLxeU", "7OBYoIUylI", "5rlk4JKITa", "4ykjT8ICKo" ], "note_type": [ "official_review", "official_comment", "official_review", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730194339340, 1732487001873, 1730591581063, 1732713764907, 1730404602809, 1732444815360, 1732445307900, 1732562501282, 1730695182806, 1732713751321, 1732619717348, 1732713181152, 1732444963879, 1730608074612, 1732616146432, 1732447422344, 1732449083980, 1732661540793, 1732632193386 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission401/Reviewer_C9YY" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_uwy1" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_uwy1" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_yiE4" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_uXyP" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_MwGz" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_MwGz" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_uXyP" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Authors" ], [ "ICLR.cc/2025/Conference/Submission401/Reviewer_yiE4" ], 
[ "ICLR.cc/2025/Conference/Submission401/Reviewer_C9YY" ] ], "structured_content_str": [ "{\"summary\": \"A framework called \\u201cSpread them Apart\\u201d for watermarking generated content is presented, specifically targeting images produced by diffusion models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"A method of embedding watermarks during the model inference phase is proposed to simultaneously detect and attribute generated images.\", \"Proven robustness against bounded additive perturbations.\"], \"weaknesses\": [\"The expression and logic of the paper can be further improved to enhance readability. For instance, a clear explanation of \\\"$\\\\mathcal{L_{qual}}$\\\" is lacking. Additionally, there are a few typos, such as the use of \\\"the\\\" in line 115.\", \"To enhance the understanding of the paper's innovation, the authors should compare their approach with using pixel differencing in digital watermarking [1]. By highlighting the differences between the two methods, the paper can further illustrate the unique contribution and novelty of the proposed approach.\", \"While resilience against certain \\\"watermark removal\\\" attacks is demonstrated, this paper lacks consideration of the effectiveness of the proposed watermarking method against ambiguity attacks.\", \"The evaluation is not very convincing. It would be best to supplement the experiments with the latest methods, such as WOUAF [2].\", \"[1] A Robust and Computationally Efficient Digital Watermarking Technique Using Inter Block Pixel Differencing.\", \"[2] WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models.\"], \"questions\": [\"The method presented in this paper cannot provide robustness against transformation attacks such as cropping and rotation. The authors claim that embedding watermarks in a local area can overcome this problem. 
How is this achieved?\", \"Can the proposed method resist watermark overwriting attacks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply!\\n\\nI will keep my score because the results in Table 1 and Table 2 indicate that the robustness is worse than others in most of image corruption and the time cost is also high than other methods.\"}", "{\"summary\": \"The paper \\\"Spread them Apart\\\" introduces a robust watermarking technique for images generated by diffusion models, embedding watermarks directly during the generation process to prevent unauthorized removal.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The watermarking method enables accurate identification of the user who sent the query, ensuring traceability.\\n2. The watermark minimally alters the image, maintaining high visual quality while embedding robust identifiers.\", \"weaknesses\": \"1. The robustness evaluation lacks common post-generation attacks like rotation, resizing, grayscale conversion, and cropping, and examples of corrupted images are not provided.\\n2. The watermark embedding process increases the image generation time. The paper should provide the experiments of generation time cost.\\n3. Experiments assessing the impact of watermarking on image quality are limited. More quantitive metrics like PSNR should be used\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces an image watermark method named spread them apart. 
When generating images from latent $z$, the model measures the generated image to see if it satisfies the constraint $L_{wm} < \\\\epsilon$. If the watermark doesn\\u2019t successfully embed into the image, the latent $z$ is optimized; otherwise, the image is given to the user. To decide on the threshold, the method uses the double-tail detector as described in [1]. To embed the watermark, the method generates a unique secret $s(u_i)$ and a watermark $w(u_i)$, then embeds it at the pixel level. If the embedding is not successful, the method further fine-tunes the latent vector $z$, with a two-component loss. The paper also provides a proof of robustness against additive attacks. The experiment shows that the method demonstrates strong robustness against various watermarking removal attacks, while still maintaining high image quality.\\n\\n\\n[1] Zhengyuan Jiang, Jinghuai Zhang, and Neil Zhenqiang Gong. 2023. Evading Watermark based Detection of AI-Generated Content. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23). Association for Computing Machinery, New York, NY, USA, 1168\\u20131181. https://doi.org/10.1145/3576915.3623189\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Compared with stable signature [1] and SSL [2], the method demonstrates strong robustness against attacks like brightness, contrast shift, gamma correction, sharpening, hue, saturation adjustment, noise attack, JPEG and PGD [3].\\n2. The paper is well-written and has a clear structure.\\n\\n[1] Fernandez, Pierre, et al. \\\"The stable signature: Rooting watermarks in latent diffusion models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[2] Fernandez, Pierre, et al. \\\"Watermarking images in self-supervised latent spaces.\\\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 
IEEE, 2022.\\n\\n[3] M\\u0105dry, Aleksander, et al. \\\"Towards deep learning models resistant to adversarial attacks.\\\" stat 1050.9 (2017).\", \"weaknesses\": \"1. The paper does not include an ablation study, which is a large omission. For example, you could present the impact of different watermark lengths, the effect of the epsilon parameter on robustness and visibility, and the effect of the loss weights ($\\\\lambda_{wm}$ and $\\\\lambda_{qual}$) on robustness and visibility.\\n2. The method embeds the watermark in the pixel space. The latent vector $z$ is optimized during inference to constrain the loss within the acceptable region. It seems all models that have a decoder structure can support this method, which is pretty general, like VAE or GAN. Could the authors explain why only diffusion models are included in the experiments? Or could the authors suggest the possible disadvantages of other types of models? Or the authors could point out the properties of diffusion models that make them well suited to the proposed method.\\n3. The experiment only includes stable signature and SSL as baselines, which is not a comprehensive comparison. Can the authors include more baselines like WOUAF [1]?\\n4. The optimization process introduces more computation during inference. The authors are encouraged to report the extra time used in the optimization. For example, the authors can present the average inference time with and without watermarking, or how the optimization time scales with watermark length. You could also present your computation overhead compared to other watermarking methods (baselines).\\n\\n[1] Kim, C., Min, K., Patel, M., Cheng, S., & Yang, Y. (2024). Wouaf: Weight modulation for user attribution and fingerprinting in text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8974-8983).\", \"questions\": \"1. The method embeds and extracts the watermark in a pixel-wise manner. 
A single pixel can be easily altered by attackers. Could you consider applying your method to block-wise areas, similar to the approach in [1]?\\n\\n[1] Yang, Z., Zeng, K., Chen, K., Fang, H., Zhang, W., & Yu, N. (2024). Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12162-12171).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Answer to Reviewer uXyP\", \"comment\": \"## Thank you for your valuable feedback. In our response below, we address the weaknesses you indicate, answer your questions and provide results of additional experiments, where requested.\\n\\n\\n### W1 and W2.\\nWe agree that our method does not provide robustness to rotations, translations, and cropping as is. However, slight modifications to our approach can be made to provide robustness to rotations and translations. Namely, one can embed watermarks in the invariants in the Fourier space (see Theorem 6 for rotations and Theorem 3 for translations from [1]).\\n\\nWhen the watermarks are embedded, due to the invariance from above, they are guaranteed to be retained when the corresponding transformation is applied. We present the results in the updated version of the appendix and report them here, in Table 1. Note that the invariance to rotations in general cannot be guaranteed in practice due to interpolation errors.\\n\\n\\n\\nTable 1. TPRs under geometric transformations, JPEG, cropping and erasing, detection problem. We fix FPR = 10^-6 \\n| Method | Rot(10) | Translation (30 X 30) | JPEG(50) | Crop(400 X 400) | Erase(50 X 50) |\\n| :---------------- | :------: | :----: | :--------------------: | :-------: | :------: |\\n| Ours (Fourier) | 0.85 | 1.00 | 0.70 | 0.80 | 0.90 |\\n| Stable sign. 
| 0.97 | - | 0.88 | 0.98 | - |\\n| SSL | - | - | 0.97 | 1.00 | - |\\n| AquaLora | 1.00 | - | 0.99 | 0.91 | - |\\n| WOUAF | 0.99 | - | 0.97 | 0.98 | 0.99 |\\n\\n\\n\\nWe evaluate our method against cropping and image erasing when the watermark is embedded in the invariant to translations (Theorem 3 from [1]). \\n\\n\\n### W3.\", \"regarding_the_scalability_of_the_approach\": \"we have evaluated the time cost required to extract the watermark and compare it to the public keys of the users in the database. Please see the results in Table 2 below.\", \"table_2\": \"Average time in seconds required to extract a watermark, depending on the number m of users.\\n\\n|m | Time, seconds|\\n|:-----:|:-----:|\\n|1| 7.5x10^-5 |\\n|10 | 7.4x10^-4|\\n|1000| 7.2x10^-2 |\\n|10000 | 6.9x10^-1|\\n|1000000 | 71.2 |\\n\\nIt is noteworthy that our method allows the watermark to be extracted in less than 1 second when the number of users is 100000.\\n\\n\\n\\n### Q1.\\nFollowing your suggestion, we have added examples of corrupted images in the manuscript (please see the appendix, Fig. 6).\\n\\n### Q2.\\n\\nFollowing your suggestion, we have added an explanation for the transform \\\"Contrast -\\\" in the manuscript (specifically, we indicated how it differs from \\\"Contrast +\\\"). \\n\\n\\n\\n## References:\\n[1] Lin, Feng, and Robert D. Brandt. \\\"Towards absolute invariants of images under translation, rotation, and dilation.\\\" Pattern Recognition Letters 14.5 (1993): 369-379.\"}", "{\"title\": \"Answer to Reviewer uwy1\", \"comment\": \"## Thank you for your valuable feedback. In our response below, we address the weaknesses you indicate, answer your questions and provide results of additional experiments, where requested.\\n\\n\\n### W1.\\nWe agree that our method does not provide robustness to post-generation attacks as is. However, slight modifications to our approach can be made to provide robustness to rotations and translations. 
Namely, one can embed watermarks in the invariants in the Fourier space (see Theorem 6 for rotations and Theorem 3 for translations from [1]).\\u00a0\\n\\nWhen the watermarks are embedded, due to the invariance above, they are guaranteed to be retained when the corresponding transformation is applied. We present the results in the updated version of the appendix and report them here, in Table 1. Note that the invariance to rotations in general cannot be guaranteed in practice due to interpolation errors.\\n\\n\\n\\nTable 1. TPRs under geometric transformations, JPEG, cropping and erasing, detection problem. We fix FPR = 10^-6 \\n| Method | Rot(10) | Translation (30 X 30) | JPEG(50) | Crop(400 X 400) | Erase(50 X 50) |\\n| :---------------- | :------: | ----: |:--------------------: | :-------: | :------: |\\n| Ours (Fourier) | 0.85 | 1.00 | 0.70 | 0.80 | 0.90 |\\n| Stable sign. | 0.97 | - | 0.88 | 0.98 | - |\\n| SSL | - | - | 0.97 | 1.00 | - |\\n| AquaLora | 1.00 | - | 0.99 | 0.91 | - |\\n| WOUAF | 0.99 | - | 0.97 | 0.98 | 0.99 |\\n\\n\\n\\nWe evaluate our method against cropping and image erasing when the watermark is embedded in the invariant to translations (Theorem 3 from [1]). \\n\\n\\n### W2.\\nPlease see the results in Table 2 below.\", \"table_2\": \"Average time in seconds required to embed a watermark.\\n\\n|Method | Time, seconds|\\n|:-----:|:-----:|\\n|Ours| 36.7 |\\n|Stable sign | ~ 60.0|\\n|SSL | - |\\n|AquaLora | ~ 0.0|\\n|WOUAF | 1.1 |\\n\\n\\n## W3. \\nPlease note that quality metrics like PSNR were reported in the original manuscript (Table 1). In the updated version, we add the comparison with other baselines in terms of the computation cost. \\n\\n\\n## References:\\n[1] Lin, Feng, and Robert D. Brandt. 
\\\"Towards absolute invariants of images under translation, rotation, and dilation.\\\" Pattern Recognition Letters 14.5 (1993): 369-379.\"}", "{\"comment\": \"I appreciate the author for their effort on additional experiment results. However, I do have some additional concerns. I understand that embedding in the Fourier space can indeed provide some robustness against geometric distortion, however, in my experience, there may come a price on image quality. Please correct me if I'm wrong but I do not see any specific mechanism in the proposed method that can neglect the image quality difference between embedding in pixel domain or frequency domain. So there exists a potential trade-off between robustness in geometric distortion and image quality when frequency domain embedding is implemented. Since this trade-off is still unclear to me, the robustness results in geometric distortion are not very informative at the paper's current stage. So I will keep my current score unless further clarification can be made.\"}", "{\"summary\": \"This paper proposed to watermark the images generated by latent diffusion models, for detection and attribution, where the watermark is embedded by optimizing the denoised latent representation passed to the VAE decoder of the latent diffusion model. The evaluation of the proposed watermarking method focuses on the robustness against different watermark removal attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed watermarking method is shown to be robust against various types of removal attacks, by conducting comprehensive experiments.\\n2. The bound of the robustness against additive watermark removal attacks is theoretically analyzed.\\n3. The paper is well-written overall.\", \"weaknesses\": \"1. The paper does not provide the evaluation of robustness against cropping, rotation, and translation attacks.\\n2. 
The proposed method needs 700 iterations to optimize the latent representation before generating an image, slowing down the generation speed.\\n3. More relevant watermarking methods should be included for comparison, for example, AquaLoRA [1].\\n\\n[1] Feng, Weitao, et al. \\\"AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA.\\\" arXiv preprint arXiv:2405.11135 (2024).\", \"questions\": \"1. Compared with the original inference process, what is the additional time cost introduced by the extra optimization process of the denoised latent vector for watermarking?\\n2. The paper claims that \\u201cThe watermarking method does not provide robustness against cropping, rotation, and translation attacks. However, this limitation can be overcome by inserting watermarks in the localized areas or the frequency domain of the image\\u201d. Why inserting watermarks in the localized areas or the frequency domain of the image can improve the robustness against cropping, rotation, and translation attacks?\\n3. Is the proposed method robust to image editing attacks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"official comment from the authors\", \"comment\": \"We thank you for the suggestions also provided by reviewer MwGz. Regarding the answer to W2 (which only motivates to stick to diffusion models in the assessment of the proposed approach): it is pretty confusing how any conclusion about the knowledge of DM can be made from it.\"}", "{\"comment\": \"Thanks for addressing my concerns and I appreciate your effort in conducting additional experiments. I want to keep my rating. Here are two more suggestions: 1) You have mentioned that the proposed method embeds 100 bits long watermarks, while the competitors (Stable Sign., AquaLoRA, WOUAF) embed much shorter (at most 48 bits) watermarks in your experiments. 
I think this information about the embedding capacity (which is an advantage of your method) should also appear in the table. 2) Your method does not need additional training of generative model, while the competitors (Stable Sign., AquaLoRA, WOUAF) do, which I think is another important piece of information that should appear in your table.\"}", "{\"title\": \"Official comment from authors\", \"comment\": \"We thank you for the important suggestions, we will include them in the updated version of the manuscript.\"}", "{\"title\": \"Author response to Reviewer MwGz\", \"comment\": \"### Thank you for your valuable feedback. In our response below, we address the weaknesses you indicate and answer your questions.\\n\\n\\n## W1 and Q2.\\u00a0 \\n\\nIndeed, our method does not provide robustness to rotations, translations, and cropping. However, slight modifications to our approach can be made to provide robustness to rotations and translations. Namely, one can embed watermarks in the invariants in the Fourier space (see Theorem 6 for rotations and Theorem 3 for translations from [1]).\\u00a0\\n\\nWhen the watermarks are embedded, due to invariance from above, they are guaranteed to retain when the corresponding transformation is applied. We present the results in the updated version of the appendix. Note that the invariance to rotations in general can not be guaranteed in practice due to interpolation errors.\\n\\n\\n\\nTable 1. TPRs under geometric transformations, JPEG, cropping and erasing, detection problem. We fix FPR = 10^-6 \\n| Method | Rot(10) | Translation (30 X 30) | JPEG(50) | Crop(400 X 400) | Erase(50 X 50) |\\n| :---------------- | :------: | ----: |:--------------------: | :-------: | :------: |\\n| Ours (Fourier) | 0.85 | 1.00 | 0.70 | 0.80 | 0.90 |\\n| Stable sign. 
| 0.97 | - | 0.88 | 0.98 | - |\\n| SSL | - | - | 0.97 | 1.00 | - |\\n| AquaLora | 1.00 | - | 0.99 | 0.91 | - |\\n| WOUAF | 0.99 | - | 0.97 | 0.98 | 0.99 |\\n\\n\\n\\nWe evaluate our method against cropping and image erasing when the watermark is embedded in the invariant to translations (Theorem 3 from [1]). \\n\\n\\n## W2\\u00a0 and Q1. \\nPlease find the comparison with the other approaches in terms of the time required to embed a watermark in Table 2 below.\", \"table_2\": \"Average time in seconds required to embed a watermark.\\n\\n|Method | Time, seconds|\\n|:-----:|:-----:|\\n|Ours| 36.7 |\\n|Stable sign | ~ 60.0|\\n|SSL | - |\\n|AquaLora | ~ 0.0|\\n|WOUAF | 1.1 |\\n\\n\\n## Q3.\\nAmong image editing attacks, we consider image erasing (please see the results in Table 1 here and in Table 9 in the appendix).\\n\\n\\n\\n\\n## References:\\n[1] Lin, Feng, and Robert D. Brandt. \\\"Towards absolute invariants of images under translation, rotation, and dilation.\\\" Pattern Recognition Letters 14.5 (1993): 369-379.\"}", "{\"summary\": \"This paper proposes to watermark the images generated by latent diffusion models, for detection and attribution, where the watermark is embedded by optimizing the denoised latent representation passed to the VAE decoder of the latent diffusion model. The evaluation of the proposed watermarking method focuses on the robustness against different watermark removal attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed watermarking method is shown to be robust against various types of removal attacks, by conducting comprehensive experiments.\\n2. The bound of the robustness against additive watermark removal attacks is theoretically analyzed.\\n3. The paper is well-written overall.\", \"weaknesses\": \"1. The paper does not provide the evaluation of robustness against cropping, rotation, and translation attacks.\\n2. 
Although the methodology itself is straightforward, it is proven to be effective and robust against simple image manipulation.\", \"weaknesses\": \"The approach presented in this paper is relatively straightforward, demonstrating theoretical robustness against perturbations such as brightness adjustment, contrast shifts, and additive noise. However, its limitations are also apparent. As the authors acknowledge, the method lacks resilience against geometric distortions, which alter the image's size and indexing\\u2014common forms of attacks that the paper does not address in detail.\\n\\nWhile the authors suggest that \\\"this limitation can be addressed by embedding watermarks in localized areas or the frequency domain of the image,\\\" this raises the question: given these potential solutions, why these methods are not implemented in the current study? It would be insightful to understand the trade-offs here.\\n\\nA further concern is the scalability of the proposed approach. The watermark retrieval process requires comparing each image against every user's secret key before generating a binary string, which appears impractical as the user base grows. This limitation suggests that, as the number of users increases, the approach may face significant challenges in maintaining efficiency and scalability.\", \"questions\": \"1. In this paper, there are some constraints set in place on the attacks (i.e. L infinite norm for additive noise, etc.) To evaluate the effectiveness of these constraints, it would be helpful to include visualizations of the attacked images in the appendix. Such visual aids could convincingly illustrate the impact of these constrained attacks on image quality and watermark robustness.\\n\\n2. For the \\\"contrast -\\\" result in Table 2, it is 0.998. This value is influenced by a negative sign introduced during processing. Given that the method employs a two-tailed detection approach, this result is indeed the optimal outcome in this context. 
However, a brief explanatory note could help clarify this for readers, potentially reducing confusion about the significance of the negative sign and its impact on the results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official comment by the Authors\", \"comment\": \"Indeed, embedding the watermark in the Fourier space can lead to the degradation of the image quality. However, in our approach, embedding is done by minimizing the loss function that controls both the watermarking process and the deviation in quality from an unwatermarked image; hence, we can maintain the image quality. Please see Fig. 7 in the updated appendix for the examples of images watermarked in the Fourier space (visually, they can be even better than those watermarked in the pixel space).\"}", "{\"title\": \"Answer to Reviewer C9YY\", \"comment\": \"### Thank you for your valuable feedback. In our response below, we address the weaknesses you indicate and answer your questions.\\n\\n\\n## W1.\\n\\nThank you for the careful reading. We have fixed several typos in the updated manuscript and clarified our notations, where needed.\\n\\n## W2.\\nThe paper you mention is very relevant to our work. However, the method described is more about computing the fingerprint of the particular image (namely, when the image is ready, one can compute the fingerprint, making it infeasible, for example, for the attribution problem); in contrast, in our method, an optimization is performed to obtain an image with the predefined watermark. 
\\n\\n## W3 and Q2.\\n\\nIn the manuscript, we consider two types of ambiguity attacks: (i) unintentional ones, occurring because of the large number of users in the database, and (ii) adversarial ones, where an attacker aims to erase an embedded watermark.\\n\\nWe report the robustness to attack of the first type by depicting TPR in the attribution problem (please see Table below).\", \"table\": \"TPR given different values of m of number of users in the database.\\n| m | TPR@FPR=10^-6 | \\n| :---------------- | :------: \\n| 10 | 1.00 |\\n| 1000 | 1.00 |\\n| 10000 | 1.00 |\\n| 1000000 | 1.00 |\\n| 10000000 | 1.00 |\\n| 100000000 | 1.00 |\\n\\nThe robustness to the second type of attacks is demonstrated by the evaluation of TPR against PGD attack, aimed at erasing the watermark. Please see one in the Tables (3-4) in the manuscript. \\n\\nPlease note that to perform a watermark overwriting attack, an attacker has to have a white-box access to the generation and watermarking pipeline; in our work, we do not consider such access. \\n\\n## W4.\\n\\nFollowing your suggestion, we have added more relevant work for the comparison (please see tables 2-3-4 in the main text and table 9 in the appendix).\\n\\n## Q1.\\n\\nIt is noteworthy that slight modifications to our approach can be made to provide robustness to rotations and translations. Namely, one can embed watermarks in the invariants in the Fourier space (see Theorem 6 for rotations and Theorem 3 for translations from [1]).\\u00a0\\n\\nWhen the watermarks are embedded, due to invariance from above, they are guaranteed to retain when the corresponding transformation is applied. We present the results in the updated version of the appendix. And report them here, in Table 1. Note that the invariance to rotations in general can not be guaranteed in practice due to interpolation errors.\\n\\n\\n\\nTable 1. TPRs under geometric transformations, JPEG, cropping and erasing, detection problem. 
We fix FPR = 10^-6 \\n| Method | Rot(10) | Translation (30 X 30) | JPEG(50) | Crop(400 X 400) | Erase(50 X 50) |\\n| :---------------- | :------: | ----: |:--------------------: | :-------: | :------: |\\n| Ours (Fourier) | 0.85 | 1.00 | 0.70 | 0.80 | 0.90 |\\n| Stable sign. | 0.97 | - | 0.88 | 0.98 | - |\\n| SSL | - | - | 0.97 | 1.00 | - |\\n| AquaLora | 1.00 | - | 0.99 | 0.91 | - |\\n| WOUAF | 0.99 | - |0.97 | 0.98 | 0.99 |\\n\\n\\n\\nWe evaluate our method against cropping and image erasing when the watermark is embedded in the invariant to translations (Theorem 3 from [1]). \\n\\n\\n\\n## References:\\n[1] Lin, Feng, and Robert D. Brandt. \\\"Towards absolute invariants of images under translation, rotation, and dilation.\\\" Pattern Recognition Letters 14.5 (1993): 369-379.\"}", "{\"title\": \"Answer to Reviewer yiE4\", \"comment\": \"### Thank you for your valuable feedback. In our response below, we address the weaknesses you indicate and answer your questions.\\n\\n\\n## W1.\\n\\nWe have added the ablation study on the hyperparameters you proposed, namely, watermark length, epsilon, and loss weights. In terms of the trade-off between attack resilience and perceptual quality of the generated images (Table 1-2 here), we see that the default parameters are close to the optimal ones. We report the results of the ablation study in the appendix, Section A.1.3.\\n\\nTable 1. Ablation study: the effect of the parameter values on the robustness of the watermark. We report an average bit-wise error and study the robustness to JPEG, Hue, Saturation, Sharpness and Gaussian noise, since our approach provide robustness to brightness, contrast and gamma shifts by design. 
In the \\\"Parameter\\\" column, we report the varying parameter; the other parameters are set to default values (n=50, eps=0.2, `\\\\lambda_wm`=0.9, `\\\\lambda_qual`=150)\\n\\n| Parameter | Value | JPEG | Hue | Saturation | Sharpness | Noise |\\n| :---------------- | :------: | :----: |:--------------: | :-------: | :------: | :------:|\\n|n|50|0.123|0.013| 0.095|0.002|0.049|\\n|n|100|0.143|0.011|0.104|0.001|0.056|\\n|n|150| 0.157|0.013|0.112|0.001|0.063|\\n|n|250|0.159|0.015|0.120|0.001|0.069|\\n--\\n|eps| 0.0|0.313|0.109|0.206|0.016|0.202|\\n|eps|0.05|0.261|0.055|0.169|0.005|0.159|\\n|eps|0.2|0.143|0.011|0.104|0.001|0.056|\\n|eps|0.5|0.054|0.001|0.041|0.000|0.003|\\n--\\n|`lambda_wm`|0.5|0.150|0.015|0.108|0.002|0.060|\\n|`lambda_wm`|0.9|0.143|0.011|0.104|0.001|0.056|\\n|`lambda_wm`|2.0|0.136|0.012|0.103|0.001|0.056|\\n--\\n|`lambda_qual`|10.0|0.059|0.014|0.071|0.004|0.035|\\n|`lambda_qual`| 50.0|0.088|0.008|0.082|0.001|0.040|\\n|`lambda_qual`| 150.0|0.143|0.011|0.104|0.001|0.056|\\n|`lambda_qual`|200.0|0.160|0.013|0.109|0.001|0.060|\\n-----\\n\\nTable 2. Ablation study: the effect of the parameter values on the image quality. 
We report the values of the SSIM, PSNR and LPIPS image quality metrics.\\n\\n| Parameter | Value | SSIM | PSNR | LPIPS|\\n| ------------- |:-------------:|:-------------:|:-------------:|:-------------:|\\n| n |50|0.897|31.104|0.006|\\n| n |100|0.856|29.381|0.007|\\n| n |150|0.827|28.309|0.009|\\n| n |250|0.777|26.726|0.013|\\n|eps|0.0|0.878|30.142|0.006|\\n|eps|0.05|0.873|29.937|0.007|\\n|eps|0.2|0.856|29.381|0.007|\\n|eps|0.5|0.820|28.378|0.010|\\n| `lambda_wm` |0.5|0.869|29.830|0.006|\\n| `lambda_wm` |0.9|0.856|29.381|0.007|\\n| `lambda_wm` |2.0|0.842|28.912|0.008|\\n| `lambda_qual` |10.0|0.752|26.200|0.057|\\n| `lambda_qual` |50.0|0.806|27.601|0.019|\\n| `lambda_qual` |150.0|0.856|29.381|0.007|\\n| `lambda_qual` |200.0|0.869|29.918|0.005|\\n\\n\\n\\n## W2.\\n\\nIndeed, our approach is not limited to diffusion models and, in principle, can be applied to any decoder-based model. We have chosen to stick to DMs, since the quality of images generated by diffusion models significantly surpasses that of VAEs and GANs. Therefore, there is a much greater interest in their protection and safe deployment. \\n\\n## W3. \\nFollowing your suggestion, we have included more baselines to compare our approach against (please see Tables 2-3-4 in the main text and Table 9 in the Appendix).\\n\\n## W4.\\nPlease find the comparison with the other approaches in terms of the time required to embed a watermark in Table 3 below.\", \"table_3\": \"Average time in seconds required to embed a watermark.\\n\\n|Method | Time, seconds|\\n|:-----:|:-----:|\\n|Ours| 36.7 |\\n|Stable sign | ~ 60.0|\\n|SSL | - |\\n|AquaLora | ~ 0.0|\\n|WOUAF | 1.1 |\\n\\n\\n## Q1. \\nThank you for the idea on a possible enhancement of our method. The paper that you have shared with us inserts a watermark during the sampling process; therefore, we cannot benefit from this idea straightforwardly. 
\nRegarding the block-wise watermark insertion in our setting per se, the perceptual image quality will likely degrade, because more pixels will be involved in the watermark insertion, leaving artifacts in an image. In such a case, more restrictions on pixel values should be added, and this needs further research and consideration.\\n\\n\\n\\n\\n## References:\\n[1] Lin, Feng, and Robert D. Brandt. \\\"Towards absolute invariants of images under translation, rotation, and dilation.\\\" Pattern Recognition Letters 14.5 (1993): 369-379.\"}", "{\"comment\": \"I appreciate the authors for their efforts in this work. After reconsideration, I can\\u2019t raise my score. Table 1 in the paper indicates that the image consistency of the proposed method is worse than others in most metrics. Table 5 indicates that the time required for the watermark embedding is high. Also, the authors\\u2019 reply to W2 indicates that their understanding of diffusion models is limited. Owing to these factors, I will keep my score.\\n\\nHere are some suggestions for the authors. Your method can embed high-capacity watermarks. It would be better to demonstrate them in the experiment. Also, your method is training-free, which should also be emphasized. I guess there is a trade-off between the image quality and the watermark capacity, so make sure you achieve the balance. And finally, please use differently colored text for the revision, which makes it clearer for reviewers to know the difference you made with respect to the previous version. \\n\\nAlso, I can\\u2019t understand why in many tables you use a horizontal line where there\\u2019s supposed to be a number, for example the SSIM and PSNR for WOUAF, and the Hue, Saturation for AquaLora. Can the authors explain?\"}", "{\"title\": \"Official Comment by Reviewer C9YY\", \"comment\": \"Thanks for your response. It seems that the robustness of the proposed method is not significantly better than that of the current method. Therefore, I maintain my score.\"}" ] }
9Wghi9fKFA
Multi-Atlas Brain Network Classification through Consistency Distillation and Complementary Information Fusion
[ "Jiaxing Xu", "Mengcheng Lan", "Xia Dong", "Kai He", "Wei Zhang", "Qingtian Bian", "Yiping Ke" ]
In the realm of neuroscience, identifying distinctive patterns associated with neurological disorders via brain networks is crucial. Resting-state functional magnetic resonance imaging (fMRI) serves as a primary tool for mapping these networks by correlating blood-oxygen-level-dependent (BOLD) signals across different brain regions, defined as regions of interest (ROIs). Constructing these brain networks involves using atlases to parcellate the brain into ROIs based on various hypotheses of brain division. However, there is no standard atlas for brain network classification, leading to limitations in detecting abnormalities in disorders. Some recent methods have proposed utilizing multiple atlases, but they neglect consistency across atlases and lack ROI-level information exchange. To tackle these limitations, we propose an Atlas-Integrated Distillation and Fusion network (AIDFusion) to improve brain network classification using fMRI data. AIDFusion addresses the challenge of utilizing multiple atlases by employing a disentangled Transformer to filter out inconsistent atlas-specific information and distill distinguishable connections across atlases. It also incorporates subject- and population-level consistency constraints to enhance cross-atlas consistency. Additionally, AIDFusion employs an inter-atlas message-passing mechanism to fuse complementary information across brain regions. Experimental results on four datasets of different diseases demonstrate the effectiveness and efficiency of AIDFusion compared to state-of-the-art methods. A case study illustrates that AIDFusion extracts patterns that are both interpretable and consistent with established neuroscience findings.
[ "Brain Network", "fMRI Biomarker", "Graph Neural Network", "Graph Transformer", "Neurological Disorder" ]
Reject
https://openreview.net/pdf?id=9Wghi9fKFA
https://openreview.net/forum?id=9Wghi9fKFA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vkTbL9EsgQ", "ubRZDiDKrY", "uA16OQrGSb", "sBkGe6mvQ6", "pnbIkVBVER", "jUrVToAnTw", "i87aIJWAj0", "gsFM0PYlqK", "djLW99i7ty", "daXJEGqY1o", "VpbaEO26l5", "Uj62zLU2Mx", "TF8BJTdT7Q", "QOSNb71P00", "Lzq8ddhj8S", "L1TRcan9Ky", "JH0uQfuR4b", "DTmpI4RmvH", "B5KFaOlq6g", "2hVIjXNt8c" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730189972217, 1732186332597, 1732186198515, 1732186239241, 1731005688068, 1731021984828, 1733221982058, 1732608778826, 1733222086126, 1734783122687, 1732186517516, 1732186300835, 1732547811488, 1737523656700, 1732609361628, 1732186095070, 1730698527651, 1732186482567, 1733121477957, 1732482985698 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_9YF5" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_CMqs" ], [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_VhSn" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Area_Chair_Stxz" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_VhSn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_3zUh" ], [ 
"ICLR.cc/2025/Conference/Submission4706/Authors" ], [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_CMqs" ], [ "ICLR.cc/2025/Conference/Submission4706/Reviewer_CMqs" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a method called AIDFusion, designed to improve brain network classification using rs-fMRI data.\\n\\nThe authors note that existing methods often rely on single atlases for classification, while approaches that utilize multiple atlases tend to overlook cross-atlas consistency and fail to facilitate information exchange at the ROI level. To address these issues, the authors first introduce a disentangled Transformer to learn atlas-level embeddings. They then propose an inter-atlas message-passing mechanism to fuse complementary information across brain regions. Additionally, subject-level and population-level consistency losses are employed to enhance cross-atlas coherence.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Originality: This paper demonstrates some novelty. For instance, the concept of inter-atlas message-passing is interesting.\", \"Quality: The methodology is validated across four datasets representing different diseases, effectively showcasing its effectiveness and efficiency.\", \"Clarity: The paper is well-structured and easy to follow, enhancing comprehension for readers.\", \"Significance: This research offers new insights into brain network analysis within the field of neuroscience.\"], \"weaknesses\": \"**1. There are too many hyperparameters to tune, which undermines the credibility of the results.**\\n\\nFor example, the training loss comprises five components, with four parameters requiring tuning. Additionally, hyperparameters from other modules of the method, such as the k value in KNN (lines 258-259), the keeping ratio (lines 278-279), and the temperature parameter (lines 302-303), also need to be set appropriately.\\n\\n**2. 
The presence of numerous networks to learn complicates the training process, potentially leading to instability.**\\n\\nThis includes the transformers in lines 216-240 and three GCNs in lines 263-267 and 280-285.\\n\\n**3. The motivations for some modules are unconvincing.**\\n\\nFor example, in lines 216-221 and 242-245, the authors mention introducing incompatible nodes and using orthogonal loss to filter out inconsistent atlas-specific information, citing the [CLS] token in NLP as motivation. However, the relationship is not adequately established, and the effect of these incompatible nodes is not clearly illustrated or justified.\\n\\n**4. The comparisons are insufficient.**\\n\\nGiven that the atlas embedding learning is a transformer-based method, comparisons with additional transformer-based methods, such as [1], should be included. Additionally, other multi-atlas methods, like [2], should be considered. the manuscript may lack ablation studies that directly concatenate embeddings from single-atlas methods across different atlases similar in [2][3].\\n\\n*[1] Kan, X., Dai, W., Cui, H., Zhang, Z., Guo, Y., & Yang, C. (2022). Brain network transformer.\\u00a0Advances in Neural Information Processing Systems,\\u00a035, 25586-25599.*\\n\\n*[2] Wang, W., Xiao, L., Qu, G., Calhoun, V. D., Wang, Y. P., & Sun, X. (2024). Multiview hyperedge-aware hypergraph embedding learning for multisite, multiatlas fMRI based functional connectivity network analysis.\\u00a0Medical Image Analysis,\\u00a094, 103144.*\\n\\n*[3] Zhao, B. W., You, Z. H., Wong, L., Zhang, P., Li, H. Y., & Wang, L. (2021). MGRL: Predicting Drug-Disease Associations Based on Multi-Graph Representation Learning.\\u00a0Frontiers in genetics,\\u00a012, 657182. 
https://doi.org/10.3389/fgene.2021.657182*\", \"questions\": \"Given the small sample sizes of rs-fMRI datasets, as stated in lines 325-330, how did you overcome the risk of overfitting during the training process and determine hyperparameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 9YF5\", \"comment\": \"**[W1. Too many hyperparameters to tune.]**\\n\\nThank you for pointing this out. We agree that our model includes a range of hyperparameters, but we want to clarify that not all of them require meticulous tuning. Here are the details:\\n\\n- The keeping ratio (sparsify threshold, related to adjacency matrix construction) is applied for GNN-based models and is fixed at 20% based on the dataset\\u2019s recommended setting in [1].\\n- The trade-off hyperparameters $\\\\lambda_4$ (orthogonal loss) and the temperature parameter $\\\\tau$ (subject-level consistency loss) are quite stable across datasets. We found that setting $\\\\lambda_4 = 1.0$ and $\\\\tau = 0.75$ works consistently well, allowing us to use these default values without further tuning when adapting to new datasets.\\n- The key hyperparameters that require tuning are $\\\\lambda_1$, $\\\\lambda_2$, $\\\\lambda_3$ (for the different loss components), and the $k$ value in kNN. We performed a grid search for these parameters.\\n\\n**[W2. Numerous networks to learn complicate the training process, potentially leading to instability.]**\\n\\nWe appreciate this feedback. While AIDFusion involves multiple components, it actually has fewer model parameters than several state-of-the-art multi-atlas methods, as Table 5 in our manuscript shows. 
Additionally, our experiments demonstrate that AIDFusion converges significantly faster, especially on large datasets, requiring fewer epochs compared to existing methods, as shown in Table 5. Besides, regarding model stability, we can observe that the standard deviation of AIDFusion is at the low end among all models (as shown in Table 2). Even when compared with LR and SVM (which have far fewer model parameters), the std of AIDFusion is still much lower. This demonstrates the stability of our proposed method.\\n\\n**[Q1. How to overcome the risk of overfitting.]** Overfitting is indeed a common challenge in brain network analysis due to the limited sample size. To prevent our model from overfitting, we use an early stopping criterion, i.e., we halve the learning rate when there is no further improvement in the validation loss for 25 epochs, and stop the training once the learning rate is smaller than the minimum rate we set. We also include dropout layers and L2 regularization in our model to prevent overfitting. This description is included in Appendix F.\\n\\n**[W3. Justification of the effect of incompatible nodes and orthogonal loss.]** Please refer to the general response.\\n\\n**[W4. The comparisons are insufficient.]** Please refer to the general response.\\n\\n[1] Data-driven network neuroscience: On data collection and benchmark. NeurIPS 2023\"}", "{\"title\": \"Response to Reviewer CMqs [1/2]\", \"comment\": \"**[W1.1, Q1. Transformer trained on small datasets without any self-supervised pretraining.]**\\n\\nWe appreciate your comment. As you noted, sample sizes for brain network analysis are indeed limited. We used the largest publicly available datasets for this domain. In fact, previous multi-atlas works such as MGRL (184 subjects), LeeNet (470 subjects), and METAFormer (884 subjects) utilized even smaller datasets than ours. For single-atlas methods like BNT, experiments were conducted on ABIDE with 1009 subjects, still fewer than ours. 
We are also trying to collect and prepare more data, such as the aforementioned UKBiobank. However, it involves different acquisition protocols and extremely time-consuming preprocessing (even 6-7 hours per subject), so we cannot add further data at this stage. We will try to include more data in our future work.\\n\\nWe acknowledge the potential of self-supervised pretraining to address the challenges of limited training data. However, building a brain-specific pretraining model leveraging the unique characteristics of brain data is part of our ongoing research and lies outside the primary scope of this work. Our current focus is on **architecture design** for multi-atlas information fusion rather than pretraining strategies. Notably, our baseline METAFormer employs self-supervised pretraining but does not exhibit significant performance gains compared to our approach. This can be attributed to METAFormer\u2019s reliance on late fusion, which lacks intermediate interaction across atlases. The observed results underscore the importance of a tailored architecture, such as AIDFusion, to effectively **extract and integrate information** from multiple atlases. \\n\\n**[W1.2, Q2. Use processed connectomes instead of raw data?]** Thank you for the insightful comment. We chose to work with processed connectomes because they effectively capture functional co-activation patterns between brain regions. These functional connectivity metrics have been well established in neuroscience as reliable markers for neurological disorder classification [1]. Raw fMRI data often contain substantial noise, including motion artifacts and physiological fluctuations, which can obscure critical features. Besides, in the dataset paper we reference [2], the authors compare graph-based methods with time-series-based methods. Their results show that graph-based approaches, utilizing the connectivity matrix, consistently outperform time-series-based methods. 
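To make this pipeline concrete, here is a rough sketch of how such a processed connectome is typically built from ROI time series: Pearson correlation followed by sparsification with a keeping ratio (cf. the 20% setting mentioned in our response to Reviewer 9YF5). The function name and exact thresholding rule below are illustrative, not our exact preprocessing:

```python
import numpy as np

def build_connectome(ts, keep_ratio=0.2):
    """Build a functional connectivity graph from ROI BOLD time series.

    ts: (n_timepoints, n_roi) array. Returns the full Pearson correlation
    matrix and a sparsified adjacency that keeps roughly the top `keep_ratio`
    strongest absolute off-diagonal connections (illustrative pipeline only).
    """
    fc = np.corrcoef(ts.T)                         # (n_roi, n_roi) correlations
    np.fill_diagonal(fc, 0.0)                      # drop self-connections
    upper = np.abs(fc[np.triu_indices_from(fc, k=1)])
    thresh = np.quantile(upper, 1.0 - keep_ratio)  # keeping-ratio cutoff
    adj = np.where(np.abs(fc) >= thresh, fc, 0.0)  # sparsified graph
    return fc, adj
```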
This justifies our decision to use processed connectomes, aligning with the existing body of literature.\\n\\n**[W2. Use of only two atlases in experiments.]** We selected the AAL and Schaefer atlases because they are among the most widely adopted in the field, representing anatomical and functional parcellation approaches, respectively. Detailed information on these two atlases is provided in Appendix A.2. Additionally, we have tested a three-atlas case in Appendix G, on which AIDFusion also outperforms all baselines. We observed that all models tested on three atlases achieved poorer performance than the ones tested on Schaefer100 and AAL116. Therefore, we chose to conduct experiments on these two atlases in the main paper. \\n\\n**[W3, Q3. Unclear significance of results.]** Thank you for pointing this out. We performed statistical tests to assess the significance of our results. Specifically, one-sided paired t-tests between AIDFusion and the best multi-atlas baselines yielded p-values of 0.0116, 0.0380, 0.0830, and 0.0886 respectively on the four datasets. This indicates that our model significantly outperforms existing methods on the two large datasets ABIDE and ADNI. However, on the small PPMI and Matai datasets, which have large standard deviations across the 10 folds, t-tests are highly sensitive to outliers in the 10-fold results; this decreases the computed t-statistic and lowers the chance of rejecting the null hypothesis. We have included these p-values in the caption of Table 2 in the revised manuscript for transparency.\"}", "{\"title\": \"Response to Reviewer CMqs [2/2]\", \"comment\": \"**[W4.1, Q4. Conventional baseline clarifications.]**\\n\\nWe appreciate your detailed feedback. We selected SVM and LR as conventional baselines because they have been shown to perform robustly in prior studies and the original dataset papers [2]. We also followed the same hyperparameter-tuning protocol as in the original work. 
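As a rough scikit-learn sketch of such a regularized-baseline tuning protocol (the grids below are illustrative values of our choosing; the actual values tuned are listed in Appendix F):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# LASSO (l1), Ridge (l2), and their combination (elastic net) for LR;
# the saga solver supports all three penalties. Grids are illustrative.
lr_grid = [
    {"penalty": ["l1", "l2"], "C": [0.01, 0.1, 1.0, 10.0], "solver": ["saga"]},
    {"penalty": ["elasticnet"], "l1_ratio": [0.25, 0.5, 0.75],
     "C": [0.01, 0.1, 1.0, 10.0], "solver": ["saga"]},
]
svm_grid = {"C": [0.01, 0.1, 1.0, 10.0]}   # C controls the L2 regularization strength

lr_search = GridSearchCV(LogisticRegression(max_iter=5000), lr_grid, cv=5)
svm_search = GridSearchCV(SVC(kernel="linear"), svm_grid, cv=5)
```

Calling `lr_search.fit(X, y)` then runs the 5-fold grid search over all penalty/strength combinations.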
Specifically, we tuned LASSO, Ridge, and their combinations for LR, and applied L2 regularization for SVM. The full list of tuned parameters is provided in Appendix F of the revised manuscript.\\n\\nAdditionally, differences in reported accuracies may arise due to variations in data splits, preprocessing, and cross-validation strategies. Note that using static splits may also incur higher accuracy because the hyperparameter tuning could overfit to the specific split. Our brain network construction follows a standard pipeline and we implement all baselines under the same framework for fair comparison. Our baseline performance is comparable to the results reported in the original paper of the datasets [2]. \\n\\n**[W4.2. Transformer baselines.]** Please refer to the general response.\\n\\n**[W5, Q5. More ablation studies.]** Please refer to the general response.\\n\\n**[W6.1. Coverage of brain regions by atlases.]** While many atlases provide broad coverage of brain regions, not all voxels are assigned to regions across different atlases. As noted in Appendix A.2, up to 33.3% of voxels may differ between atlases like AAL, Schaefer, and HO. Even for the Craddock atlas mentioned in the comment, 16.1% of voxels in the Craddock atlas are not included in the AAL atlas, while 16.0% of voxels in the AAL atlas are not included in the Craddock atlas. This variability supports our motivation to integrate multiple atlases for a more comprehensive representation of the brain.\\n\\n**[W6.2. Difference with atlas harmonization methods.]** We appreciate your question. While CAROT and other harmonization methods primarily focus on aligning data across different atlases to ensure consistency, our approach simultaneously emphasizes both consistency and complementary information extraction. 
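For intuition, one generic form of a subject-level cross-atlas consistency objective is an InfoNCE-style alignment in which each subject's embeddings under two atlases form a positive pair. The sketch below is our illustrative formulation, not necessarily the exact loss in the paper:

```python
import numpy as np

def subject_consistency_loss(Z_a, Z_b, tau=0.75):
    """InfoNCE-style sketch of a subject-level cross-atlas consistency loss.

    Z_a, Z_b: (n_subjects, d) graph-level embeddings of the same subjects
    under two atlases; matched rows are pulled together, mismatched apart.
    """
    Za = Z_a / np.linalg.norm(Z_a, axis=1, keepdims=True)
    Zb = Z_b / np.linalg.norm(Z_b, axis=1, keepdims=True)
    sim = Za @ Zb.T / tau                        # (n, n) cross-view similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_p)))       # cross-entropy on matched pairs
```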
Specifically, our method incorporates the Subject- and Population-level Consistency Constraint at a higher representation level, an auxiliary orthogonal loss, and the Disentangle Transformer to capture consistency across data while also retaining atlas-specific information. Furthermore, through Inter-Atlas Message-Passing, we enhance the extraction of complementary features. This dual focus enables our model to retain task-specific variations across atlases that are often overlooked by traditional harmonization methods. Harmonization methods typically concentrate on achieving consistency across datasets, yet this often neglects the task-specific discriminative information that is crucial for improving performance in downstream tasks. In contrast, our framework is designed as an end-to-end classification system, which not only ensures consistency but also maximizes task relevance by allowing complementary features from different atlases to contribute to the final classification task.\\n\\n[1] Modern network science of neurological disorders. Nature Reviews Neuroscience 2014\\n\\n[2] Data-driven network neuroscience: On data collection and benchmark. NeurIPS 2023\"}", "{\"summary\": \"Resting-state functional MRI is often used to predict behavioral phenotypes by measuring correlations between disparate regions/parcels of the brain. There are many different parcellation conventions (\\u201catlases\\u201d in this context) and downstream analyses are often quite dependent on the choice of atlas.\\n\\nSubmission 4706 presents a classification framework that integrates multiple different atlas conventions for a given subject. 
They propose a transformer that extracts complementary information and gain modest accuracy increases across four different datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The use of [transformer registers](https://arxiv.org/abs/2309.16588) (herein called \\u201cincompatible nodes\\u201d) for potentially filtering out incompatible information across atlases is a novel and interesting application.\", \"The use of spatial distances between regions of interest in the brain when constructing the message-passing framework is interesting and, to my limited knowledge, not commonly done.\", \"Generally clearly and straightforwardly presented.\"], \"weaknesses\": \"I do not work in this specific subfield and would be happy to revisit my score and look forward to the discussion phase. I also did not read the appendix so please correct me if I missed something.\\n\\n### 1. Unclear motivation for transformer-based approach on processed connectomes\\n\\nThe datasets in fMRI connectomics are (understandably) limited in sample size, ranging here from N=60 to N=1300. \\n\\nHowever, it is unclear how the submission can adequately train transformers on 60 data samples. Without inductive bias (beyond permutation equivariance), transformers require several orders of magnitude higher training sets across the literature. I do not see any mention of pretraining either on larger datasets (e.g. UKBiobank) that would potentially enable few-shot finetuning.\\n\\nFurther, could the authors please elaborate on why they chose to work only with the processed connectomes instead of the high-dimensional raw data where there may be more potential for self-supervised pretraining?\\n\\n### 2. Using only two atlases in experiments\\n\\nThe submission motivates itself by claiming that multiple atlases provide complementary information (which is a reasonable hypothesis) but its experiments only use two atlases (Schaefer100 and AAL116). 
Given that there is a wide range of potential atlasing procedures and many other atlases are often used, could the authors please clarify why two atlases were used in the experiments?\\n\\n### 3. Unclear significance\\n\\nThe main comparative results (presented in Table 2) are largely well within each others\\u2019 error bars without clear significance.\\n\\nIs there high inter-subject or inter-site variability s.t. standard deviations are inflated? Plotting per-subject performance as a supplemental figure should clarify this.\\n\\nFurther, could the authors please perform significance tests with corrections for multiple comparisons s.t. readers can assess whether these gains are meaningful?\\n\\n### 4. Baseline choices and clarifications\\n\\n#### 4.1. Iterative/conventional baselines\\n\\nThe only iterative baselines included for comparison are logistic regression and SVMs. It is not mentioned whether these methods were regularized in any form (e.g. LASSO) and, if so, whether the regularization hyperparameters were tuned at all. As far as I\\u2019m aware, practitioners [largely use LASSO-style methods for behavior prediction](https://db.humanconnectome.org/megatrawl/HCP820_MegaTrawl_April2016.pdf) and find that the regularization weight strongly affects the results.\\n\\nFurther, to my knowledge, deep nets and methods such as kernel regression are largely equivalent on this task when tested on much larger sample sizes ( [reference](https://pubmed.ncbi.nlm.nih.gov/31610298/) ).\\n\\nAs all results are well within each other's error bars (see point 3 above), could the authors please detail why only two non-DL methods were benchmarked and whether these methods were regularized and tuned?\\n\\n#### 4.2 Transformer baselines\\n\\nSeveral transformer-based approaches to fMRI classification are neither cited nor benchmarked against. 
For example,\\n- https://link.springer.com/chapter/10.1007/978-3-031-43993-3_28 (MICCAI\u201923)\\n- https://link.springer.com/chapter/10.1007/978-3-031-72390-2_14 (MICCAI\u201924)\\n\\nAdditionally, while not a one-to-one comparison due to these papers\u2019 use of static splits vs. the submission\u2019s 10-fold CV, these papers report much higher performance on ABIDE (high-70s vs the paper\u2019s mid-60s accuracy). \\n\\nCould the authors please address differences relative to these works, whether they could be used as baselines, and what could explain the major performance differences?\\n\\n### 5. Missing ablations for key contributions\\n\\nThe paper has several moving parts, all of which are claimed as novel contributions. However, the core parts of Section 4.1 (the identity embedding, the \u201cdisentangle Transformer\u201d, and the orthogonal loss) are not ablated. Please do so in the rebuttal or future versions.\\n\\n### 6. Minor\\nThese aspects do not affect my score.\\n\\n- The paper claims that not all regions of the brain are covered by all atlases and thus integrating multiple atlases may have benefits. Please correct me if I\u2019m wrong but don\u2019t atlases such as Shen and Craddock cover most of the brain? \\n- Somewhat orthogonally, multiple atlas _harmonization_ can also be performed with iterative methods like [CAROT](https://www.sciencedirect.com/science/article/am/pii/S136184152300124X). Could the authors please clarify the differences in motivation for their approach vs. atlas harmonization? 
I ask because much of the paper is motivated by the goal of integrating information across atlases and there are existing methods to do so.\", \"questions\": [\"How is transformer training feasible on tiny datasets with sample sizes of N=60 and N=1300 without any self-supervised pretraining?\", \"Why work with processed connectomes instead of raw data for transformer training?\", \"Please perform significance testing with multiple comparisons adjustments.\", \"Please describe how the conventional baselines were tuned and whether they were regularized\", \"Please add ablations for the key contributions in 4.1\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to classify brain networks from fMRI BOLD ROIs using multiple (potentially conflicting) atlases. The authors resolve conflicts with a deep neural network (specifically a \\\"disentangle transformer\\\") alongside external prior information related to subject and population constraints.\\n\\nTo evaluate the efficacy of the proposed approach, the authors compare against several classical and current methods on four disparate datasets with a classification downstream task. Their method achieves the greatest quantitative results compared to all others in nearly all cases.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This approach is well-motivated, as there is no consensus on the number of atlases to use. Using a single atlas conforms to the biases induced by that particular atlas, and using multiple atlases involves the use of novel methods which do not regularize consistency across multiple atlases. 
The proposed approach aims to achieve both consistency across multiple atlases as well as providing their model ROI-level \\\"information exchange.\\\" Accomplishing these two aims required the development and use of several techniques, including the proposed \\\"disentangle transformer\\\" and \\\"identity embedding\\\", the use of an orthogonality loss on a component of the representations, a nearly-linear learnable mapping for inter-atlas message passing, a bespoke contrastive loss for subject-level consistency, and a similarity loss for \\\"maintain the relationship of subjects across atlases.\\\"\\n\\nThis paper is well-motivated and fairly comprehensive with respect to the classification experiments. The contribution of the approach as an improvement for classification of brain networks is clear. I recommend this paper be accepted.\", \"weaknesses\": \"In addition to my recommendation for acceptance, I enumerate some concerns I have below:\\n\\n1. The technical claims are not fully supported. The \\\"identity embedding\\\" is either mis-named or its explanation is unclear. The embedding is not identity; it seems instead to simply be a learnable embedding. It is unclear whether the parameters of the MLP in Eq. 1 are learnable. If so, what is the purpose behind W_ID?\\n2. The efficacy of the \\\"disentangle Transformer\\\" is not fully supported. How well are conflicting atlases disentangled? Was the orthogonal loss actually useful towards separating shared and conflicting information across atlases? An experiment to support the authors' intuition is missing.\\n3. The same is true for both subject- and population-level consistencies as well as message-passing. The explanation is sensible in text, but the actual efficacy of the proposed architecture and losses is not described in the experiments and results. 
The reader sees that the downstream classification is improved in light of the proposed components, but it is unknown whether these components accomplish what they are intended to.\\n4. Some figure and table captions are too terse. Figure 1, Table 1, and Table 3 could benefit from better description to help the reader understand their contents.\\n5. Table 5 experiment times include training, validation, and testing across multiple cross-validation folds. This is unusual; instead, reporting training+validation time and then testing time (for a single subject) separately would be more conventional. This guides future users of the proposed method towards how much time and compute should be budgeted for training the approach as well as what hardware is necessary to use the proposed method at scale in their own work.\", \"questions\": \"These questions are adapted / copied from the weaknesses listed above:\\n\\n1. It is unclear whether the parameters of the MLP in Eq. 1 are learnable. If so, what is the purpose behind W_ID?\\n2. The efficacy of the \\\"disentangle Transformer\\\" is not fully supported. How well are conflicting atlases disentangled? Was the orthogonal loss actually useful towards separating shared and conflicting information across atlases? An experiment to support the authors' intuition is missing.\\n3. The same is true for both subject- and population-level consistencies as well as message-passing. The explanation is sensible in text, but the actual efficacy of the proposed architecture and losses is not described in the experiments and results. The reader sees that the downstream classification is improved in light of the proposed components, but it is unknown whether these components accomplish what they are intended to. 
What evidence do the authors have towards these claims?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**[Significance Testing on Large Datasets]**\\nThank you for your thoughtful feedback and for raising the score. We acknowledge your concerns about the robustness of our model\\u2019s gains on larger datasets. However, we would like to highlight that most of the second-best methods are single-atlas approaches. Despite our use of a relatively simple architecture based on Transformers and GCNs, our model demonstrates significant improvements over other multi-atlas methods. This highlights its strong foundation and expandability. We believe that with further incorporation of more sophisticated components, AIDFusion has the potential to achieve even greater performance improvements.\\n\\n**[Data Efficiency and Transformer Inductive Biases]** \\nWe appreciate your clarification regarding data efficiency and the challenge of Transformers' lack of inductive biases. Typically, Transformers require large datasets to learn these biases, posing a challenge in neuroimaging with limited sample sizes. To address this, our approach incorporates specific design choices: \\n1. **Atlas-Specific Consistency**: By integrating identity embeddings and disentangling consistent and incompatible information across atlases, the model inherently captures domain-specific inductive biases. \\n2. **Orthogonal Constraints**: These constraints encourage diverse and non-redundant feature learning, mitigating overfitting risks in small datasets. \\n\\nAdditionally, unlike deeper Transformer architectures like ViT, which have extensive parameters and require large-scale datasets, our Transformer is intentionally shallow (1 layer) and narrow (hidden dimension = ~100) with a single attention head. This minimal parameterization allows effective learning even on small datasets. 
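For scale, here is a numpy sketch of a comparably shallow, single-head attention block (dimensions illustrative; this is not our exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_roi = 100, 116          # hidden dim ~100; e.g. 116 AAL ROIs (illustrative)

# A single attention head: three d x d projections, ~30k parameters in total.
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X):
    """One single-head self-attention pass over node features X: (n_roi, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))        # (n_roi, n_roi) attention map
    return A @ V, A

out, attn = self_attention(rng.standard_normal((n_roi, d)))
n_params = 3 * d * d   # 30,000 parameters -- far fewer than a deep ViT-style model
```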
\\n\\nNonetheless, we acknowledge the inherent challenges of training Transformers from scratch in data-constrained settings. As part of our future work, we are exploring brain-specific pretraining approaches, which we believe will further enhance model performance and generalizability.\"}", "{\"comment\": \"**[W2. Three Atlases Experiments]**\\n\\nWe acknowledge that adding more atlases does not always improve performance, which is similar to the observations in multimodal learning where incorporating additional modalities may not consistently yield better results. Specifically, adding a third atlas might introduce noise or conflicting information that outweighs its potential benefits. \\n\\nBesides, to further verify our finding that our method will benefit more from atlases with a similar number of ROIs, we conduct additional experiments on the ABIDE dataset using three atlases (Schaefer100, AAL116, and BASC122 [3]). Results shown in the following table demonstrate that for five multi-atlas methods, three of them (MGRL, METAFormer, and AIDFusion) achieve better performance with three atlases compared to any two-atlas combinations. Importantly, our proposed AIDFusion still outperforms all baselines in these settings. This reinforces our claim that AIDFusion effectively integrates multi-atlas information while remaining robust to the inclusion of additional atlases. Detailed results and discussion have been added to Appendix G in the revised manuscript. 
The experiments for ADNI under the same setting are still running and we will update the result to the next version of our paper.\\n\\n| Schaefer100 | AAL116 | BASC122 | model | acc \\u00b1 std |\\n|--------------------|---------------|----------------|-------------------|---------------------|\\n| \\u221a | \\u221a | | MGRL | 61.56 \\u00b1 4.90 |\\n| \\u221a | \\u221a | | MGT | 63.32 \\u00b1 3.90 |\\n| \\u221a | \\u221a | | METAFormer | 61.27 \\u00b1 4.05 |\\n| \\u221a | \\u221a | | LeeNet | 61.28 \\u00b1 3.12 |\\n| \\u221a | \\u221a | | AIDFusion | **66.35** \\u00b1 3.26 |\\n| | \\u221a | \\u221a | MGRL | 60.80 \\u00b1 5.12 |\\n| | \\u221a | \\u221a | MGT | 58.49 \\u00b1 5.64 |\\n| | \\u221a | \\u221a | METAFormer | 62.92 \\u00b1 5.79 |\\n| | \\u221a | \\u221a | LeeNet | 60.01 \\u00b1 4.00 |\\n| | \\u221a | \\u221a | AIDFusion | **65.97** \\u00b1 3.60 |\\n| \\u221a | | \\u221a | MGRL | 63.14 \\u00b1 4.17 |\\n| \\u221a | | \\u221a | MGT | 59.90 \\u00b1 3.46 |\\n| \\u221a | | \\u221a | METAFormer | 62.53 \\u00b1 3.78 |\\n| \\u221a | | \\u221a | LeeNet | 59.12 \\u00b1 4.20 |\\n| \\u221a | | \\u221a | AIDFusion | **64.79** \\u00b1 2.80 |\\n| \\u221a | \\u221a | \\u221a | MGRL | 63.62 \\u00b1 4.57 |\\n| \\u221a | \\u221a | \\u221a | MGT | 62.84 \\u00b1 3.85 |\\n| \\u221a | \\u221a | \\u221a | METAFormer | 65.80 \\u00b1 5.61 |\\n| \\u221a | \\u221a | \\u221a | LeeNet | 60.19 \\u00b1 3.77 |\\n| \\u221a | \\u221a | \\u221a | AIDFusion | **66.65** \\u00b1 4.14 |\\n\\n**[W1.1, Q1. Data Efficiency]** We appreciate the reviewer\\u2019s concerns about training transformers on limited datasets. fMRI data are inherently high-dimensional and suffer from a low signal-to-noise ratio due to factors such as cardiac and respiratory processes or scanner instability [4]. This poses challenges for training machine learning models, particularly on small datasets. 
Transformers, while not immune to these issues, are better equipped to model the highly nonlinear nature of functional interactions in brain networks. However, for larger-scale datasets that are collected from multiple sites, such as ABIDE (17 sites) and ADNI (89 sites), site-specific noise can complicate training, leading to potential overfitting. This explains the observed trends: transformers may generalize better in smaller datasets such as Matai, where cross-site variability is less pronounced, while non-DL methods struggle with the data\\u2019s high dimensionality and complexity. \\n\\n**[W3, Q3. Significance Tests]** We focused on comparing AIDFusion with the best multi-atlas baselines due to their alignment with our problem definition. Following the reviewer\\u2019s suggestion, we conducted one-sided paired t-tests between AIDFusion and the overall best baselines for each dataset. The revised tests yielded p-values of 0.0775 and 0.0380 for ABIDE and ADNI datasets, respectively. \\n\\n[3] Multi-level bootstrap analysis of stable clusters in resting-state fmri. NeuroImage 2010\\n\\n[4] Structural deep brain network mining. KDD 2018\"}", "{\"title\": \"[Gentle Reminder] Discussion period is closing soon\", \"comment\": \"Dear Reviewer 3zUh,\\n\\nThank you again for your time and valuable suggestions on our work! We understand you are busy. As the discussion period is closing soon, could you please take a look of our response above and let us know if our explanation addresses your concerns? We are more than happy to provide more details if it does not. And we would sincerely appreciate it if you could jointly consider our responses above when making the final evaluation of our work.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"metareview\": \"This paper introduces a transformer for fMRI data (presumably only resting state correlation matrices). 
They present classification task accuracy on four datasets (ABIDE, ADNI, PPMI, and Matai).\\n\\nReviewer responses are mixed, though discussion has improved scores. There remain unanswered questions, however. Among these, specifically concerning are the high standard errors; after rebuttal, there are now reportedly low p-values, but these are paired. This appears unanswered, as raised by `CMqs` and `3zUh`, and inconsistent with literature results, as raised by `9YF5`. Reviewer `9YF5` also raises questions about the number of hyper-parameters that may be tuned. Even if all other claims are believed at face value, this is problematic.\\n\\nThere are positive aspects to this submission and innovations in their work. However, it is below acceptance quality, and experimental concerns are unacceptable in general.\\n\\nBeyond this, I find that the motivation is unclear. Is atlas fusion, embedding, or disambiguation the science in question? If so, we should see experiments showing that this model consistently identifies \\\"correct\\\" atlases. This is both a question at the identity embedding stage and at the \\\"disentangle transformer\\\" stage, both of which remain untested experimentally (with respect to identification properties). If the purpose is not better atlases (or identifying a \\\"true\\\" atlas), then pure classification remains the motivating factor, which is weak.\\n\\nI encourage the authors to incorporate the concerns of Reviewer `CMqs` in particular. I find that the contributions of this manuscript have merit, but remain unpolished and leave questions unexplored in the science of brains and fMRI. 
Seeing as the transformer contributions are low (which is admittedly not the purpose of the paper; it is an \\\"application\\\" paper), for the above reasons I recommend rejection of this submission.\", \"additional_comments_on_reviewer_discussion\": [\"Overall the reviewers have highlighted several strong points, but have also opined on specific pieces that can be improved.\", \"Statistical and experimental problems (as detailed above).\", \"Overall writing problems (\\\"Identity Embedding\\\" and \\\"Disentangle Transformer\\\" break with the usual cases of terminology)\", \"Inclusion of large baseline datasets. ABCD and HCP (YA and Aging) are high quality imaging datasets which are also available.\", \"I feel that the manuscript has improved through this review process. Inclusion of the ablation test in particular is useful.\"]}", "{\"title\": \"General Response [1/2]\", \"comment\": \"We appreciate the reviewers\\u2019 constructive feedback and positive remarks on the paper. We are grateful for the recognition of our work being **well-motivated and promising** (VhSn, 3zUh), **well-structured and easy to understand** (CMqs, 9YF5), **novel and interesting application** of transformer registers (incompatible nodes) (CMqs), and having **comprehensive experimental results** (VhSn) that **effectively demonstrate its effectiveness** (9YF5), while **offering new insights** to the field of neuroscience (9YF5).\\n\\nIn this rebuttal, we have addressed the reviewers' main concerns and provided additional experiments and clarifications. Revisions in our manuscript are highlighted in blue. We are open to further discussions if there are any unresolved concerns or additional questions.\\n\\n**[Clarify identity embedding @VhSn, 3zUh]** We appreciate the request for clarification regarding the identity embedding. The identity embedding is designed to incorporate the anatomical consistency of brain networks across subjects with the same atlas. 
Specifically, brain networks constructed with the same atlas share the same ROI definitions. For instance, the first node in any brain network constructed with the AAL atlas corresponds to the same ROI. Thus, we use the identity embedding $W_{ID}[1, :]$ to represent this specific ROI and add it to the corresponding node feature $X[1, :]$. This design serves as a positional embedding mechanism, similar to techniques used in graph Transformers, where the learnable MLP parameters help incorporate this positional information effectively. We have clarified this in Appendix C of our revision.\\n\\n**[More evidence for the disentangle Transformer @VhSn, 9YF5]** The disentangle Transformer aims to capture both atlas-specific information and filter out incompatible features across atlases. Drawing inspiration from the use of additional tokens in the NLP and CV domains for extracting global information, we introduced the incompatible nodes with an orthogonal constraint to disentangle shared and conflicting features. Discussion in Section 5.4 and Appendix H demonstrates the ability of AIDFusion to extract atlas-consistent information. To further justify this design, we conducted a case study by visualizing the attention maps of AIDFusion with and without incompatible nodes. The results show that without incompatible nodes, the attention maps are highly imbalanced, with much stronger attention on the Schaefer atlas compared to the AAL atlas. Furthermore, the attention map of AAL without incompatible nodes exhibits over-smoothing, failing to highlight distinguishable network connections. This indicates that the model struggles to extract informative features when inconsistent atlas-specific information is not filtered out. 
These findings support the necessity of incompatible nodes, and we have included the visualization and discussion in Appendix I.2.\\n\\n**[More baseline comparison @CMqs, 9YF5]** Regarding Transformer-based baselines, BNT [3] applied Transformers to learn pairwise connection strengths among brain regions across individuals; Com-BrainTF [1] uses a hierarchical local-global transformer for community-aware node embeddings; GBT [2] employs an attention weight matrix approximation to focus on the most relevant components for improved graph representation. For multi-atlas related work, CcSi-MHAHGEL [4] introduces a class-consistency and site-independence Multiview Hyperedge-Aware HyperGraph Embedding Learning framework to integrate brain networks constructed on multiple atlases in a multisite fMRI study. We add all these methods into discussion in Section 2 of our revision. We also include comparisons with GBT [2] and BNT [3] in our updated results (Table 2). As shown in the following table, AIDFusion continues to achieve superior performance. 
Com-BrainTF [1] and CcSi-MHAHGEL [4] are not included because they rely on additional information (community and site distribution) that are not applicable to our setting.\\n\\n| atlas | model | ABIDE | ADNI | PPMI | Matai |\\n|--------------------------------|--------------|---------------------|---------------------|----------------------|----------------------|\\n| Schaefer100 | BNT | 60.01 \\u00b1 5.33 | 66.39 \\u00b1 3.29 | 56.60 \\u00b1 10.82 | 60.00 \\u00b1 13.33 |\\n| | GBT | 61.76 \\u00b1 4.89 | 64.22 \\u00b1 2.67 | 59.07 \\u00b1 12.74 | 65.00 \\u00b1 18.92 |\\n| AAL116 | BNT | 58.95 \\u00b1 4.84 | 59.39 \\u00b1 4.44 | 54.14 \\u00b1 8.53 | 63.33 \\u00b1 24.49 |\\n| | GBT | 59.93 \\u00b1 3.82 | 58.35 \\u00b1 5.96 | 54.26 \\u00b1 10.58 | 60.00 \\u00b1 20.00 |\\n| Schaefer100 + AAL116 | AIDFusion | **66.35** \\u00b1 3.26 | **67.57** \\u00b1 2.04 | **66.00** \\u00b1 4.71 | **75.00** \\u00b1 13.44 |\"}", "{\"title\": \"Response to Reviewer 3zUh\", \"comment\": \"**[W1, Q2, Q4. Disentangle transformer may discard atlas-specific information that are useful for classification.]**\\n\\nThank you for pointing this out. We aim to filter out atlas-specific information that is harmful to the downstream task. This is done in a soft way by using incompatible nodes in disentangle transformer. The atlas-specific information is not entirely filtered out as we can observe from Fig. 2 that different atlas still focuses on different ROI connections. In our design, we have applied the consistency constraints primarily at a higher representation level rather than directly at the ROI or feature level to balance this trade-off. The orthogonal loss is only an auxiliary objective, ensuring that the disentangled representations are distinct but not dominant in driving the training process. 
Additionally, our case studies in Section 5.4 and Appendix H illustrate that AIDFusion focuses on diverse connections from different brain networks, demonstrating its capability to capture atlas-specific information. \n\nRegarding Q4, the learned consistent representations from brain networks with different coverages (e.g., AAL covering cerebellum regions, while Schaefer focuses on cerebral cortex) are obtained at a high-level feature space. This consistency aims to extract information that is disease-specific, even if the coverage differs. In our case studies, the attention weights show that the cerebellum ROIs receive lower attention, which aligns with existing neuroscience findings suggesting a lesser relevance of cerebellum regions in neurological disorder classification.\n\n**[W2, Q5. It is not clear what atlases should be fused.]** Thank you for this observation. We agree that determining the optimal set of atlases for fusion is an open research question. However, the main focus of our work is to propose a robust method for integrating multiple atlases rather than conducting an exhaustive search for the best atlas combination. Our experimental results show that in most cases using two atlases achieves superior results to using a single atlas with the same model. Based on this observation, we further explore combining AAL116 with Schaefer at different numbers of ROIs, reported in Table 11 of Appendix G. Based on the results, we observe that combining two atlases with similar numbers of ROIs (e.g., Schaefer100 and AAL116) tends to yield better performance, which may be due to the balanced information content from both anatomical and functional perspectives. We acknowledge that this is a preliminary finding, and more extensive exploration is needed to determine the most effective atlas combinations. This is something we plan to investigate in future work.\n\n**[Q1. Not clear how the identity embedding was implemented.]** We apologize for the confusion. 
Yes, one node corresponds to one brain ROI. Please refer to our general response for a detailed explanation. In brief, the identity embedding is implemented using a parameter matrix $ W_{ID} $ that encodes the identity information of each ROI in different subjects with the same atlas. Each node is assigned an embedding vector based on its ROI identity, helping to differentiate nodes that belong to the same ROI across different subjects.\n\n**[Q3. Typo in Eq. (9).]** Thank you for raising this issue. We have modified $i$ to $j$ in Eq. (9) of our revision.\n\n**[Q6. Loss function when extended to more than 2 atlases.]** Yes, when AIDFusion is extended to handle more than two atlases, it computes the orthogonal loss, inter-atlas message passing, and both subject-level and population-level consistency for each pair of atlases. This pairwise computation ensures that the model can fully leverage the complementary information across all available atlases while maintaining consistency. We have clarified this in Appendix G of our revision.\"}", "{\"comment\": \"Thank you for your response and for addressing my concerns. Your comment that the method requires computing population-level consistency each time is actually quite important I think. This could be emphasized in the final manuscript to prevent misunderstandings, but is fine as-is. My rating remains at \\\"accept\\\".\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your constructive feedback and positive recommendation. As per your suggestion, we have added the explanation about the population-level consistency computation in Section 5.6 of our revision to prevent misunderstandings.\"}", "{\"title\": \"Response to Reviewer VhSn\", \"comment\": \"**[W1, Q1. Clarification of identity embedding.]** Please refer to the general response.\\n\\n**[W2, Q2. 
The efficacy of the disentangle Transformer is not fully supported.]** Please refer to the general response.\\n\\n**[W3, Q3. The actual efficacy of the proposed architecture and losses is not described.]** \\n\\nWe appreciate the reviewer\\u2019s insightful feedback on the subject- and population-level consistency constraints in our AIDFusion model. To address this, we conducted an in-depth analysis and visualized the impact of these losses.\\n\\n1. Subject-level Consistency Loss: We visualized the difference matrices of hidden feature representations $\\\\hat{\\\\boldsymbol{H}}$ for two different brain atlases (Schaefer100 and AAL116). The results demonstrate that the subject-level consistency loss significantly reduces the discrepancies between the hidden features learned from different atlases, thereby enhancing the stability of the model's representations. This alignment indicates that AIDFusion captures shared patterns across atlases effectively, contributing to improved generalization.\\n\\n2. Population-level Consistency Loss: We analyzed the difference in similarity matrices $\\\\boldsymbol{G}$, which reflects pairwise similarities of subjects across atlases. Without the population-level consistency loss, substantial discrepancies were observed, potentially confusing the model. In contrast, applying this loss resulted in better-aligned similarity matrices, ensuring consistency in graph-level representations across different views. This leads to more robust predictions, even when faced with variations in brain network atlases.\\n\\nOverall, these findings support the importance of both consistency constraints in aligning multi-atlas representations, improving model robustness, and enhancing performance in downstream tasks. We have added these visualizations and detailed explanations to Appendix I.1 of our revision for clarity.\\n\\n**[W4. Some figure and table captions are too terse.]** Thank you for pointing this out. 
We have added more details on the captions of Figure 1, Table 1 and Table 3 to make them more informative in the revision.\\n\\n**[W5. Report testing time for each subject.]** Thank you for this valuable feedback. We understand that reporting separate training, validation, and testing times is more conventional. However, due to the small size of the datasets and the relatively small number of parameters in our model, the testing time for each subject is extremely short and does not significantly impact the overall computational budget. Additionally, our method requires computing population-level consistency, which scales with the number of subjects rather than being a linear per-subject cost. For this reason, we reported the overall time cost rather than averaging it by number of subjects.\"}", "{\"summary\": \"In this paper, a novel method named AIDFusion is proposed to learn unified and informative representations of brain functional networks derived from multiple brain atlases, which are expected to facilitate the brain network based classification. The method consists of several modules including disentangle transformer, inter-atlas message-passing, subject-level and population-level consistency to learn atlas-consistent information. The proposed method has been evaluated on multiple datasets to demonstrate its effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The idea of learning unified representation from multiple brain networks is promising to improve classification performance.\\n2. Multiple techniques are proposed to facilitate the unified representation learning, e.g., disentangle transformer, inter-atlas message-passing, and multi-level consistency.\", \"weaknesses\": \"1. The brain networks from different atlases will contain atlas-consistent information and atlas-specific information, both types of information may be informative for classification. 
While the proposed method adopts several techniques to get enhanced atlas-consistent information, the atlas-specific information may not be effectively captured. Not sure if the proposed disentangle transformer may discard atlas-specific information that is useful for classification.\n2. As demonstrated in the experimental results, the fusion of multiple arbitrary atlases may not improve classification performance. It is not clear and not verified what atlases should be fused.\", \"questions\": \"1. For the identity embedding in the disentangle transformer, it is not clear how the identity embedding was implemented. \\\"... a parameter matrix W_ID to encode nodes within the same ROI\\\", but it looks like one node corresponds to one brain ROI in the proposed method.\n2. The brain networks from different atlases will contain atlas-consistent information shared across atlases and atlas-specific information, both types of information may be informative for classification. While the proposed method adopts several techniques to get enhanced atlas-consistent information, the atlas-specific information may not be effectively captured. Not sure if the proposed disentangle transformer may discard atlas-specific information that is useful for classification.\n3. Eq. (9) is not correct. I think i refers to ROI in atlas a or b, not cluster.\n4. The AAL atlas contains cerebral cortex, subcortical structures, and cerebellum regions, while the Schaefer atlas only covers cerebral cortex regions. Not sure if it is proper to learn consistent representations from brain networks with different brain coverages.\n5. As demonstrated in the experimental results, the fusion of multiple arbitrary atlases may not improve classification performance. A principled way to identify what atlases should be fused is needed.\n6. 
When applied to more than 2 atlases, will the orthogonal loss, inter-atlas message passing, and subject/population-level consistency be computed for each pair of atlases?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response [2/2]\", \"comment\": \"**[More ablation study @CMqs, 9YF5]** Thank you for the suggestion. We have expanded our ablation studies in the revised manuscript (Table 4). Specifically, we tested our model without the disentangle Transformer (labeled \\\"TF\\\"), without the identity embedding (\\\"Disen TF w/o ID\\\"), and without the orthogonal loss (\\\"Disen TF w/o $L_{orth}$\\\"). The results show a consistent drop in performance when any of these components is removed, highlighting their importance in achieving the superior performance of AIDFusion.\\n\\n| backbone | IA-MP | SC | PC | adni |\\n|------------------------------|--------------|-----------|-----------|---------------------|\\n| TF | | | | 63.99 \\u00b1 4.34 |\\n| TF | \\u221a | \\u221a | \\u221a | 66.82 \\u00b1 1.25 |\\n| Disen TF w/o ID | \\u221a | \\u221a | \\u221a | 66.97 \\u00b1 1.35 |\\n| Disen TF w/o $L_{orth}$ | \\u221a | \\u221a | \\u221a | 66.06 \\u00b1 1.17 |\\n| Disen TF | | \\u221a | \\u221a | 66.58 \\u00b1 1.72 |\\n| Disen TF | \\u221a | | \\u221a | 66.37 \\u00b1 1.56 |\\n| Disen TF | \\u221a | \\u221a | | 65.91 \\u00b1 2.08 |\\n| Disen TF | \\u221a | \\u221a | \\u221a | **67.57** \\u00b1 2.04 |\\n\\n[1] Community-Aware Transformer for Autism Prediction in fMRI Connectome. MICCAI 2023\\n\\n[2] GBT: Geometric-Oriented Brain Transformer for Autism Diagnosis. MICCAI 2024\\n\\n[3] Brain Network Transformer. NeurIPS 2022\\n\\n[4] Multiview hyperedge-aware hypergraph embedding learning for multisite, multiatlas fMRI based functional connectivity network analysis. MIA 2024\"}", "{\"comment\": \"Thanks again for the detailed response. 
As some of my initial concerns (e.g., adding relevant ablations) have been addressed, I am raising the score to 5. It is primarily not higher as the reported gains over baselines do not seem to be robust on the largest reported datasets, please see the discussion on significance testing.\\n\\n[ Regarding the rebuttal discussion around data efficiency, for clarity, I was referring to transformers (by design) not having inductive biases (beyond permutation equivariance) and therefore having to learn them from large datasets, not the inter-site variability of neuroimaging datasets. It is not clear how this is learnable from datasets with N=60 from random initialization without pretraining. ]\"}", "{\"comment\": \"Thank you for the thorough response. Given the limited time remaining (my apologies for the delayed response), I'll focus on major points alone.\\n\\n> **[W2. Use of only two atlases in experiments.] \\\"We observed that all models tested on three atlases achieved poorer performance than the ones tested on Schaefer100 and AAL116.\\\"**\\n\\nDoes this then not negate the premise of the paper? \\n\\nThe claim is that the proposed transformer can learn to integrate information from across multiple atlases. If two atlases are sufficient for the considered tasks, adding more atlases should, at worst, retain the same performance, not decrease it. If by the proposed mechanisms, the transformer learns to aggregate relevant features across multiple atlases, it should learn to ignore the third atlas. \\n\\n> **[W1.1, Q1. Transformer trained on small datasets without any self-supervised pretraining.]**\\n\\nI understand that sample sizes are limited in brain studies and the associated challenges. 
To clarify, I'm asking how it is possible to train transformers *from random initialization* on datasets with 60 graphs (as in this paper) when all other subdomains of the transformer literature demonstrate that these networks require several orders of magnitude larger sample sizes to start showing benefits over either networks with relevant inductive biases or iterative methods.\\n\\nFor example, Table 2 (Schaefer+AAL) shows the opposite trends from what we expect w.r.t. data efficiency:\\n- On datasets with higher sample sizes (ABIDE/ADNI), we see minor differences w.r.t. regularized iterative non-DL baselines such as logistic regression and SVMs.\\n- On datasets with smaller sample sizes (PPMI/Matai), we see larger differences w.r.t. regularized iterative non-DL baselines such as logistic regression and SVMs.\\n\\nIt is in data-limited settings (e.g. N=60 as in this work) where we would expect non-DL baselines to outperform DL. I'm open to being wrong, could the authors please elaborate on this point? \\n\\n> **[W3, Q3. Unclear significance of results.] \\\"one-sided paired t-tests between AIDFusion and the best multi-atlas baselines yielded p-values of 0.0116, 0.0380, 0.0830, and 0.0886 respectively\\\"**\\n\\nIf testing between the best and second best, why are these conducted on the best *multiatlas* method specifically? Looking at Table 2, it does look like the 2nd best method for 3 out of 4 datasets is a single atlas method.\"}" ] }
9WbNpRuFuS
Approximately Aligned Decoding
[ "Daniel Melcer", "Sujan Kumar Gonugondla", "Pramuditha Perera", "Haifeng Qian", "Wen-Hao Chiang", "Yanjun Wang", "Nihal Jain", "Pranav Garg", "Xiaofei Ma", "Anoop Deoras" ]
It is common to reject undesired outputs of Large Language Models (LLMs); however, current methods to do so require an excessive amount of computation, or severely distort the distribution of outputs. We present a method to balance the distortion of the output distribution with computational efficiency, allowing for the generation of long sequences of text with difficult-to-satisfy constraints, with less amplification of low probability outputs compared to existing methods. We show through a series of experiments that the task-specific performance of our method is comparable to methods that do not distort the output distribution, while being much more computationally efficient.
[ "Constrained Decoding", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=9WbNpRuFuS
https://openreview.net/forum?id=9WbNpRuFuS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xamJpVTpbv", "wp1BdvDh1M", "tOtXk3NXye", "pEG3QZv8yO", "nHxm5gHVXS", "n6yoiw1sS7", "lvgaIh5X1Z", "gUephogAZD", "fstSeyQV3t", "aMeFHCctCd", "KeI8waAjW9", "GXo95bvuxu", "Ahtdr9nmuB", "8zVe63EFX7", "82EFVnqiru", "2JGhbLzdAQ", "0Z1LWbhzzW" ], "note_type": [ "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1731952374876, 1737523681134, 1732547857571, 1730689631966, 1731987576262, 1732051965924, 1734885814216, 1731948252474, 1729877385608, 1731948311070, 1731948702934, 1731948491741, 1732012844958, 1732119113764, 1730078856116, 1730643394874, 1732114763155 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_xsaW" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_ViLc" ], [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Submission5061/Area_Chair_9FNq" ], [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_ViLc" ], [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Submission5061/Authors" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_ViLc" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_WHQc" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_WHQc" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_92k1" ], [ "ICLR.cc/2025/Conference/Submission5061/Reviewer_ViLc" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer ViLc\", \"comment\": \"We thank reviewer ViLc for their comments.\\n\\n> 
Lack of related work\\n\\nOur intention was for Sections 2 and 3 to collectively serve as an extended related work, with more detailed comments on each method in context, rather than a separate related work section. However, if the reviewers collectively agree that an explicit section would help clarify the presentation, we can incorporate a separate related work section.\\n\\n> The contribution is a bit marginal, which is an incremental combination of ASAp and speculative decoding.\\n\\nWe respectfully disagree. The novelty of the proposed method has been acknowledged by the reviewers; one has pointed out that we are underselling the novelty as a combination of ASAp and speculative sampling. We will improve the presentation to address the issue. \\n\\nIt is nontrivial to combine these ideas to solve an important problem, and we are unaware of any similar work in literature. Additionally, part of our contribution is showing that ASAp and constrained decoding exist as two ends of a spectrum, with AprAD in the middle of the two methods.\\n\\n> AprAD may underperform ASAp in certain tasks, while only about 1.4 times efficiency improvement\\n\\nOur method is intended to serve as a useful midpoint between constrained generation and ASAp---a reasonable default that can be overridden in cases where maximal speed or maximal conformity to the LLM probability distribution is required. Our method is on the Pareto frontier of generation ratio versus accuracy.\\n\\n> Why does ASAp perform so well in Table 3, while being much worse in Figure 2?\\n\\nOur method performs well relative to ASAp in the lipogram task because if a specific generation is rejected, then it is still likely for any other generation to be a violation. Even after attempting many generations, ASAp can only output a few tokens without the banned letter. 
Our method is able to overcome this, resulting in a large relative improvement.\\n\\nIn contrast, if a hallucinated method is rejected during code generation for the BigCodeBench task, it is unlikely for the LLM to repeatedly attempt to generate additional hallucinated methods, so ASAp is able to cope with this for a 47-56% overhead for the problems shown in Table 3. Even still, our method achieves near-equivalent accuracy with only a 6-8% overhead. This reduction represents a substantial latency improvement for users of a coding assistant, where user experience is highly sensitive to latency.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Upload of Revision\", \"comment\": [\"We have uploaded a revision with the following changes, with edits highlighted in blue. We would like to thank all reviewers for the extremely helpful suggestions; incorporating them has enhanced the presentation.\", \"A re-written introduction and an addition of a related work section.\", \"Expanded discussion section with explanations of when a given method may be preferred, and with additional description of generalization of sampling-based generation methods.\", \"Added description of FUDGE to existing approaches.\", \"Added appendix with additional implementation details and expanded pseudocode.\", \"Assorted bug fixes and clarifications as pointed out by reviewers.\"]}", "{\"summary\": \"The paper proposes a method to speed up the recently proposed ASAp algorithm for constrained decoding by leveraging a connection to speculative decoding. This connection relaxes the exactness of the decoding algorithm, but improves the efficiency of decoding. ASAp iteratively samples a prefix from an LLM until it finds that the prefix violates a constraint, in which case it stores the prefix in a \\\"bad set\\\" B and restarts generation. 
Instead of restarting the generation in this step, this paper proposes reusing partial prefixes that don't violate the constraint, inspired by speculative decoding. Compared to rejection sampling at one extreme and greedy constrained decoding at the other, the authors show that proposed algorithm (AprAD) occupies a useful midpoint on the tradeoff between computational efficiency and faithfulness to the true posterior distribution $p(response | constraint)$.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"By relaxing the exactness of ASAp and leveraging a connection to speculative decoding, AprAD is faster than rejection sampling / ASAp but produces much higher-quality outputs than greedy constrained decoding on the two empirical settings (generate without using a particular letter + code generation without hallucinated API calls).\", \"The authors use a synthetic setting to test how much AprAD distorts the output distribution compared to ASAp and greedy constrained decoding, and find that AprAD results in much lower divergence to the true distribution than constrained decoding and much more efficient decoding (in terms of model evaluations) than ASAp.\"], \"weaknesses\": [\"Regarding posterior estimation approaches, it's not explained in detail what \\\"Both of these methods...also face issues in certain dense error sets\\u2014the approximation of the posterior tends to become inaccurate when arbitrary generations almost immediately lead to an error.\\\" means. The paper needs to back this claim up with experiments, and it's still worth comparing to these methods.\", \"The proposed constraints in the experiments are somewhat arbitrary and differ from examples in other constrained decoding work. 
For example, why not evaluate on the same tasks as ASAp?\", \"The paper is missing a citation and comparison to a very simple decoding method that tries to solve the same problem with greedy constrained decoding: [FUDGE](https://aclanthology.org/2021.naacl-main.276.pdf).\", \"Membership in $\\mathcal{B}$ has to be determinable by any prefix, which is not a limitation of other methods (such as FUDGE, and the posterior estimation methods cited in the paper.) In fact, this very assumption should fit FUDGE very well.\", \"It's stated that the main weakness of ASAp is \\\"While ASAp succeeds in cases where there are only a small number of errors that comprise the majority of the probability mass, its generation speed suffers when there are a large number of errors\\u2014each error must be discovered before it is added to $B$. In dense probability sets, its performance characteristics are similar to rejection sampling, as there are an exponential number of error sequences that must be discovered as generation length increases.\\\"\", \"But doesn't the proposed method have the same drawback, in the sense that every generation violating the constraint must be discovered before it's added to $B$? The decoding *speed* should be better, but doesn't the fact remain that in the limit of many samples, both methods explicitly store every prefix violating the constraints?\", \"Related to the above points, the cited weakness of the posterior estimation approaches is cases \\\"when arbitrary generations almost immediately lead to an error\\\". 
But aren't those also precisely the bad cases for ASAp and AprAD in terms of representing $B$?\"], \"questions\": [\"L304--L309---why ask human raters about the intent of the constraint when many of these things (e.g., lookalike cyrillic characters, accented characters) could just be incorporated into the constraints by expanding the set of banned tokens?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for your kind response.\", \"Regarding the \\\"related works,\\\" I believe there are subtle distinctions between the preliminary section and the related works. The preliminary section is meant to provide the technical foundation, while the related works offer a broader perspective on the field. Experts in the area might skip the related works, but a general audience could gain an initial understanding of the area. You could include studies that are not directly related to your paper, which wouldn't fit in the preliminary section. This is just my personal opinion, but I do believe that most papers accepted by ICLR follow this.\", \"Regarding \\\"content length,\\\" let me elaborate: it is surprising to the reviewer that the paper is not more than 8 pages, while the recommended length for ICLR is 9-10 pages. Although we know that the core idea is what's most important, this might give readers the impression that the paper is not \\\"thoroughly\\\" written. It would be better to polish the introduction, as many people only read that section. For example, you could explicitly list your contributions, intuitively explain your method, or provide more background information.\", \"For the \\\"performance v.s. ASAp\\\", I appreciate your response. 
Could you make it clear which kind of task is more appropriate for AprAD in the revision?\", \"For the \\\"contribution\\\", I will read other reviewers' comments and reconsider this later.\"]}", "{\"title\": \"Response to Additional Questions by Reviewer ViLc\", \"comment\": \"Thank you for responding and for the additional follow-up questions.\\n\\n> Related work, content length\\n\\nThese are fair points. We will use the space available to add clarifications and address reviewer comments, as well as adding a short related work section to provide a broader perspective on the field.\\n\\n> Performance vs. ASAp\\n\\nWe will add a section on considerations of choosing a method for a particular task.\\n\\n> Human raters and evaluation of Lipogram task\\n\\nAll outputs were shuffled, and the labels of which method generated each output were hidden from the human raters. We can clarify this randomization process in the main text of the paper. We also include outputs of each method (random selection; not cherry-picked or truncated) in the appendix.\\n\\nWhile AprAD occasionally makes a few flubs (line 650-651, \\\"Choose a well- Press your collar down\\\"), its output is largely coherent and rarely exhibits the artifacts that are consistently observed with constrained generation. Constrained generation tends to significantly amplify extremely low-probability sequences, such as those containing Cyrillic or accented letter substitutes (line 680), or misspelling words to avoid the constraint (line 678). This occasionally causes a noticeable drop in readability and coherence of the text as a whole (lines 783-788). While AprAD does distort the probability distribution to some extent, its backtracking behavior means that it typically avoids the most extreme low-probability sequences, making the difference in rater scores unsurprising.\\n\\n> Additional synthetic task\\n\\nThank you for the suggestion. 
We believe that all methods would exhibit behavior where they quickly exhaust their vowel budget, and then perform almost identically as with the original lipogram task. \\n\\nAs the reviewers point out, this would almost certainly occur with constrained generation. Unconstrained generation would likely continue to generate many more vowels. In theory, ASAp will \\\"strategically\\\" allocate the vowels throughout the generation to their maximum-likelihood positions, but this would require an unattainably large computation and memory budget. Instead, because the prefix now contains several additional tokens before obtaining a counterexample, each counterexample represents a lower cumulative probability, meaning that ASAp probably doesn't get as far after exhausting the vowel budget, given the same amount of computation. AprAD is not immune either; it would likely exhaust the vowel budget soon as well. However, its continued generation after doing so would likely still be of quality similar to that of the lipogram task.\\n\\n> Implementation of addBadExample\\n\\nWe cache the probabilities in a trie. The addBadExample function iterates from the counterexample leaf node, up to the root, maintains the conditional probability of the counterexample given the prefix represented by a specific trie node, and subtracts this conditional probability. The probabilities are normalized when later queried (rather than normalized immediately, due to floating point issues). We will add these, and additional implementation details to the paper or appendix.\"}", "{\"metareview\": \"The paper proposes a novel method for constrained decoding in LLMs that balances output distortion and computational efficiency. The paper is well-written and presents ideas clearly, making it approachable for readers not familiar with the area. 
It also tackles an important problem in LLM generation and provides a neat, effective, and principled solution.\\n\\nHowever, the reviewers also point out that the proposed method, AprAD, is seen as an incremental combination of ASAp and speculative decoding, and lacks sufficient novelty. The improvement over existing methods like ASAp is not substantial enough, especially considering the increased computational cost. The constraints used in the experiments are somewhat arbitrary and differ from those used in other constrained decoding works. The paper lacks a dedicated \\\"Related Work\\\" section and is too short, which might give the impression that it is not thoroughly written.\\n\\nFor these reasons, overall, the reviewers felt the paper is slightly below the acceptance threshold in its current state.\", \"additional_comments_on_reviewer_discussion\": [\"The authors clarified the ambiguity in the title by adding descriptive words like \\\"constrained\\\" and \\\"unbiased\\\".\", \"They acknowledged the lack of a dedicated \\\"Related Work\\\" section and the paper's short length, promising to add clarifications and address reviewer comments in the revision.\", \"They defended the novelty of their work, emphasizing the non-trivial nature of combining existing ideas to solve an important problem and highlighting AprAD's position on the spectrum between constrained generation and ASAp.\", \"They clarified the performance of AprAD compared to ASAp, stating that AprAD is intended as a useful midpoint between constrained generation and ASAp, representing a reasonable default that can be overridden when necessary.\", \"They explained the performance difference of ASAp between Table 3 and Figure 2, attributing it to the likelihood of encountering violations in the lipogram task versus the BigCodeBench task.\", \"They addressed concerns about the lipogram evaluation, clarifying the randomization process and defending the reliability of the human rater scores.\", \"They 
discussed the implementation of the addBadExample function, detailing the caching of probabilities in a trie and the process of subtracting conditional probabilities.\"]}", "{\"title\": \"Response to Reviewer xsaW\", \"comment\": \"We thank reviewer xsaW for their thoughtful feedback and helpful suggestions. We respond to each of these below, and will incorporate the corresponding changes into the paper.\\n\\n> Regarding posterior estimation approaches\\n\\nFor a task such as lipogram generation, the posterior probability of constraint satisfaction is very close to 0 for almost all prefixes, so estimation techniques generally have difficulty with this estimate. Furthermore, especially for longer generations, the posterior probability may or may not be influenced by anything inherent in the prefix text.\\n\\nFor example, there is nothing inherent about the prefix \\\"Long ago\\\" compared to the prefix \\\"In a galaxy far away\\\" that makes the remainder of the story more or less likely to contain a letter 'e'. In either case, the probability of it doing so is almost exactly 1. In contrast, a misleading comment during code generation can cause the LLM to hallucinate a method in the following line.\\n\\nWe found during preliminary experiments that Lew et al. 2023 and Zhang et al. 2024 likely both suffered from estimation issues on the lipogram task. Additionally, Lew et al. 2023 requires a potentially large number of samples to estimate the posterior---especially when all probabilities are near-zero---negating any overhead benefit, even with the performance optimizations introduced with that method. Zheng et al. 2024 requires a separate training step to obtain the HMM, and is unable to express the constraint of the BigCodeBench task. For these reasons, we chose to focus our comparisons against ASAp and constrained generation. 
We will add a discussion of the task-dependent pros and cons between sampling-based and posterior estimation-based methods in the paper, and describe factors that a practitioner may consider when deciding on a method. \\n\\n> Differs from examples in ASAp\\n\\nASAp excels in tasks where there are a few high-probability errors, but its evaluation tasks don't include cases where there are dense low-probability errors. We chose lipograms due to this factor, as well as its straightforward explanation and intuitive evaluation. We included BigCodeBench as a difficult real-world code generation problem where the solution requires use of libraries, and hence where generation can benefit from hallucination detection and avoidance. \\n\\n> FUDGE\\n\\nWe were unaware of FUDGE; thank you for pointing it out. It is a relevant work, and we will discuss it in our section about posterior estimation methods and tradeoffs between different approaches. While we haven't run experiments on FUDGE, the discriminator would need to learn to distinguish posterior probabilities extremely close to zero for the lipogram task; as discussed above, this probability would largely depend on arbitrary behavior in the language model rather than on the content of the prefix. While FUDGE would likely succeed on the BigCodeBench task, it would require an additional training step to obtain the discriminator, and its performance would be dependent on the discriminator's generalization ability.\\n\\n> Membership in $\\\\mathcal{B}$ must be determinable by any prefix\\n\\nThis is true, but we do not consider this a major limitation. For example, in the Python example where it is impossible to determine whether `example(foo.bar` is a hallucination (line 367), it is still valid to reject the generation after a close parentheses is generated without defining `foo`. 
If the close parentheses are high-probability, AprAD will likely backtrack several tokens, rather than trying to generate a low-probability completion with (for example) a generator expression as constrained generation must do.\\n\\n> It's stated that the main weakness of ASAP\\u2026 Every generation violating the constraint must be added to $B$\\n\\nLike with ASAp, when AprAD generates a sample that does not violate the constraint, that sample is accepted and returned, even if $B \\\\neq \\\\mathcal{B}$. AprAD will usually require finding fewer violating samples before finally succeeding, because it usually won't throw away the entire prefix of tokens that has been generated so far.\"}", "{\"summary\": \"This paper focuses on the constrained decoding (or completion) problem. The traditional per-token constrained decoding algorithm will violate the distribution, largely deviating from the ideal distribution; the ASAp algorithm can solve this issue, but may need much computation. This paper proposes to use a per-token approximation of ASAp, which is called AprAD. The key idea is to keep as many of the pre-sampled tokens as possible, and use speculative sampling to adjust the distribution (the speculative sampling algorithm is an existing rejection sampling approach). AprAD achieves a trade-off between computational efficiency and keeping an unbiased distribution.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"## Originality\", \"The framework of AprAD is original.\", \"The generalization idea in the conclusion is interesting and insightful.\", \"## Clarity\", \"The algorithm blocks are friendly to readers.\", \"The intuition is clearly expressed.\", \"## Significance\", \"AprAD improves the performance and efficiency of constrained decoding.\", \"The effect shown in Figure 2 looks great.\"], \"weaknesses\": [\"## Major\", \"Lack of related work. There is no \\\"related work\\\" section. 
Since the paper has not exceeded the 9-10 page limit, it is strongly recommended to add a \\\"related work\\\" section.\", \"Lack of novelty. The contribution is a bit marginal, which is an incremental combination of ASAp and speculative decoding.\", \"The improvement is not good enough. As can be seen from Table 3, AprAD may underperform ASAp in certain tasks, while offering only about a 1.4-times efficiency improvement.\", \"## Minor\", \"The title \\\"Approximately Aligned Decoding\\\" is too ambiguous. It would be better to add more descriptions like \\\"constrained\\\", \\\"unbiased\\\", \\\"speculative sampling\\\".\", \"*The reviewer thinks that this work is not ready to be presented at a top-tier deep learning conference like ICLR, and recommends submitting to ACL after refinement.*\"], \"questions\": [\"Why does ASAp perform so well in Table 3, while being much worse in Figure 2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xsaW (continued)\", \"comment\": \"> Related to the above points, the cited weakness of the posterior estimation approaches\\u2026\\n\\nThis is true; however, the effect is significantly less pronounced with AprAD than with other methods for any given constraint. It is still able to make progress in densely-constrained environments, like with constrained generation, but the probabilistic backtracking behavior helps AprAD avoid the worst artifacts that constrained generation produces. It does not rely on an accurate estimation of the posterior, which may be extremely close to zero. Even in extreme cases with constraints even more restrictive than lipogram, it may still be possible to use a variant of AprAD, as noted in the discussion about introducing a parameter with reviewer WHQc.\\n\\n> L304-309\\n\\nConstraint intent also captures dropping letters, such as the 'a' in \\\"computionl\\\" in Figure 2. 
Additionally, if an inexperienced user is using a system like this, they shouldn't need to specify all possible edge cases to stop the LLM from circumventing their clear intent.\"}", "{\"title\": \"Response to Reviewer WHQc\", \"comment\": \"We thank reviewer WHQc for recognizing the importance and novelty of our work, and for the insightful suggestions on improving the presentation. We will incorporate the changes in the paper.\\n\\n> This work tackles an important task of decoding from language models under constraints\\u2026which is an interesting topic in the larger picture.\\n\\nThank you very much for the comments and we completely agree with the reviewer's assessment. We will enhance the introduction section accordingly.\\n\\n> Throughout the entire paper, the idea is presented as a combination of ASAp and speculative sampling. \\n\\nWe agree with the reviewer on this. The main commonality between our proposal and speculative sampling is the backtracking algorithm that is able to retain an existing sequence as far as possible, yet conform to a new distribution. There are many differences, as the reviewer pointed out: the \\\"target model\\\" in our method is not a model, but rather cached, modified, and re-normalized probabilities; we start from the beginning rather than working on draft segments; and our focus is not on parallelization-induced efficiency, but rather efficiency from smart backtracking. We will enhance the presentation accordingly.\\n\\nWe can also move Appendix C into the main text and discuss how ASAp and constrained decoding fit the spectrum of the generalized framework.\\n\\n> It would be interesting to have a complete pseudocode of all related techniques. \\n\\nWe agree on the value of including details of how the algorithm is implemented in a larger system. 
Indeed, the conditional probabilities of each step along the way are pre-computed and saved in our implementation, so there is no need to re-evaluate the LLM as if SpecSample were used without modification. We plan to expand the pseudocode with additional details to distinguish our method from conventional speculative sampling, and to include additional information about caching and our data structures in the appendix.\\n\\n> Line 225\\n\\nThank you for pointing this out; we will fix this.\\n\\n> The experiments are designed to meet the assumptions of the methods.\\n\\nWhile the lipogram experiment is admittedly synthetic to demonstrate the difference between different methods, we believe that the BigCodeBench task represents a real-world use case. The results, albeit less dramatic, represent practical values in an AI application, as every reduction in hallucination translates to developer productivity gain.\\n\\n> In AprAD, are the probability tables that lead to the current decoding cached?\\n\\nYes.\\n\\n> Is the caching the main reason why it requires fewer evaluations of the language models?\\n\\nNo. ASAp uses a similar caching mechanism; the reduction in generation ratio is because AprAD reuses part of the already-generated prefix. \\n\\n> Is it possible to introduce hyperparameters to control the behaviors of the proposed approach?\\n\\nYes---while we did not run extensive experiments on this, we believe that the best place to introduce such a hyperparameter is likely as an exponent to $r$ in line 3 of Algorithm 2 (SpecSample). Let's call the hyperparameter $h$, so this line becomes $r \\\\leftarrow (P(\\\\ldots) / S(\\\\ldots))^h$. \\n\\nOf course $h=1$ gives AprAD unmodified. $h=0$ means that $r$ always equals 1; this procedure yields behavior equivalent to constrained generation. As $h$ approaches infinity, $r$ trends towards $0$, yielding behavior equivalent to ASAp. 
Any values of $h$ between these two extremes will trade speed and conformance to the distribution. \\n\\nWe will include a discussion of this in the paper or an appendix.\"}", "{\"title\": \"Response to Reviewer 92k1\", \"comment\": \"We thank reviewer 92k1 for their insightful review and accurate summary of our contributions. We do believe that this paper represents an effective and principled solution to an important problem in AI applications.\\n\\n> Line 199\\n\\nCorrect, Deterministic Finite Automata. We will clarify this in the paper.\"}", "{\"title\": \"additional questions\", \"comment\": [\"Hi, after reading other reviewers' comments and checking some details, I have some additional questions:\", \"After checking the experiment details, I find the metric for lipogram not trustworthy: only 4 human raters (small number), no background introduction (might be biased). Therefore, in the trivial Lipogram tasks, it is not reliable whether the improvement makes sense at the cost of 4 times the computation cost compared with constrained decoding, especially given that AprAD is still biased. A way to resolve this concern is to conduct some synthetic tasks like \\u201cno more than $3$ words containing $A$\\u201d, then constrained decoding immediately cannot work, and the advantages can then be presented in a reliable way.\", \"How is addBadExample implemented for an LLM? The renormalization propagation is not trivial for large models. Could you provide some insights?\"]}", "{\"comment\": \"Thank you for the additional information, especially about the hyperparameter $h$ to control the behaviors. I do not have additional questions and am eager to see the revision.\"}", "{\"summary\": \"This paper focuses on the problem of constrained generation from autoregressive language models, where some prefixes are considered as errors that should be excluded. Following this condition, the entire language model should be renormalized, in order to sample correctly. 
However, it is non-trivial to renormalize a language model in a huge sample space. Previous works either fail to renormalize, or require multiple rounds of trial-and-error to produce a meaningful sample. The recently proposed ASAp falls under the second category. This paper improves ASAp by introducing backtracking into the sampling procedure, reducing the computational overhead while retaining some renormalization. The backtracking is technically the same as the speculative sampling procedure, but uses the current language model as the speculative model.\\n\\nThe experiments include analysis on a synthesized dataset and evaluation on lipograms and hallucination avoidance tasks. All tasks show that while the proposed method, called AprAD, performs comparably to ASAp, it requires fewer model evaluations, thanks to the introduced backtracking procedure.\", \"soundness\": \"4\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"I feel that this paper is undersold given its current presentation. The proposed method will be useful, but I would not argue for acceptance now.\", \"This work tackles an important task of decoding from language models under constraints. The task is crucial for responsible and safe control of large language models. In practice, the downstream tasks, with their own safety or legal constraints, are usually developed separately from the training of the models. The proposed method explores effective modification of pretrained language models, which is an interesting topic in the larger picture.\", \"The ideas are clearly presented, with fair comparisons that support the claims. The discussion of related works defines the position of the ideas. 
Again, I think the proposed method is useful, and should be published somewhere.\"], \"weaknesses\": [\"My main issue with the work is its presentation, which makes it weaker than it should be.\", \"Throughout the entire paper, the idea is presented as a combination of ASAp and speculative sampling. But I think it has its own merits that are different from speculative sampling. First, the main point of speculative sampling is parallelization but the proposed approach focuses more on backtracking. During the backtracking in this work, there is actually no need to evaluate the language models as the probability tables are already obtained. This leads to a completely different version of \\\"speculative sampling\\\". However, the differences are not discussed or presented in the manuscript. Second, the backtracking in this work always goes to the beginning. Combined with the sampling under ASAp, the actual algorithm is more complicated than its current form. It would be interesting to have a complete pseudocode of all related techniques.\", \"In line 225, I think $\\\\hat{P}^{B}$ is the speculative model and $\\\\hat{P}^{B\\\\cup \\\\\\\\{x\\\\\\\\}}$ is the target model. The two arguments should be swapped.\", \"The experiments are designed to meet the assumptions of the methods. It would be more interesting to include a real experiment that makes real impact.\"], \"questions\": [\"In AprAD, are the probability tables that lead to the current decoding cached? Is the caching the main reason why it requires fewer evaluations of the language models?\", \"I imagine sometimes the user wants softer constraints and sometimes the user values faster generation. 
Is it possible to introduce hyperparameters to control the behaviors of the proposed approach?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The topic of this paper is how to efficiently generate text from LLMs such that the generated text avoids undesirable outputs. The paper is well written and gradually introduces the necessary concepts.\\n\\nFirst, the authors introduce the trivial autoregressive generation of text token by token. Then they introduce speculative decoding, which uses an LLM and a small speculative model (SSM). The introduction of speculative decoding is necessary because the final method introduced by the authors uses it. After this the authors describe the current methods and their drawbacks. They formalize the set of undesirable strings B (that can be of infinite size) and they require the property that, if a string x belongs to B (is undesirable) then all strings that have x as a prefix also belong to B (are undesirable). This assumption might require a careful design of B (see lines 94-98 for the discussion). First, rejection sampling is introduced - a sampling where we sample the text and resample it until it generates a string not belonging to B. This might be expensive for obvious reasons - when most generated strings happen to be in B, we have to resample many times. Then the paper introduces constrained generation, where we generate as normal except, when the next symbol creates a string in B, we reject the symbol and consider only symbols that yield sequences not in B. As the authors describe it, this can amplify unlikely generations because we commit to perhaps unlikely prefixes during the generation (lines 149-153). Then the authors describe the method known as Adaptive Sampling with Approximate Expected Futures (ASAp) (Park et al., 2024), where the method keeps sampling until a bad sample is encountered. 
Then conditional probabilities are computed to avoid the bad sample and the process is repeated. The hope is that the encountered bad samples are much fewer than the entire set B and the process ends fast. This, however, might not happen, especially when a lot of errors must be discovered and added to the bad sample set. The paper also introduces a somewhat related method of posterior estimation, which has been used in previous works.\\n\\nFinally, the authors introduce their method (Approximately Aligned Decoding, or AprAD) which combines ideas from speculative decoding and ASAp. The main idea is that regular decoding and decoding where we condition out a bad sample are very close in distribution, and speculative decoding can be used to sample from the conditional distribution.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The authors evaluated the proposed method on synthetic and real data. For the synthetic data, the authors consider sequences consisting of the letters A, B, and C. They define various error sets and measure the KL divergence between the distribution for optimal generation as well as other methods. Their method comes out on top when considering the speed of generation (the generation ratio, which they define in the paper). The paper also considers experiments on more \\\"real\\\" data such as lipograms (texts omitting certain vowels) as well as BigCodeBench hallucination avoidance.\\n\\nThe paper is very approachable for somebody not in the area as it introduces the notions in a gradual fashion. They study an important problem and give a very neat algorithm for it. I recommend the paper to be accepted at the conference.\", \"weaknesses\": \"please see summary\", \"questions\": \"Small comment: on line 199: what is DFA? Deterministic finite automata? 
It would be nice to define it.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. I raise my rating to 5 now, and my final rating will depend on the revision.\"}" ] }
9WYMDgxDac
Sample then Identify: A General Framework for Risk Control and Assessment in Multimodal Large Language Models
[ "Qingni Wang", "Tiantian Geng", "Zhiyuan Wang", "Teng Wang", "Bo Fu", "Feng Zheng" ]
Multimodal Large Language Models (MLLMs) exhibit promising advancements across various tasks, yet they still encounter significant trustworthiness issues. Prior studies apply Split Conformal Prediction (SCP) in language modeling to construct prediction sets with statistical guarantees. However, these methods typically rely on internal model logits or are restricted to multiple-choice settings, which hampers their generalizability and adaptability in dynamic, open-ended environments. In this paper, we introduce *TRON*, a **t**wo-step framework for **r**isk c**o**ntrol and assessme**n**t, applicable to any MLLM that supports sampling in both open-ended and closed-ended scenarios. *TRON* comprises two main components: (1) a novel conformal score to **sample** response sets of minimum size, and (2) a nonconformity score to **identify** high-quality responses based on self-consistency theory, controlling the error rates by two specific risk levels. Furthermore, we investigate semantic redundancy in prediction sets within open-ended contexts for the first time, leading to a promising evaluation metric for MLLMs based on average set size. Our comprehensive experiments across four Video Question-Answering (VideoQA) datasets utilizing eight MLLMs show that *TRON* achieves desired error rates bounded by two user-specified risk levels. Additionally, deduplicated prediction sets maintain adaptiveness while being more efficient and stable for risk assessment under different risk levels.
[ "generative models", "calibration/uncertainty", "inference methods" ]
Accept (Spotlight)
https://openreview.net/pdf?id=9WYMDgxDac
https://openreview.net/forum?id=9WYMDgxDac
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zWsJcoxNdK", "zP2PvCxOPK", "vtU1kk2FcC", "vIL1R9mDA5", "u207qVxY4v", "qnxYLbc7n0", "qbXwsnMr7v", "q2b3lc8Oa3", "pYJeLXytTX", "or65OPhNvB", "kYKjcxCtIi", "kQkcdeNVWe", "erhB1oE5e1", "eOXm1wdUOR", "e844uPcbL5", "b05AuUnLAU", "Zw8xobqhwr", "Zrqx6zVC2R", "ZPHny8AL9c", "YnfzBHbl1j", "YIglSHPaQb", "Vx27c3KGM6", "Vw5BXoRw5B", "RkS8VBPeC4", "NCi136tg0W", "LVLZ65ylg3", "LEgXp1DvKD", "HDm6dTctcU", "CCoX3sEqsz", "9yAv5JV4dP", "8FE6XR9zZW", "36rLhEqaKu", "2qs5oJdouM", "1zXY9H70vl" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732535309716, 1732184271061, 1730380703242, 1730821862559, 1732692374748, 1732159959962, 1731606078194, 1731692656695, 1730605898947, 1731757210512, 1732162975101, 1737523624788, 1732533720357, 1732522473142, 1732524561842, 1732163331763, 1732493019336, 1732493086632, 1732993745254, 1732360288397, 1732277962726, 1732692726553, 1734382066341, 1731750743982, 1731753038665, 1732493144426, 1732359317514, 1730721010850, 1732348778121, 1732161365876, 1731606253666, 1731750789654, 1732622645864, 1732618811652 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_pRzK" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_amj3" ], [ 
"ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_Tw6D" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_pRzK" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_rC7h" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_pRzK" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Area_Chair_uhBp" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_rC7h" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ], [ "ICLR.cc/2025/Conference/Submission4201/Reviewer_Tw6D" ], [ "ICLR.cc/2025/Conference/Submission4201/Authors" ] ], "structured_content_str": [ "{\"title\": \"Appreciation for Recognition\", \"comment\": \"Thank you for your recognition and encouragement. We appreciate your valuable insights and will strive to make a broader impact in our future work. 
Thanks again for taking the time to review our responses and revision.\"}", "{\"title\": \"Supplementary Response to Weakness 1\", \"comment\": \"For Weakness 1, we test two additional methods to evaluate the semantic equivalence between different responses: (1) Rouge-L and (2) SentenceTransformers (ST) utilizing DistillRoBERTa as the backbone [1][2]. Rouge-L deems two responses as semantically equivalent if their longest common subsequence is larger than a threshold. DistillRoBERTa outputs the semantic similarity score between two responses. Following prior work [1][2][3], we adopt two thresholds: 0.5 and 0.7, for both metrics.\\n\\nThen, we compare bidirectional entailment (BE) based on DeBERTa-large-mnli with the two methods. Since these three methods all operate in the second phase of TRON (i.e., identify), we set alpha to 0 and employ calibration and test data points that cover acceptable responses in their candidate sets. We evaluate the empirical miscoverage rate (EMR) when beta is set to a strict value, 0.1. Results on the VMME dataset utilizing the Gemini-1.5-Pro model are shown below:\\n\\n| Methods | BE | Rouge-L (0.5) | Rouge-L (0.7) | ST (0.5) | ST (0.7) |\\n|---------|--------|---------------|---------------|----------|----------|\\n| EMR | 0.0847 | 0.0870 | 0.0902 | 0.0829 | 0.0848 |\\n\\nThe guarantees of the second phase of TRON were not compromised by the changes in the semantic equivalence evaluation methods.\\n\\nFurthermore, in work [4], bidirectional entailment based on DeBERTa-large-mnli is also used within the conformal prediction framework. \\n\\nIn the last paragraph of Appendix C, we have provided a detailed explanation of using bidirectional entailment to determine semantic equivalence.\\n\\nWe would like to know if we have addressed your comments, or if there is anything else we can help to clarify. 
Thank you again for your helpful comments, and for taking the time to review our work.\\n\\n---\\n\\n### References\\n\\n[1] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models (ACL 2024)\\n\\n[2] Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond (Engineering Applications of Artificial Intelligence, 2024)\\n\\n[3] Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models (TMLR 2024)\\n\\n[4] Addressing Uncertainty in LLMs to Enhance Reliability in Generative AI (NeurIPS 2024 Workshop SafeGenAi)\"}", "{\"summary\": \"The paper presents **TRON**, a two-step framework designed for **risk control and assessment** in multimodal large language models (MLLMs), particularly for **Video Question Answering (VideoQA)** tasks. Addressing challenges in dynamic and open-ended environments, TRON leverages **Split Conformal Prediction (SCP)**, introducing a **conformal score** for sampling response sets and a **nonconformity score** for identifying high-quality responses. Through experiments on several VideoQA datasets, TRON demonstrates the ability to achieve desired error rates across various **user-specified risk levels**. It is also noted for exploring the concept of **semantic redundancy** in prediction sets as an evaluation metric, an area not previously investigated in open-ended contexts.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. **Innovative Framework**: The introduction of TRON, a two-step risk control and assessment framework, contributes significantly to the field of MLLM evaluation in both open-ended and closed-ended VideoQA tasks. Its flexibility in applying conformal prediction in open-ended contexts is commendable.\\n\\n2. 
**Novel Conformal and Nonconformity Scores**: The paper proposes a unique conformal score for setting the minimum sample size in open-ended tasks and a nonconformity score based on self-consistency theory. These scores provide a rigorous approach to risk control.\\n\\n3. **Addressing Uncertainty with Redundancy Analysis**: The evaluation of **semantic redundancy** in open-ended settings introduces a new angle to uncertainty measurement, providing a promising metric that complements traditional accuracy.\\n\\n4. **Comprehensive Experimental Evaluation**: The experiments span multiple datasets, risk levels, and different types of MLLMs, offering a thorough assessment of TRON\\u2019s effectiveness in diverse conditions.\", \"weaknesses\": \"1. **Limited Discussion of Practicality and Adaptability**: While TRON provides theoretical guarantees, the practical aspects, such as computational overhead and applicability in real-world scenarios, could be discussed in greater depth.\\n\\n2. **Insufficient Baseline Comparisons**: The paper lacks a comparison with other standard risk control methods or frameworks that may be relevant, particularly in closed-ended settings or previous SCP applications in MLLMs.\\n\\n3. **Complexity in Method Presentation**: Some sections of the methodology, particularly the derivation of the conformal and nonconformity scores, lack clarity, which could challenge readers unfamiliar with SCP.\\n\\n4. **Inconsistent Evaluation Details**: Although the experiments are extensive, details on some evaluation metrics, like APSS and their implications for different risk levels, could be better explained. The choice of models and how each metric was applied in open-ended versus closed-ended settings was also not consistently clarified.\", \"questions\": \"1. **What is the expected computational impact** of using TRON in large-scale applications or real-time risk assessment tasks? 
Could you provide a more detailed explanation or metrics regarding the processing time required for each step?\\n\\n2. **Baseline Method Comparison**: Have you considered comparing TRON with simpler risk control baselines? For instance, would a heuristic-based risk control method suffice for certain types of tasks in closed-ended VideoQA? If not, could you clarify how TRON performs specifically better in such cases?\\n\\n3. **Semantic Redundancy in Open-Ended Tasks**: How does the semantic redundancy analysis handle responses that may be lexically distinct but only partially semantically equivalent? Could this approach potentially overlook responses with subtle but important semantic differences?\\n\\n4. **Alternative Conformal and Nonconformity Scoring Methods**: Could the proposed conformal and nonconformity scores be further enhanced by alternative methods, such as clustering-based or confidence-based approaches beyond self-consistency theory? If so, what would be the implications for TRON\\u2019s current framework?\\n\\n5. **Additional Real-World Validation**: Have there been any attempts to validate TRON in real-world or industry-specific VideoQA tasks, possibly in collaboration with industry partners? If so, could you share any preliminary insights on its practical performance and potential adaptations? If not, do you plan to include such validation in future work?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a two step risk control based framework extending split conformal prediction method for open ended and multimodal (videoQA) tasks. The method applies a conformal score to calibrate the minimum number of responses (samples) needed to ensure coverage. This score defines the prediction set\\u2019s error rate at a risk level alpha. 
It then refines the set using frequency or semantic similarity to identify high-quality responses, controlled by another risk parameter beta. The overall risk is bounded by a function of alpha and beta to provide statistical consistency guarantees using calibration, sampling, and refinement steps. The paper builds upon multiple conformal risk control based methods and addresses their shortcomings with this two-step method (maintaining a smaller average prediction set size (APSS) at lower risk thresholds), applying them to the multimodal VideoQA setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper's two-step risk control methodology addresses a general shortcoming of the conformal prediction method and provides statistical guarantees for error rates, increasing the reliability of MLLM responses.\", \"Since open-ended tasks are more challenging due to the large number of possible generations, this method (two-step approach, use of semantic similarity) seems to dynamically adapt well to provide flexible prediction set sizes for complex and generic generative scenarios (although we need more experimental validation). Error stays within bounds even after the filtering step (bounded by alpha + beta - alpha*beta).\", \"As shown in Figure 3, deduplication of the semantically similar responses helps with more stable error rates and smaller prediction sets. Experiments suggest that semantic diversity can create smaller, more efficient prediction sets (lower APSS) without compromising on accuracy (EER stays within limits).\"], \"weaknesses\": [\"Authors already mention under limitations that guarantees are not conditional to individual data points but marginal over the test set. 
With this limitation, it may still be a bottleneck where risk compliance guarantees are needed for critical applications requiring more stringent guarantees and/or compliance requirements.\", \"More open-ended evaluations and experiments on the open-ended datasets would have shed more light on the strengths and weaknesses of the methods (like Fig 4b). This is a key innovative strength of the method to address open-ended tasks (unlike MCQ), and adoption of this method will depend heavily on understanding the strengths of this method in more open-ended generation tasks.\"], \"questions\": [\"Do you foresee any major modifications needed for TRON to control risk in scenarios involving distribution shifts, where calibration and test distributions differ?\", \"Would it be feasible to incorporate dynamic adjustments to prediction set sizes based on task difficulty or user preferences in real time? Are there challenges with balancing efficiency and robustness in such a dynamic setting?\", \"Could access to model internals like logits help improve TRON's performance?\", \"Could reliance on frequency-based nonconformity scores lead to biases in the types of responses included in the prediction set, especially in cases where the model\\u2019s sampling is limited?\", \"Have you observed any variance in EER across different model architectures or response generation and sampling methods (for example, cases where EER can go outside the bounds set by alpha and beta)?\", \"How easy is it to generalize this approach beyond VideoQA to other open-ended tasks? Are there any major limitations or requirements for generalization?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Appreciation for Feedback and Recognition\", \"comment\": \"Thank you for taking the time to review our responses. We appreciate your valuable feedback and constructive insights throughout this process. 
Thanks again for your recognition.\\n\\nBest regards,\\n\\nAuthors of Paper 4201\"}", "{\"title\": \"Rebuttal Revision\", \"comment\": [\"We would like to thank you for your detailed and considerate reviews of our paper. We have uploaded a modified version of our paper that incorporates your comments.\", \"In the main text, we add a conditional performance analysis of TRON utilizing the size-stratified miscoverage rate in Section 4.4, Paragraph Conditional Miscoverage Rate (Figure 6) for the applicability in real-world critical applications requiring more stringent guarantees. Empirical results demonstrate that the miscoverage rate varies at different set sizes, and the conditional performance of TRON significantly improves by integrating the semantic similarity information into the reliability measurement (e.g., the maximum miscoverage rate decreases from 0.2065 to **0.1667** on the VMME dataset when the upper bound is set to 0.19). ***(Corresponding to Weakness 1 and Question 3)***\", \"In Appendix A, we add a brief illustration of conformal prediction utilizing classification tasks.\", \"In Appendix H, we evaluate the generalizability of TRON to other generative language models in various open-ended tasks. We consider the **(1) open-book conversational QA** dataset, CoQA, and the **(2) closed-book reading comprehension** dataset, TriviaQA, utilizing the large language models (LLMs), LLaMA-3-70B-Instruct and LLaMA-3.1-70B-Instruct (from LangChain DeepInfra). Additionally, we employ PandaGPT-13B and GPT-4o on the Visual Question Answering (VQA) dataset for **(3) image understanding** tasks. Empirical results demonstrate that TRON can also provide risk management for both MLLMs and LLMs on image understanding, conversational QA, and reading comprehension tasks (Figures 8 and 9). 
***(Corresponding to Weakness 2 and Question 6)***\", \"Furthermore, we compare TRON with Least Ambiguous set-valued Classifiers (LAC) in closed-ended settings, which has been proven to produce prediction sets with the smallest average size. Evaluations on the MMLU dataset (MCQA) utilizing both LLaMA-3-8B-Instruct and LLaMA-3.1-8B-Instruct demonstrate that TRON is more prediction-efficient than LAC (Table 3).\", \"Overall, we hope that these changes have addressed your concerns. We would be grateful for the opportunity to engage further with you to discuss any remaining questions or concerns you may have.\"]}", "{\"title\": \"Responses to Reviewer amj3 Comments on Two Weaknesses\", \"comment\": \"> Weakness 1: Authors already mention under limitations that guarantees are not conditional to individual data points but marginal over the test set. With this limitation, it may still be a bottleneck where risk compliance guarantees are needed for critical applications requiring more stringent guarantees and/or compliance requirements.\\n\\n***Response:*** Thank you for pointing out the bottleneck in risk control. Conditional coverage is a stronger property than marginal coverage, and in the most general case, conditional coverage is impossible to achieve [1][2][3]. We have checked how close our method comes to approximating it utilizing the size-stratified coverage metric (i.e., the stratified coverage at each prediction set size) [2][4][5]. We utilize the Gemini-1.5-Pro model and set $\\\\alpha = \\\\beta = 0.1$. Empirical evaluations on four datasets indicate that the set size ranges from 0 to 3 after semantic deduplication. The results of the size-stratified miscoverage rate are shown in the table below. **We have added an evaluation of conditional performance in the rebuttal revision (Paragraph Conditional Miscoverage Rate and Figure 6 in Section 4.4)**. 
\\n\\n| **Dataset\\\\Set Size** | **1** | **2** | **3** |\\n|:---------------------:|:--------:|:--------:|:--------:|\\n| **VMME** | 0.1788 | 0.1531 | 0.2065 |\\n| **NEXT** | 0.0900 | 0.1778 | 0.2244 |\\n| **MUSC** | 0.1250 | 0.2286 | 0.1830 |\\n| **MSVD** | 0.0778 | 0.1222 | 0.1556 |\\n\\n> Weakness 2: more open-ended evaluations and experiments on the open-ended datasets would have shed more light on the strengths and weaknesses of the methods (like Fig 4b). This is a key innovative strength of the method to address open-ended tasks (unlike MCQ) and adoption of this method will depend heavily on understand the strengths of this method in more open-ended generation tasks.\\n\\n***Response:*** Thank you for your valuable suggestions for improvement. The key innovation of our method is the development of a conformity score that calibrates the number of samples, which approximates the candidate set as multiple-choice options in closed-ended settings at a user-specified risk level for the first time, addressing the issues of external significance level and sensitivity in prior studies [3][5]. Then, in the second step, based on self-consistency, we use frequency to replace logits and establish the prediction set. Observing the semantic redundancy in the prediction set in open-ended settings, we conduct deduplication and employ the calibrated set size for uncertainty estimation. As shown in Figure 4(b), we utilize the average set size to evaluate the uncertainty of MLLM when providing audio modality information at different levels of silence, to supplement the accuracy metric.\\n\\n**As we stated in the conclusion, our method is applicable to all generative language models and tasks, and we will evaluate our method on both vision-language models (VLMs) and large language models (LLMs). 
Because open-ended language generation tasks are slow to run, we will add the experimental setup and results to the rebuttal revision as soon as possible and notify you through an official comment.**\\n\\nWe hope these responses address the reviewer's concerns. If there are any aspects we have overlooked or misunderstood, please let us know. We would like to provide responses if the reviewer has further questions.\\n\\n---\\n\\n### References\\n[1] Vladimir Vovk. 2012. Conditional Validity of Inductive Conformal Predictors. In Proceedings of the Asian Conference on Machine Learning, PMLR.\\n\\n[2] Anastasios N Angelopoulos and Stephen Bates. 2021. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511.\\n\\n[3] Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S. Jaakkola, and Regina Barzilay. 2024. Conformal language modeling. In The Twelfth International Conference on Learning Representations.\\n\\n[4] Bhawesh Kumar, Charles Lu, Gauri Gupta, Anil Palepu, David Bellamy, Ramesh Raskar, and Andrew Beam. 2023. Conformal prediction with large language models for multi-choice question answering. arXiv preprint arXiv:2305.18404.\\n\\n[5] Jiayuan Su, Jing Luo, Hongwei Wang, and Lu Cheng. 2024. API Is Enough: Conformal Prediction for Large Language Models Without Logit-Access. In Findings of the Association for Computational Linguistics: EMNLP 2024.\"}", "{\"title\": \"Responses to Reviewer rC7h Comments on Weakness and Questions\", \"comment\": \"> **Response to the weakness:**\\n\\nThank you for your valuable insight. TRON is applicable to various generative language models and tasks as it formulates the criterion by analyzing the output distribution of the generative model. 
We have evaluated TRON on the **open-book conversational QA** dataset, CoQA [1], and the **closed-book reading comprehension** dataset, TriviaQA [2], utilizing the large language models (LLMs), LLaMA-3-70B-Instruct and LLaMA-3.1-70B-Instruct. Additionally, we employ PandaGPT-13B and GPT-4o on the Visual Question Answering (VQA) dataset [3] for **image understanding** tasks. Empirical evaluations have been added to Appendix H (Figure 8 and 9) in the rebuttal revision. The results demonstrate that TRON can also provide risk management for both MLLMs and LLMs on image understanding, conversational QA, and reading comprehension tasks.\\n\\n> **Response to question 1:** \\n\\n**Firstly**, since we randomly assign the calibration and test datasets, any sampling anomalies are expected to be uniformly distributed across both sets. **Secondly**, we use the same number of samples for both the calibration and test data. While sampling outliers may be present, the nonconformity score is inherently linked to the correct response. Given that the test and calibration datasets are exchangeable, the overall correctness coverage remains unaffected. However, this may result in an increase in the final average set size (prediction efficiency). **Additionally**, there will be some queries for which we cannot obtain a correct response no matter how many times we sample. We identify these as a distribution shift problem and exclude these queries. **Furthermore**, since we can derive the minimum number of samples on the calibration set at a user-accepted risk level, we can, as prior knowledge, appropriately increase the sampling frequency on the independent and identically distributed (i.i.d.) test data to improve the accuracy of the frequency-based confidence scoring. **For example, at a certain risk level $\\\\alpha$, we determine that the minimum number of samples required is $M$. 
At this point, we can appropriately increase the number of samples beyond $M$ to enhance the representational capability of frequency-based confidence scoring within the cost constraints, while maintaining the guarantee of the first stage (i.e., $\\leqslant \\alpha$).**\\n\\n> **Response to question 2:** \\n\\nIt is feasible under the condition of exchangeability. Based on real-time uncertainty measurements, we can exclude problems that the language model is unable to solve. Then we can set the conformal score of each calibration data point to be any number of samples that ensures the inclusion of the correct answer. At this point, we can set the number of samples for the test data to the maximum number of samples in the calibration set, which ensures that we will definitely sample a correct answer under exchangeable conditions. Then, we can use reliability assessment methods to identify high-quality responses.\\n\\nAdditionally, we can perform real-time uncertainty estimation within the candidate set while sampling. We can define the conformal score to be $r(X_i, Y_i)= \\inf\\lbrace M_i : Y_i \\in \\lbrace \\hat y_{m}^{(i)} \\rbrace_{m=1}^{M_i}, U(\\lbrace \\hat y_{m}^{(i)} \\rbrace_{m=1}^{M_i})=0 \\rbrace,$ where the $U$ function value is 0 when the uncertainty in the candidate set is below the user's requirement; otherwise, it is 1. However, this depends on the performance of the uncertainty assessment method.\\n\\nFurthermore, we can evaluate the uncertainty in the candidate set while sampling. When the uncertainty is below a certain threshold, we check whether the sampled response is correct, thus inferring the minimum confidence level or maximum uncertainty that allows for the inclusion of a correct response in the candidate set.\\n\\nThank you for your valuable suggestions to improve the paper. We would like to provide responses if we have misunderstood any points of your questions. 
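The first-phase calibration discussed in these responses (deriving the minimum number of samples $M$ at risk level $\alpha$) follows the standard split-conformal quantile recipe. A minimal sketch with hypothetical per-question sample counts (the function and variable names are ours, not from the paper's code):

```python
import math

def calibrate_min_samples(cal_scores, alpha):
    """Split-conformal calibration of the minimum sample number M.

    cal_scores[i] is the smallest number of generations drawn before an
    acceptable response appeared for calibration question i.  Under
    exchangeability, sampling M responses per test question then misses
    an acceptable response with rate at most alpha.
    """
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # finite-sample adjusted rank
    if k > n:
        raise ValueError("alpha too small for this calibration set size")
    return sorted(cal_scores)[k - 1]

# hypothetical calibration scores for ten questions
scores = [1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
print(calibrate_min_samples(scores, alpha=0.2))  # -> 8
```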
\\n\\n\\n---\\n\\n### References\\n\\n[1] Reddy S, Chen D, Manning C D. Coqa: A conversational question answering challenge[J]. Transactions of the Association for Computational Linguistics, 2019, 7: 249-266.\\n\\n[2] Joshi M, Choi E, Weld D S, et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 1601-1611.\\n\\n[3] Goyal Y, Khot T, Summers-Stay D, et al. Making the v in vqa matter: Elevating the role of image understanding in visual question answering[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 6904-6913.\"}", "{\"summary\": \"The paper deals with risk control and assessment for MLLMs. To address the issues of existing work relying on the internal model logits and working in the multiple-choice setting, the authors propose a TRON, a two-step framework for MLLMs supporting sampling for both open-ended and close-ended scenarios. TRON allows controlling error rates by sampling response sets of minimum size and identifying high-quality responses using self-consistency theory. The experiments on VideoQA demonstrate the efficacy on eight MLLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is easy to follow and understand.\", \"The proposed method extends SCP to open-ended scenarios by estimating the confidence from frequency.\"], \"weaknesses\": [\"The confidence estimation in Step 2 relies on the prediction of another model(e.g. DeBERTa-large-mnli). Then it should be at least discussed on the reliability as the semantic classifier. Otherwise, it makes the identification of risk control less convincing.\", \"It is unclear how the silence percentage is conducted on the audio, and how the conclusion \\u2018introduce audio modality enhances the confidence level\\u2019 (Line371-372) is made. It is shown in Fig. 
4 that increasing SPs leads to higher APSS. Also, introducing audio modality seems to only improve VideoLLaMA with SP <50%.\", \"It seems that the proposed method is general to LLMs as well. How does the proposed method work for LLMs for both open-ended and closed-ended scenarios? The paper claims this advantage but shows no experimental results. This would make the paper more impactful.\", \"How is the proposed method compared with existing methods for LLMs on tasks such as MCQA? Such a comparison would make the paper more comprehensive.\"], \"questions\": [\"It seems that the best practice of the ratio of the calibration and test set is model-dependent. Is there any insight on the ratio selection when applied to different MLLMs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Reviewer pRzK Comments on Five Questions\", \"comment\": \"> **Response to question 1:**\\n\\nIn practical applications where calibration data has already been set up, TRON requires obtaining M responses from the model output space based on the minimum number of samples derived from the calibration data. This process has a lower computational cost compared to beam search because we use multinomial sampling.\\nThe main computational cost lies in evaluating the reliability of each response, which depends on the risk requirements of the practical application. In high-risk applications, such as medical diagnosis, it is clearly insufficient to evaluate each response solely based on frequency. Additionally, if there is a distribution shift problem, meaning the exchangeability condition is not met, we need to deploy more powerful models to mitigate the issue of uneven distribution, as we primarily rely on analyzing the model's output distribution. 
If we can access the model's internal logit information, we can also increase the computational cost to calculate the entropy of each response, thereby improving TRON's prediction efficiency. \\n\\n> **Response to question 2:** \\n\\nWe have added a comparison of the prediction efficiency of TRON and Least Ambiguous set-valued Classifiers (LAC), which has been proven to produce prediction sets with the smallest average size in closed-ended settings in **Appendix H (Table 3)**. Empirical results demonstrate that TRON produces more efficient predictions. Furthermore, TRON is similar to **a bag of tricks**. As mentioned in the last paragraph of Section 3.2, the reliability function $F$ can be any measurement that reflects the trustworthiness of each response. TRON provides, for the first time, a guarantee for the miscoverage rate in language generation tasks in open-ended settings. Within this broader framework, we can individually optimize each step to enhance the overall risk management capability. \\n\\n> **Response to question 3:** \\n\\n***Firstly***, following prior work [1][2], we utilize a Natural Language Inference (NLI) classifier with DeBERTa-large-mnli as the backbone to evaluate the semantic equivalence between two responses. As mentioned in Appendix C, If two responses are predicted to have a bidirectional entailment relationship, we consider that semantic redundancy has occurred. ***Secondly***, the approach may potentially overlook responses with subtle but important semantic differences, because there is no perfect method to evaluate the semantic relationship between two responses. At this point, we can combine methods such as the ROUGE-L score for multiple checks to achieve a more rigorous assessment of semantic equivalence, but this would impose an additional computational cost. 
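To make the ROUGE-L check mentioned in the response above concrete, here is a minimal sketch of LCS-based equivalence scoring (the threshold and example sentences are illustrative, not taken from our experiments):

```python
def lcs_len(a, b):
    # dynamic-programming longest common subsequence over token lists
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(ref, hyp):
    """ROUGE-L F1: harmonic mean of LCS-based precision and recall."""
    r, h = ref.split(), hyp.split()
    lcs = lcs_len(r, h)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(h), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

def semantically_equivalent(a, b, threshold=0.5):
    # deem two responses equivalent when their ROUGE-L F1 clears the threshold
    return rouge_l_f1(a, b) >= threshold

print(semantically_equivalent("a man is playing guitar", "a man plays the guitar"))  # -> True
```

Lexically distinct paraphrases like the pair above pass at the 0.5 threshold, while raising the threshold to 0.7 makes the check stricter, matching the two settings in our supplementary experiments.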
\\n\\n\\n> **Response to question 4:** \\n\\nYes, as mentioned in Section 4.4, by incorporating semantic similarity information and using Semantic Diversity as the reliability measurement, both the prediction efficiency and conditional performance of TRON are improved. Our motivation of the reference to self-consistency is that when constrained by a black-box setting, we can define the nonconformity score solely by analyzing the model's output distribution. A more accurate reliability measurement will enhance the performance of TRON.\\n\\n> **Response to question 5:** \\n\\nTRON can be combined with conformal risk control [3] and prompt risk control [4] to achieve task-specific performance control in various language generation tasks such as VideoQA and Vision-Language QA. Taking the task of diagnosing Parkinson's disease in medical imaging as an example, we can prompt a vision-language model (VLM) to identify the lesion areas in brain imaging slices. At this point, we can control the precision of the VLM's identification of lesion areas by defining the nonconformity score as the false discovery rate for each sample.\\n\\n---\\n\\n### References\\n\\n[1] Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation (ICLR 2023).\\n\\n[2] Detecting hallucinations in large language models using semantic entropy (Nature 2024).\\n\\n[3] Conformal Risk Control (ICLR 2024)\\n\\n[4] Prompt Risk Control: A Rigorous Framework for Responsible Deployment of Large Language Models (ICLR 2024)\"}", "{\"title\": \"Rebuttal Revision\", \"comment\": [\"We would like to thank you for your detailed and considerate reviews of our paper. 
We have uploaded a modified version of our paper that incorporates your comments.\", \"In the main text, we add a conditional performance analysis of TRON utilizing the size-stratified miscoverage rate in Section 4.4, Paragraph Conditional Miscoverage Rate (Figure 6) for the applicability in real-world critical applications requiring more stringent guarantees. Empirical results demonstrate that the miscoverage rate varies at different set sizes, and the conditional performance of TRON significantly improves by integrating the semantic similarity information into the reliability measurement (e.g., the maximum miscoverage rate decreases from 0.2065 to **0.1667** on the VMME dataset when the upper bound is set to 0.19).\", \"In Appendix A, we add a brief illustration of conformal prediction utilizing classification tasks.\", \"In Appendix H, we evaluate the generalizability of TRON to other generative language models in various open-ended tasks. We consider the **(1) open-book conversational QA** dataset, CoQA, and the **(2) closed-book reading comprehension** dataset, TriviaQA, utilizing the large language models (LLMs), LLaMA-3-70B-Instruct and LLaMA-3.1-70B-Instruct (from LangChain DeepInfra). Additionally, we employ PandaGPT-13B and GPT-4o on the Visual Question Answering (VQA) dataset for **(3) image understanding** tasks. Empirical results demonstrate that TRON can also provide risk management for both MLLMs and LLMs on image understanding, conversational QA, and reading comprehension tasks (Figures 8 and 9). ***(Corresponding to Weakness 3)***\", \"Furthermore, we compare TRON with Least Ambiguous set-valued Classifiers (LAC) in closed-ended settings, which has been proven to produce prediction sets with the smallest average size. Evaluations on the MMLU dataset (MCQA) utilizing both LLaMA-3-8B-Instruct and LLaMA-3.1-8B-Instruct demonstrate that TRON is more prediction-efficient than LAC (Table 3). 
***(Corresponding to Weakness 4)***\", \"Overall, we hope that these changes have addressed your concerns. We would be grateful for the opportunity to engage further with you to discuss any remaining questions or concerns you may have.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Reply to Authors' responses\", \"comment\": \"Thank you to the authors for their efforts and for addressing my concerns. I have considered this paper acceptable from the beginning and appreciate your responses to my questions. However, the scope and impact of the innovations presented in this work might be somewhat limited within the field. Therefore, I will retain my current score, but I am confident that the authors will produce even more impactful work in the future that deserves the highest recognition.\"}", "{\"title\": \"Reply to Authors' responses\", \"comment\": \"Thanks for the author\\u2019s response, which addressed my previous concerns. I have updated my score accordingly.\"}", "{\"title\": \"Appreciation for Feedback and Recognition\", \"comment\": \"Thanks for taking the time to review our responses and raising your score. We are pleased that our responses and revision address your concerns, which also improve the quality of this work.\"}", "{\"title\": \"Rebuttal Revision\", \"comment\": [\"We would like to thank you for your detailed and considerate reviews of our paper. We have uploaded a modified version of our paper that incorporates your comments.\", \"In the main text, we add a conditional performance analysis of TRON utilizing the size-stratified miscoverage rate in Section 4.4, Paragraph Conditional Miscoverage Rate (Figure 6) for the applicability in real-world critical applications requiring more stringent guarantees. 
Empirical results demonstrate that the miscoverage rate varies at different set sizes, and the conditional performance of TRON significantly improves by integrating the semantic similarity information into the reliability measurement (e.g., the maximum miscoverage rate decreases from 0.2065 to **0.1667** on the VMME dataset when the upper bound is set to 0.19). ***(Corresponding to Weakness 1)***\", \"In Appendix A, we add a brief illustration of conformal prediction utilizing classification tasks. ***(Corresponding to Weakness 3)***\", \"In Appendix H, we evaluate the generalizability of TRON to other generative language models in various open-ended tasks. We consider the **(1) open-book conversational QA** dataset, CoQA, and the **(2) closed-book reading comprehension** dataset, TriviaQA, utilizing the large language models (LLMs), LLaMA-3-70B-Instruct and LLaMA-3.1-70B-Instruct (from LangChain DeepInfra). Additionally, we employ PandaGPT-13B and GPT-4o on the Visual Question Answering (VQA) dataset for **(3) image understanding** tasks. Empirical results demonstrate that TRON can also provide risk management for both MLLMs and LLMs on image understanding, conversational QA, and reading comprehension tasks (Figures 8 and 9). ***(Corresponding to Weakness 1)***\", \"Furthermore, we compare TRON with Least Ambiguous set-valued Classifiers (LAC) in closed-ended settings, which has been proven to produce prediction sets with the smallest average size. Evaluations on the MMLU dataset (MCQA) utilizing both LLaMA-3-8B-Instruct and LLaMA-3.1-8B-Instruct demonstrate that TRON is more prediction-efficient than LAC (Table 3). ***(Corresponding to Weakness 2)***\", \"Overall, we hope that these changes have addressed your concerns. 
We would be grateful for the opportunity to engage further with you to discuss any remaining questions or concerns you may have.\"]}", "{\"title\": \"Have we addressed your concern?\", \"comment\": \"Dear Reviewer rC7h,\\n\\nThank you again for taking the time to review our paper and providing detailed feedback. As the end of the discussion period is approaching, we want to follow up to see if you have any additional questions we have not addressed, or if there is anything else we can help clarify. We have tried responding to the comments from your initial review, and we are more than happy to discuss any points further. Thank you!\\n\\nAuthors of paper 4201\"}", "{\"title\": \"Have we addressed your concern?\", \"comment\": \"Dear Reviewer Tw6D,\\n\\nThank you again for taking the time to review our paper and providing detailed feedback. As the end of the discussion period is approaching, we want to follow up to see if you have any additional questions we have not addressed, or if there is anything else we can help clarify. We have tried responding to the comments from your initial review, and we are more than happy to discuss any points further. Thank you!\\n\\nAuthors of paper 4201\"}", "{\"title\": \"Reply to Authors' responses\", \"comment\": \"I referred to the author's final revised version and implemented some experiments using your ideas. With your responses, I now have a more comprehensive understanding of the work. I hope you will continue your efforts. I have increased my score.\"}", "{\"title\": \"Summary of Author Response to All the Reviewers\", \"comment\": [\"We would like to thank all the reviewers for their insightful comments. We revised our paper based on the constructive feedback and suggestions from the reviewers. We marked the contents that already existed in the original submission (but may be missed by reviewers) in red, and those revised or newly added contents in blue in the revision. 
Our key responses are summarized as follows:\", \"**Additional explanations.**\", \"As Reviewer Tw6D suggested, we explained the Silence Percentage (SP) applied to the audio modality information in Section 4.3. In addition, we analyzed why it is necessary to assess the uncertainty based on the average prediction set size (APSS), in order to assist accuracy in the comprehensive evaluation of MLLMs. Moreover, we detailed the semantic equivalence method in Appendix C.\", \"As Reviewer pRzK suggested, we illustrated the base framework of split conformal prediction utilizing classification tasks in Appendix A. In addition, we explained the evaluation metrics (e.g., APSS) in Appendix F.\", \"**Additional experimental results.**\", \"As all four reviewers suggested, we generalized our framework to other models (e.g., large language models and vision-language models) on additional open-ended tasks (e.g., conversational QA, reading comprehension, image understanding) in Appendix H.\", \"As Reviewers amj3 and pRzK suggested, we evaluated the conditional performance of our framework for real-world critical applications requiring more stringent guarantees in Section 4.4. 
In addition, we discussed improving the conditional performance by utilizing more comprehensive reliability measures of model generations.\", \"As Reviewers Tw6D and pRzK suggested, we compared our framework with existing methods in closed-ended scenarios in Appendix H.\", \"**Additional insights.**\", \"As Reviewer amj3 suggested, we discussed future work on adapting our framework to language generation tasks under distribution shift.\", \"As Reviewer rC7h suggested, we discussed how our framework handles outliers in sampling size.\", \"As both Reviewers amj3 and rC7h suggested, we envisioned a feasible approach that further adapts our framework to dynamically adjust the sampling size based on real-time uncertainty measurements.\", \"As Reviewer Tw6D suggested, we analyzed how to determine the sample ratio for the calibration set and test set based on different models.\", \"As Reviewer pRzK suggested, we provided an example of medical video analysis, attempting to validate the application of our method in real-world or industry-specific VideoQA tasks.\", \"We thank all the reviewers again for the detailed and constructive review. We are pleased to see the reviewers' acknowledgment of the contribution of the proposed method. Most of the concerns were about unclear expressions, generalizability, and future work. We hope our explanation, additional experimental results, and insights for further adaptation in the rebuttal could address all of your concerns. 
Please let us know if you have any questions or concerns.\"]}", "{\"title\": \"Supplementary Response to Question 2\", \"comment\": \"> **Question 2:** Could TRON\\u2019s conformal score be further adapted to dynamically adjust the sampling size based on real-time uncertainty measurements?\\n\\n**Insights:** We envision a possible approach for estimating the overall uncertainty in the candidate set (i.e., sampled responses) using a certain uncertainty measure, and then derive a conformal uncertainty criterion by associating it with the correct response. \\n\\nFor example, employing semantic entropy (SE) [1][2] or shifting attention to relevance (SAR) [3] as the uncertainty measurement $U$(\\u00b7), we estimate the reliability of the current question-answering by sampling multiple (i.e.,$M$) responses and incorporating their semantic uncertainty. ***Formally***, we define the uncertainty function as $U(x_i,\\\\lbrace y_{j} \\\\rbrace_{j=1}^{M_i} )$, where $M_i$ denotes the sampling size for i-th question $x_i$, and $y_j$ denotes the j-th sampled response within the candidate set. ***Then***, we can utilize several validation data points to obtain an uncertainty interval function $A(M)$, which represents the **empirical score range** within which the value of $U(x,\\\\lbrace y_{j} \\\\rbrace_{j=1}^{M} )$ should fall, when the size of the candidate set is $M$ and $\\\\lbrace y_{j} \\\\rbrace_{j=1}^{M}$ covers the ground-truth answer. ***Finally***, we can define the conformal score of each calibration data point as $s_i=inf\\\\lbrace M_i : U(x_i,\\\\lbrace y_{j} \\\\rbrace_{j=1}^{M_i} )\\u2208 A(M_i)\\\\rbrace$ and at this point, **the conformal score can dynamically adjust the sampling size until the overall uncertainty of the candidate set falls within our pre-defined uncertainty interval $A(M_i)$**. Considering the sampling cost, we select the minimum $M_i$ (i.e., $inf$). 
***Furthermore***, we can also employ the calibration set to assess the reliability of function $A$(\\u00b7) at various sampling size M and calculate the corresponding error rate\\n$\\\\delta(M_i) = 1 - \\\\mathbb{P}(\\\\lbrace U(x_i,\\\\lbrace y_{j} \\\\rbrace_{j=1}^{M_i})\\u2208 A(M_i)\\\\rbrace \\\\equiv \\\\lbrace y^* \\u2208 \\\\lbrace y_{j} \\\\rbrace_{j=1}^{M_i} \\\\rbrace )$. \\nAt this point, $s_i$ is miscalibrated at a risk level $\\\\delta$.\\n\\nNote that this requires the calibration data, validation data, and test data to be independent and identically distributed (i.i.d.), without any distribution shift issues. \\n\\nWe are grateful to you for recognizing the novelty and contribution of our research and providing thoughtful feedback. We would like to discuss any remaining questions or concerns you may have.\\n\\n---\\n\\n### References\\n[1] Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation (ICLR 2023).\\n\\n[2] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models (ACL 2024).\"}", "{\"title\": \"Appreciation for Feedback and Recognition\", \"comment\": \"Thank you for taking the time to review our paper. We appreciate your valuable feedback and constructive insights throughout this process. We are very grateful for your recognition of our work and thorough suggestions for improvements in our future work.\\n\\nBest regards,\\n\\nAuthors of Paper 4201\"}", "{\"metareview\": \"This paper presents a novel pipeline for risk assessment for open-ended LLM generated responses to visual question answering systems. 
This work addresses limitations of the current state of the art method of using Split Conformal Prediction (SCP) to construct estimates of the error rate of statistical prediction methods for LLM assessment.\", \"extracting_a_well_written_summary_from_one_of_the_reviewers\": \"\\\"The authors propose a two-step framework, TRON, for assessing and controlling risk in MLLMs, specifically targeting VideoQA tasks. The framework consists of: (1) Sampling Step: This step involves calibrating a conformal score that defines the minimum number of response samples needed to ensure the inclusion of acceptable responses within a certain risk level. (2) Identification Step: In this phase, a nonconformity score based on self-consistency theory identifies high-quality responses within the candidate set. This score accounts for model uncertainty by estimating reliability via response frequency, introducing another risk level to statistically control errors.\\n\\nThe authors carry sufficient experiments on four VideoQA datasets with eight MLLMs and demonstrate that TRON achieves desired error rates within the specified risk bounds. The study further highlights TRON\\u2019s adaptability and stability through deduplicated prediction sets that provide more efficient and robust uncertainty estimation across various risk levels.\\n\\nThe authors address limitations in existing SCP methods, which either modify outputs to ensure factuality, rely on internal token sequence logits, or restrict applications to multiple-choice settings. This new approach is versatile, applicable to both open-ended and closed-ended VideoQA tasks, and operates independently of internal model outputs.\\n\\nReading this myself, I agree with the major points the reviewers have identified. 
Many weaknesses are discussed, but the problem of estimating the error of a predictor under an unknown distribution shift is difficult, and I believe that this paper both takes us a notable step closer to solving the problem and makes clear its limitations.\", \"additional_comments_on_reviewer_discussion\": \"The reviewer discussion was quite engaged during the author rebuttal period and I was well pleased with the quality of the engagement from the reviewers. Two reviewers increased their scores during the discussion and one remained steadfast in their initial score of \\\"8\\\" - accept good paper. Possibly one of the best rebuttal period discussions I have seen.\"}", "{\"title\": \"Responses to Reviewer Tw6D Comments on Four Weaknesses\", \"comment\": \"> **Response to weakness 1:**\\n\\nThank you for your valuable feedback. **Firstly**, we have provided detailed explanations of bidirectional entailment in Appendix C. The Natural Language Inference (NLI) classifier with DeBERTa-large-mnli as the backbone has been widely recognized and employed to evaluate semantic equivalence [1][2][3][4]. **Secondly**, we measure the reliability of each response based on the number of responses that are semantically equivalent to it, and utilize the same evaluation method for both calibration and test data points. Since the nonconformity score is strictly linked with the correct response, the final guarantees of miscoverage will not be impacted. **Furthermore**, errors exist for each data point because it is impossible to find a perfect semantic equivalence evaluation method. However, this does not affect risk control under the condition of exchangeability, as the criteria are consistent for each data point. This is also the core point of why conformal prediction can provide statistically rigorous guarantees. We are eliminating anomalies under consistent criteria to achieve risk control. \\n\\n> **Response to weakness 2:** \\n\\nThank you for pointing it out. 
**Firstly**, Silence Percentage (SP) refers to the proportion of randomly muted audio segments in a video **(i.e., larger SP, less audio modality information)**. For each video, varying levels of muting are applied to create different SP levels. **Secondly**, the average prediction set size (APSS) reflects the uncertainty of model decision-making[5][6]. The introduction of the audio modality generally enhances the model's confidence by providing complementary information to the visual and text modality. As shown in Figure 4, the APSS metric significantly decreases (i.e., low uncertainty) as SP decreases from 100% to 0% (i.e., gradually incorporating the audio modality). **Additionally**, increasing SP means that the audio modality information decreases, leading to an increase in model uncertainty, which results in a larger APSS. **Furthermore**, our motivation is to complement the empirical findings of the two studies [5][6]. The study [5] utilizes APSS to evaluate the uncertainty of large language models (LLMs) and the study [6] evaluates the uncertainty of visual-language models (VLMs). Neither of them considered audio modality information. Since the accuracy metric does not provide a comprehensive evaluation of the model's performance (e.g., when SP $\\\\leq$ 50%, as SP increases, uncertainty increases while the accuracy metric also rises, utilizing the VideoLLaMA-7B model) [5][6], we use APSS to assess the uncertainty of MLLMs before and after introducing the audio modality. From Figure 4, we observe that as the proportion of audio modality information is gradually removed (i.e., SP \\u2191), the model's uncertainty increases and accuracy decreases generally.\\n\\n> **Response to weakness 3:** \\n\\nThanks for your valuable suggestions to improve the paper. TRON is applicable to generative language models on various open-ended tasks, as it formulates the criterion by analyzing the output distribution of the generative model. 
In the rebuttal revision, we have evaluated TRON on the **open-book conversational QA** dataset, CoQA [1], and the **closed-book reading comprehension** dataset, TriviaQA [2], utilizing the large language model, LLaMA-3-70B-Instruct. Additionally, we employ PandaGPT-13B on the Visual Question Answering (VQA) dataset [3] for **image understanding** tasks. Empirical results have been added to **Appendix H (Figure 8 and 9)**.\\n\\n> **Response to weakness 4:** \\n\\nThanks for your thoughtful insights. We have compared TRON with Least Ambiguous set-valued Classifiers (LAC), which has been proven to produce prediction sets with the smallest average size, on the MMLU dataset utilizing LLaMA-3-8B-Instruct and LLaMA-3.1-8B-Instruct. Empirical results of the APSS metric ($\\\\downarrow$) are shown in **Table 3 in Appendix H**. \\n\\n| Model\\\\Method | LAC | TRON (M = 5) | TRON (M = 10) | TRON (M = 20) |\\n|------------------|--------|--------------|---------------|---------------|\\n| LLaMA-3-8B-Instruct | $2.93_2$ | $3.06_0$ | $2.93_7$| $2.76_3$|\\n| LLaMA-3.1-8B-Instruct | $2.57_8$ | $2.61_3$ | $2.53_6$| $2.50_4$|\\n---\\n\\n### References\\n\\n[1] Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation (ICLR 2023).\\n\\n[2] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models (ACL 2024).\\n\\n[3] Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models (TMLR 2024).\\n\\n[4] Detecting hallucinations in large language models using semantic entropy (Nature 2024).\\n\\n[5] Benchmarking LLMs via Uncertainty Quantification (NeurIPS 2024). \\n\\n[6] Uncertainty-Aware Evaluation for Vision-Language Models.\"}", "{\"title\": \"Responses to Reviewer pRzK Comments on Four Weaknesses\", \"comment\": \"> **Response to weakness 1:**\\n\\nThanks for your valuable suggestions. 
We have added empirical evaluations for TRON in the tasks of conversational question answering, reading comprehension, and image understanding in **Appendix H (Figures 8 and 9)**. Results demonstrate that TRON is applicable to generative language models (e.g., VLMs and LLMs) on various open-ended tasks, which is primarily attributed to TRON establishing statistical criteria by analyzing the output distribution of the model. \\n\\nAdditionally, given that practical applications place more emphasis on the **conditional performance of risk control**, we analyze how close our method comes to approximating condition coverage utilizing the size-stratified miscoverage metric in **Section 4.4 (Figure 6)**. When we employ Semantic Diversity as the reliability measurement, TRON\\u2019s conditional performance improves (e.g., the max miscoverage rate decreases from 0.2065 to **0.1667** on the VMME dataset when the risk level is set to 0.19). \\n\\n> **Response to weakness 2:** \\n\\nThanks for your feedback. Since we approximate the candidate set, in open-ended language generation tasks, as multiple-choice options at a user-specified risk level for the first time, we compare TRON with Least Ambiguous set-valued Classifiers (LAC), which has been proven to produce prediction sets with the smallest average size in **closed-ended settings**. Empirical results of the APSS metric ($\\\\downarrow$) are shown in **Table 3 in Appendix H**. TRON significantly outperforms LAC, and increasing the sampling size within the allowed sampling cost will make TRON more prediction-efficient. \\n\\n| Model\\\\Method | LAC | TRON (M = 5) | TRON (M = 10) | TRON (M = 20) |\\n|------------------|--------|--------------|---------------|---------------|\\n| LLaMA-3-8B-Instruct | $2.93_2$ | $3.06_0$ | $2.93_7$| $2.76_3$|\\n| LLaMA-3.1-8B-Instruct | $2.57_8$ | $2.61_3$ | $2.53_6$| $2.50_4$|\\n\\n> **Response to weakness 3:** \\n\\nThanks for your suggestions for improving the paper. 
We have illustrated the base framework of conformal prediction utilizing classification problems in **Appendix A**, and provided a detailed derivation of the risk management at both the sampling and identifying processes in **Appendix B**. \\n\\n> **Response to weakness 4:** \\n\\nThanks for your suggestion. We have provided a detailed explanation of the meanings and functions of each metric in Appendix F. APSS refers to the average size of the prediction sets for all test data on the test set, used to assist accuracy in evaluating the model's uncertainty on the test set. APSS decreases as the risk level increases, meaning that the allowable error rate is raised, as shown in Figure 3. We generally fix the risk level and use APSS to evaluate the uncertainty of different models on the same test set to assess their performance. Furthermore, APSS provides a consistent evaluation of uncertainty in both open-ended and closed-ended tasks. However, in open-ended tasks, we observe that there is semantic redundancy in the prediction sets. Therefore, we analyze the impact of semantic redundancy on APSS and the final risk assessment in Section 4.3.\"}", "{\"title\": \"Have we addressed your concern?\", \"comment\": \"Dear Reviewer pRzK,\\n\\nThank you again for taking the time to review our paper and providing detailed feedback. As the end of the discussion period is approaching, we want to follow up to see if you have any additional questions we have not addressed, or if there is anything else we can help clarify. We have tried responding to the comments from your initial review, and we are more than happy to discuss any points further. Thank you!\\n\\nAuthors of paper 4201\"}", "{\"title\": \"Supplementary Response to Question 5\", \"comment\": \"> Additional Real-World Validation: Have there been any attempts to validate TRON in real-world or industry-specific VideoQA tasks, possibly in collaboration with industry partners? 
If so, could you share any preliminary insights on its practical performance and potential adaptations? If not, do you plan to include such validation in future work?\\n\\nLet's take a medical video analysis task as an example. Given $N$ medical video-report pairs $\\lbrace(X_i,Y_i^*) \\rbrace_{i=1}^N$ and a new test data point $(X_t,Y_t^*)$ (or $(X_{test},Y_{test}^*)$)\\n\\n***Step 1.*** For each medical query based on the video, we sample $m$ generations from the model, denoted as $\\mathcal{C}_m(X_i)=\\lbrace \\hat Y_j^{(i)}\\rbrace_{j=1}^m$. Then, we define the loss of miscoverage by the candidate set as $l(\\mathcal{C}_m(X_i),Y_i^*)= \\mathbb{1}\\lbrace Y_i^* \\notin \\mathcal{C}_m(X_i)\\rbrace$, and the loss is non-increasing in $m$.\\n\\nWe set the size of the candidate set for $X_{test}$ to be $\\hat m= \\inf \\lbrace m: \\frac{A_N (m) +1}{N+1}\\leqslant \\alpha \\rbrace$ $=\\inf\\lbrace m: A_N(m) \\leqslant \\alpha (N+1)-1\\rbrace$, where $A_N(m)=\\sum\\limits_{i=1}^{N} l(\\mathcal{C}_{m}(X_i),Y_i^*).$ Since $A_N(m)$ is monotone in $m$, we can efficiently search for $\\hat m$ by binary search.\\n\\nGiven that $l (\\mathcal{C}_{\\hat m} (X_t),Y_t^*) \\leq 1(\\in \\lbrace 0,1\\rbrace)$, we obtain \\n\\n$A_{N+1}(\\hat m)=\\sum\\limits_{i=1}^{N+1}l(\\mathcal{C}_{\\hat m}(X_i),Y_i^*)$\\n\\n$= A_N(\\hat m)+l(\\mathcal{C}_{\\hat m}(X_t),Y_t^*)$\\n\\n$\\leq A_N(\\hat m)+1$\\n\\n$\\leq \\alpha (N+1)$\\n\\nBy the exchangeability of the $N$ calibration data points and the test data point, we have $l(\\mathcal{C}_{\\hat m}(X_t),Y_t^*) \\sim Uniform(\\lbrace l(\\mathcal{C}_{\\hat m}(X_1),Y_1^*),...,l(\\mathcal{C}_{\\hat m}(X_t),Y_t^*)\\rbrace)$. 
Then, we bound the error rate at which the candidate set of the test data point fails to encompass an acceptable generation:\\n\\n$\\mathbb{E}[l(\\mathcal{C}_{\\hat m}(X_t),Y_t^*)]= \\frac{1}{N+1} \\sum\\limits_{i=1}^{N+1}l(\\mathcal{C}_{\\hat m}(X_i),Y_i^*)$\\n\\n$=\\frac{A_{N+1}({\\hat m})}{N+1}$\\n\\n$\\leq \\alpha$\\n\\n***Step 2.*** We assume these $N+1$ data points have at least one acceptable generation within their individual candidate set (i.e., $\\alpha=0$). We then post-process the candidate set of the test data point by selecting high-quality responses to construct a prediction set, following\\n$\\mathcal{P}_{\\hat t}(X_t)=\\lbrace {\\hat Y_k^{(test)}}\\in \\lbrace{\\hat Y_j^{(test)}}\\rbrace_{j=1}^{\\hat m} : F({\\hat Y_k^{(test)}})\\geq 1-{\\hat t}\\rbrace$,\\nwhere $F(\\cdot)$ measures the reliability of each generation within the candidate set and $\\hat t$ is defined as\\n$\\hat t = \\inf\\lbrace t:\\frac{R_N(t)+1}{N+1}\\leq\\delta\\rbrace$,\\n\\nwhere $R_N(t) = \\sum\\limits_{i=1}^N r(\\mathcal{P}_t(X_i),Y_i^*)=\\sum\\limits_{i=1}^N \\left(1- \\frac{|\\mathcal{P}_t(X_i) \\cap Y_i^*|}{|\\mathcal{P}_t(X_i)|}\\right)$. \\n\\nSimilarly, we control the **false discovery rate (FDR)** of a new medical report: $\\mathbb{E}[r(\\mathcal{P}_{\\hat t}(X_t),Y_t^*)] \\leq \\delta$.\\n\\nAt this point, we can sample multiple medical video reports and extract reliable sub-claims using $F$ to form a prediction set (i.e., a new medical report), while controlling the FDR within the prediction set. This is an adapted framework of TRON in real-world applications. We hope that our responses have addressed your concerns, and we would greatly appreciate it if you could reconsider the score. 
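For concreteness, the two calibration steps above can be sketched in code. This is a toy simulation under assumed synthetic data: the geometric "first acceptable sample" model, the coarse linear scan over $t$ (in place of an exact infimum), and all constants are illustrative stand-ins, not our experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def step1_min_sampling_size(losses_at, alpha, m_max=64):
    """Step 1: smallest m with A_N(m) <= alpha*(N+1) - 1, found by binary search.

    losses_at(m) -> array of 0/1 miscoverage losses l(C_m(X_i), Y_i*) over the
    N calibration points; the loss is non-increasing in m, so A_N(m) is too.
    """
    lo, hi = 1, m_max
    while lo < hi:
        mid = (lo + hi) // 2
        loss = losses_at(mid)
        if loss.sum() <= alpha * (len(loss) + 1) - 1:
            hi = mid            # condition holds -> try a smaller m
        else:
            lo = mid + 1
    return lo

def step2_threshold(scores, hits, delta):
    """Step 2: smallest t with (R_N(t)+1)/(N+1) <= delta, where r_i(t) is the
    false-discovery proportion of the kept set P_t(X_i) = {j : F_ij >= 1 - t}.

    scores: (N, m) reliability F of each sampled response; hits: (N, m)
    booleans marking which responses are acceptable.
    """
    N = len(scores)
    for t in np.linspace(0.0, 1.0, 101):   # coarse scan over candidate t
        keep = scores >= 1.0 - t           # responses entering P_t(X_i)
        sizes = keep.sum(axis=1)
        good = (keep & hits).sum(axis=1)
        r = np.where(sizes > 0, 1.0 - good / np.maximum(sizes, 1), 1.0)
        if (r.sum() + 1) / (N + 1) <= delta:
            return t
    return 1.0
```

The binary search in `step1_min_sampling_size` exploits exactly the monotonicity of $A_N(m)$ used in the derivation above.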
Thank you again for your thoughtful review.\"}", "{\"summary\": \"The authors propose a two-step framework, TRON, for assessing and controlling risk in MLLMs, specifically targeting VideoQA tasks. The framework consists of: (1) Sampling Step: This step involves calibrating a conformal score that defines the minimum number of response samples needed to ensure the inclusion of acceptable responses within a certain risk level. (2) Identification Step: In this phase, a nonconformity score based on self-consistency theory identifies high-quality responses within the candidate set. This score accounts for model uncertainty by estimating reliability via response frequency, introducing another risk level to statistically control errors.\\n\\nThe authors carry sufficient experiments on four VideoQA datasets with eight MLLMs and demonstrate that TRON achieves desired error rates within the specified risk bounds. The study further highlights TRON\\u2019s adaptability and stability through deduplicated prediction sets that provide more efficient and robust uncertainty estimation across various risk levels.\\n\\nThe authors address limitations in existing SCP methods, which either modify outputs to ensure factuality, rely on internal token sequence logits, or restrict applications to multiple-choice settings. This new approach is versatile, applicable to both open-ended and closed-ended VideoQA tasks, and operates independently of internal model outputs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. TRON\\u2019s two-step approach combines conformal scores and self-consistency theory to establish a flexible and robust risk assessment framework for MLLMs, particularly in open-ended scenarios, where traditional SCP methods fall short.\\n2. The paper presents extensive experiments across four VideoQA datasets and eight MLLMs, showcasing TRON's effectiveness and consistency in different VideoQA tasks.\\n3. 
By avoiding reliance on model logits, TRON is adaptable for API-restricted MLLMs, expanding its usability in various practical applications.\", \"weaknesses\": \"I raise the concern that although TRON is evaluated on diverse datasets, it primarily focuses on VideoQA tasks. Could it be tested on additional multimodal tasks to enhance the generalizability of its risk management capabilities?\", \"questions\": \"1. How does TRON handle outliers in response sampling that may disproportionately affect the frequency-based confidence scoring?\\n2. Could TRON\\u2019s conformal score be further adapted to dynamically adjust the sampling size based on real-time uncertainty measurements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Supplementary Response to Question 2\", \"comment\": \"> Question 2: Would it be feasible to incorporate dynamic adjustments to prediction set sizes based on task difficulty or user preferences in real time? Are there challenges with balancing efficiency and robustness in such a dynamic setting?\\n\\n**Our insights:** We envision a possible approach, which estimates the overall uncertainty in the candidate set (i.e., sampled responses) using a certain uncertainty measure and then derives a conformal uncertainty criterion by associating the uncertainty condition with the correct response. \\n\\nFor example, employing semantic entropy (SE) [1][2] or shifting attention to relevance (SAR) [3] as the uncertainty measurement $U$(\\u00b7), we estimate the reliability of the current question-answering by sampling multiple (i.e.,$M$) responses and incorporating their semantic uncertainty. ***Formally***, we define the uncertainty function as $U(x_i,\\\\lbrace y_{j} \\\\rbrace_{j=1}^{M_i} )$, where $M_i$ denotes the sampling size for the i-th question $x_i$, and $y_j$ denotes the j-th sampled response within the candidate set. 
***Then***, we can utilize a validation data set to obtain an uncertainty interval function $A(M)$, which represents the **empirical score range** within which the value of $U(x,\\lbrace y_{j} \\rbrace_{j=1}^{M} )$ should fall, when the size of the candidate set is $M$ and $\\lbrace y_{j} \\rbrace_{j=1}^{M}$ covers the ground-truth answer. ***Finally***, we can define the conformal score of each calibration data point as $s_i=\\inf\\lbrace M_i : U(x_i,\\lbrace y_{j} \\rbrace_{j=1}^{M_i} )\\in A(M_i)\\rbrace$ and at this point, **the conformal score can dynamically adjust the sampling size until the overall uncertainty of the candidate set falls within our pre-defined uncertainty interval $A(M_i)$**. Considering the sampling cost (or efficiency), we select the minimum $M_i$ (i.e., the infimum). ***Furthermore***, we can also employ the calibration set to assess the reliability of the function $A(\\cdot)$ at various sampling sizes $M_i$ and calculate the corresponding error rate\\n$\\delta(M_i) = 1 - \\mathbb{P}(\\lbrace U(x_i,\\lbrace y_{j} \\rbrace_{j=1}^{M_i})\\in A(M_i)\\rbrace \\equiv \\lbrace y^* \\in \\lbrace y_{j} \\rbrace_{j=1}^{M_i} \\rbrace )$. \\nAt this point, $s_i$ is miscalibrated at a risk level $\\delta$.\\n\\nNote that this requires all calibration, validation, and test data points to be independent and identically distributed (i.i.d.), without any distribution shift issues. \\n\\nWe are grateful to you for recognizing the novelty and contribution of our research and providing thoughtful feedback. 
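As a small, self-contained illustration of this dynamic criterion, consider the following sketch. Every choice here is a hypothetical stand-in: cluster entropy plays the role of $U(\cdot)$ (in place of SE/SAR), the band returned by `interval` plays the role of $A(M)$, and a minimum sampling size is imposed so the empirical uncertainty is meaningful on very small candidate sets.

```python
import itertools
import math
from collections import Counter

def uncertainty(cluster_ids):
    # U(.): empirical entropy over semantic clusters of the sampled responses
    # (responses are pre-clustered into ids here -- a stand-in for SE/SAR)
    total = len(cluster_ids)
    probs = [c / total for c in Counter(cluster_ids).values()]
    return -sum(p * math.log(p) for p in probs)

def interval(M):
    # A(M): a hand-set "acceptable uncertainty" band for candidate sets of size M
    return 0.0, 0.4 * math.log(M + 1)

def conformal_score(sample_one, m_min=3, m_max=32):
    # s_i = inf{ M : U(x_i, {y_j}_{j=1..M}) in A(M) }, grown one sample at a time;
    # m_min avoids judging uncertainty from trivially small candidate sets
    drawn = []
    for M in range(1, m_max + 1):
        drawn.append(sample_one())
        if M < m_min:
            continue
        lo, hi = interval(M)
        if lo <= uncertainty(drawn) <= hi:
            return M
    return m_max

# An "easy" question: the model always lands in the same semantic cluster
consistent = itertools.repeat(0)
# A "hard" question: every sampled response falls into a fresh cluster
diverse = itertools.count()

s_easy = conformal_score(lambda: next(consistent))  # low U, band reached quickly
s_hard = conformal_score(lambda: next(diverse))     # U stays outside the band
```

In this toy setting, a consistent model stops sampling as soon as the minimum size is reached, while a maximally diverse model exhausts the sampling budget, which is the intended adaptive behavior.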
We would like to discuss any remaining questions or concerns you may have.\\n\\n---\\n\\n### References\\n[1] Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation (ICLR 2023).\\n\\n[2] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models (ACL 2024).\"}", "{\"title\": \"Rebuttal Revision\", \"comment\": [\"We would like to thank you for your detailed and considerate reviews of our paper. We have uploaded a modified version of our paper that incorporates your comments.\", \"In the main text, we add a conditional performance analysis of TRON utilizing the size-stratified miscoverage rate in Section 4.4, Paragraph Conditional Miscoverage Rate (Figure 6) for the applicability in real-world critical applications requiring more stringent guarantees. Empirical results demonstrate that the miscoverage rate varies at different set sizes, and the conditional performance of TRON significantly improves by integrating the semantic similarity information into the reliability measurement (e.g., the maximum miscoverage rate decreases from 0.2065 to **0.1667** on the VMME dataset when the upper bound is set to 0.19).\", \"In Appendix A, we add a brief illustration to conformal prediction utilizing classification tasks.\", \"In Appendix H, we evaluate the generalizability of TRON to other generative language models in various open-ended tasks. We consider the **(1) open-book conversational QA** dataset, CoQA, and the **(2) closed-book reading comprehension** dataset, TriviaQA, utilizing the large language models (LLMs), LLaMA-3-70B-Instruct and LLaMA-3.1-70B-Instruct (from LangChain DeepInfra). Additionally, we employ PandaGPT-13B and GPT-4o on the Vision Question Answering (VQA) dataset for **(3) image understanding** tasks. 
Empirical results demonstrate that TRON can also provide risk management for both MLLMs and LLMs on image understanding, conversational QA, and reading comprehension tasks (Figures 8 and 9). ***(Corresponding to Weakness 1)***\", \"Furthermore, we compare TRON with Least Ambiguous set-valued Classifiers (LAC) in closed-ended settings, which has been proven to produce prediction sets with the smallest average size. Evaluations on the MMLU dataset (MCQA) utilizing both LLaMA-3-8B-Instruct and LLaMA-3.1-8B-Instruct demonstrate that TRON is more predictively efficient than LAC (Table 3).\", \"Overall, we hope that these changes have addressed your concerns. We would be grateful for the opportunity to engage further with you to discuss any remaining questions or concerns you may have.\"]}", "{\"title\": \"Responses to Reviewer amj3 Comments on Six Questions\", \"comment\": \"> **Response to question 1:**\\n\\nTheoretically, distribution shift is primarily due to the quantile of nonconformity scores obtained from the calibration set being shifted with respect to the test distribution. In our view, distribution shift in language generation tasks is attributed to the conditional nature of language models. For example, it is possible that for a powerful proprietary model, the difficulty of dataset A and dataset B is similar. In this case, we can consider that datasets A and B satisfy the exchangeability condition for that model. However, if a language model is only suitable for the types of questions in dataset A and struggles with questions in dataset B, a distribution shift phenomenon will occur. We believe that distribution shift is determined by the language model itself, and we can achieve alignment across various distributions by weighting the nonconformity scores based on the analysis of certain model statistical metrics. We are currently working on this. 
\\n\\n> **Response to question 2:** \\n\\nWe can retrieve specific types of calibration data based on user needs, thereby determining the quantile threshold by constraining the size of the prediction set on the calibration set to handle the test data. In this dynamic environment, the criteria for selecting calibration data are particularly important, as different evaluation criteria can lead to deviations in the exchangeability between calibration data and test data. At this point, the efficiency of data filtering and the robustness of risk control need to be balanced according to actual requirements.\\n\\n> **Response to question 3:** \\n\\nYes. Since the nonconformity score is defined to reflect how the response disagrees with the question, the evaluation of response reliability will impact the final results of risk control. As we mentioned in the last paragraph in Section 3.2 (i.e., Extensibility), the function F(\\u00b7) can be any measure that reflects the trustworthiness of each response. Additionally, Table 2 in Section 4.4 shows that when we incorporate semantic diversity into F(\\u00b7), the average set size decreases (i.e., more efficient prediction). Relying solely on black-box methods to evaluate responses is limited, and integrating auxiliary information, such as logits-based entropy, can enhance the reliability assessment and thereby improve TRON's performance.\\n\\nInspired by the weakness 1 you mentioned, **we have added a comparison of the conditional performance of two measures in Section 4.4**. 
The table below shows that after incorporating semantic similarity information, TRON's conditional performance has significantly improved (e.g., **the size-stratified miscoverage rate decreases from 0.2065 to 0.1667 on the VMME dataset when the upper bound is 0.19**).\\n\\n| | Frequency | | | Semantic Diversity | | |\\n|----------|---------------------|-----------------|-----------------|----------------------|-----------------|-----------------|\\n| **Set Size** | 1 | 2 | 3 | 1 | 2 | 3 |\\n| **VMME** | 0.1788 | 0.1531 | 0.2065 | 0.1203 | 0.1667 | 0.1389 |\\n| **NEXT** | 0.0900 | 0.1778 | 0.2244 | 0.1472 | 0.1763 | 0.2075 |\\n| **MUSC** | 0.1250 | 0.2286 | 0.1830 | 0.1225 | 0.1970 | 0.1923 |\\n| **MSVD** | 0.0778 | 0.1222 | 0.1556 | 0.1246 | 0.1592 | 0.1875 |\\n\\n> **Response to question 4:** \\n\\nWhen the number of samples is limited, the capability of frequency to represent response confidence will decline, as we mentioned in the last paragraph of Section 3.2 (i.e., Extensibility), which will affect the average set size (i.e., efficiency). At this point, we need to incorporate other auxiliary tools, such as external models, into the function F(\\u00b7) to enhance the assessment of response reliability. However, theoretically, the risk control of the method remains guaranteed because the number of samples is consistent across all calibration and test data. \\n\\n> **Response to question 5:** \\n\\nYes, while the theoretical guarantee of TRON is rigorous, there can be minor fluctuations in practice due to finite-sample variability [1][2].\\n\\n> **Response to question 6:** \\n\\nAs we mentioned in the conclusion, our framework can be directly applied to various generative language models and tasks. TRON inherits the model-agnostic property of conformal prediction, and we establish task-specific guarantees solely by analyzing the output distribution of the model. \\n\\nThank you very much for your valuable questions. 
Throughout the rebuttal process, we gained deeper insights into improving the quality of the paper and conducting future work. We would like to provide responses if you have any further questions. \\n\\n---\\n\\n### References\\n[1] Angelopoulos A N, Bates S. A gentle introduction to conformal prediction and distribution-free uncertainty quantification[J]. arXiv preprint arXiv:2107.07511, 2021.\\n\\n[2] Ye F, Yang M, Pang J, et al. Benchmarking llms via uncertainty quantification[J]. arXiv preprint arXiv:2401.12794, 2024.\"}", "{\"title\": \"Responses to Reviewer Tw6D Comments on One Question\", \"comment\": \"> Question: It seems that the best practice of the ratio of the calibration and test set is model-dependent. Is there any insight on the ratio selection when applied to different MLLMs?\\n\\n\\n**Response:** Conformal prediction relies on the exchangeability of calibration and test data. When the data distribution of the calibration set cannot encompass the test data, a distribution shift problem arises. In this case, we should expand the range of the calibration set. \\n\\nIn our view, the distribution shift is determined by the generative language model. For example, when the model performs very well on both the calibration and test sets, the output distributions (i.e., statistical metrics that define the nonconformity score) on the calibration and test sets will be similar, allowing us to choose a smaller calibration set ratio. However, if the model struggles with the test data, we should increase the size of the calibration set as much as possible. Theoretically, this means enhancing the generalization ability of the nonconformity scores on the calibration set to the test data.\"}", "{\"comment\": \"I thank the authors for addressing all my concerns. I have increased my score.\"}", "{\"comment\": \"Dear Reviewer Tw6D,\\n\\nThank you again for your insightful comments. This is just a gentle reminder as the ICLR discussion period is extended. 
Could you please take a look at our rebuttal and other reviews, and see whether you would like to update your ratings? We would like to respond to any remaining questions or concerns you may have. Thank you!\\n\\nAuthors of paper 4201\"}" ] }
9WG1ga39Dq
COT: Consistent Optimal Transport with Applications to Visual Matching and Travelling Salesman Problems
[ "Liangliang Shi", "Duorui Li", "Jiale Hong", "Junchi Yan" ]
This paper generalizes the vanilla Optimal Transport (OT) to the so-called Consistent Optimal Transport (COT), accepting more than two measures as input with transport consistency. We formulate the problem as minimizing the transport costs between each pair of measures while requiring cycle-consistency among the measures. We present both the Monge and Kantorovich formulations of COT and obtain an approximate solution with added entropic and consistency regularization, for which an iterative projection (RCOT-Sinkhorn) algorithm is devised to improve the Sinkhorn algorithm. We show its superiority on the task of visual multi-point matching, in which our COT solver directly utilizes the cosine distance between learned features of points obtained from off-the-shelf graph matching neural networks as the pairwise cost. We leverage the algorithm to learn multiple matchings, and the experiments show a great improvement without additional feature training. Furthermore, based on COT, we propose a new TSP formulation called TSP-COT, adopt regularization to relax the optimization, and use the modified RCOT-Sinkhorn algorithm to obtain the probability matrix of TSP routing. A post-processing search method is then adopted to obtain the TSP routes, and the experiments show the superiority of our method. The code will be available.
[ "Optimal Transport", "Entropic Regularization", "Cycle-Consistency", "Matching", "Travelling Salesman Problems" ]
Reject
https://openreview.net/pdf?id=9WG1ga39Dq
https://openreview.net/forum?id=9WG1ga39Dq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "s9uPMqknAe", "oaOAFqk3Ej", "mbT0rRVRYa", "kxk8VnXqaX", "kPukV2rHab", "jtAHkOG6BF", "jOBgzTujm4", "h4BpGRSiyP", "dehnAwxRAQ", "aUAH8kq5zd", "XBfYI8nfjN", "Vg4CIMAVhe", "SLQMqIrqxL", "R7OhIWX5dX", "PThlzZdu3u", "NqpSYmHluF", "JwUS1rufhG", "IsFOjwmAFq", "GftrCFsYjL", "GWuS1nELu6", "F2nBnUFQad", "Be4HXP09b1", "Amt6BhIeb3", "9hXxJPxzug", "62Yn2VSOhn" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732117548579, 1732116841488, 1733188374912, 1732510585073, 1730656810189, 1732367306322, 1734932766545, 1732701591462, 1732117980574, 1732260690628, 1730687747320, 1732462369739, 1732411803830, 1730489173092, 1732258548023, 1732269285834, 1732422497529, 1730412010909, 1737523591464, 1732367318892, 1732136504620, 1732466243373, 1733209832400, 1732116337364, 1732766301625 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_3aUe" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_Ldtp" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Area_Chair_xnHA" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_5wsL" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_8SNv" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_8SNv" ], [ 
"ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_3aUe" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_3aUe" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_5wsL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_5wsL" ], [ "ICLR.cc/2025/Conference/Submission3708/Reviewer_Ldtp" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ], [ "ICLR.cc/2025/Conference/Submission3708/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Q1: Does a solution to the Monge formulation in Eq. (6) exist?\", \"a1\": \"For the question about the existence of a solution to the Monge formulation in Eq.(6), a solution does exist. We can consider a feasible solution as follows: assume that $\\\\{t_1, t_2, \\\\dots, t_{K-1}\\\\}$ are the solutions of the original MMOT problem. Then, given $x\\\\in \\\\mathcal{X_1}$ and $y=T_{K-1}T_{K-2}\\\\cdots T_1(x)$, we can set $x=T_K(y)$. In this way, it satisfies the conditions and thus is a feasible solution.\", \"q2\": \"Does the order of the probability measures $\\\\alpha_k$ impact the results?\", \"a2\": \"As for $K>3$, switching the order does indeed impact the formulation of the problem. However, note our directed cyclical structure is essentially a subgraph of pairwise structure. When the latter is satisfied, the former is also satisfied, allowing our method to still improve the matching performance. We have showed the experiments of switching the second and third set order in the ablation study. 
It can be observed that the results with and without switching the order are actually very close to each other in Table 4.\", \"q3\": \"At what rate is the Cycle-consistency achieved?\", \"a3\": \"In theory, the cycle-consistency should be exactly satisfied. However, after applying relaxation techniques in our approach, it is approximately achieved. The Consistent Rate (CR) metric, which we have defined in Eq. (18), provides insights into how closely our solutions approximate the ideal cycle-consistency.\", \"q4\": \"What are the \\\"certain factors\\\" in the ablation study in Section 4.3?\", \"a4\": \"We have now added a clear explanation of the \\\"certain factors\\\" in the updated paper. The factors we investigated mainly include switching the order of point sets in the matching process and applying the Hungarian algorithm to P in Eq. (16).\", \"q5\": \"The setting of Hyper-parameter $\\\\epsilon'$.\", \"a5\": \"We did not initially mention the setting of $\\\\epsilon$ in the original paper. However, it can be tuned in a similar manner to $\\\\delta'$. Specifically, one can replace $\\\\lambda$ with $\\\\epsilon$ in Algorithm 5 to perform the tuning process. We have now added the relevant description in the latest version of the paper to clarify this point. This way, readers can understand that the approach for adjusting $\\\\epsilon$ is analogous to that of $\\\\delta'$, providing a more comprehensive understanding of the hyperparameter settings in our methodology.\", \"q6\": \"What is the connection to multimarginal OT? The authors mention briefly in Section 2 but do not elaborate.\", \"a6\": \"Both MMOT and COT involve K margins, which is a common aspect. However, there are notable differences. In MMOT, the cost requires a specialized design and is not straightforward to represent as a distance. In contrast, COT typically defines the cost as the pairwise distance matrix. 
Additionally, the solution in MMOT is a high-dimensional coupling in the form of an $(n_1, n_2, \\cdots, n_K)$ tensor, while COT's solution is a collection of couplings between pairs with the shape of $(n_1, n_2), (n_2, n_3), ...$.\", \"q7\": \"Some other issues.\", \"a7\": \"Regarding the issues you raised, we have made targeted revisions. We have modified the expressions of some sentences and added necessary explanations for some professional terms or abbreviations.\"}", "{\"comment\": \"Q1: Are there any other non-trivial connections between the multi-marginal OT problem and COT beyond the fact that both involve multiple probability measures?\", \"a1\": \"Both MMOT and COT involve K margins, which is a common aspect. However, there are notable differences. In MMOT, the cost requires a specialized design and is not straightforward to represent as a distance. In contrast, COT typically defines the cost as the pairwise distance matrix. Additionally, the solution in MMOT is a high-dimensional coupling in the form of a tensor with the shape of $(n_1, n_2, \\\\cdots, n_K)$, while COT's solution is a collection of couplings between pairs with the shape of $(n_1, n_2), (n_2, n_3), ...$.\", \"q2\": \"Please provide a few references for the computation of GW.\", \"a2\": \"Thank you for your suggestion. As you recommended, we have added an appropriate reference in the updated version. This reference will enhance the comprehensiveness and credibility of our research, enabling readers to have a more in-depth understanding of the computation of GW and its connections to our work.\", \"q3\": \"Can the authors say anything about the analytic convergence of their methods?\", \"a3\": \"Thank you for your question regarding the analytic convergence of our methods. Our algorithm is equivalent to the projection of the gradient descent algorithm. Please refer to the newly added reference in Section 4.2, where the convergence proof related to our algorithm is discussed. 
This reference provides a comprehensive analysis that is applicable to our method and helps establish the analytic convergence properties. We believe this will address your concerns and further clarify the theoretical foundation of our approach.\", \"q4\": \"Some Minor details.\", \"a4\": \"Thank you for your careful review. We have gone through the paper and corrected all such errors.\"}", "{\"comment\": \"Thank you for taking the time to thoroughly address my questions; I\\u2019ve updated my score.\"}", "{\"comment\": \"> **Q1: The connection between MMOT and COT.**\\n\\nWe are grateful for your in-depth review and recognition of our efforts in clarifying the relationship with MMOT. To further elucidate the connection between our COT and MMOT, we present the following detailed illustration. \\n\\nTake the case when $K = 3$ as an example. In MMOT, the formulation is given by $\\\\min_X\\\\sum_{ijk}C_{ijk}X_{ijk}$, s.t. $\\\\sum_{ij}X_{ijk} = 1$, $\\\\sum_{ik}X_{ijk} = 1$, and $\\\\sum_{jk}X_{ijk} = 1$. \\n\\nWhile in COT, the formulation is given by $\\\\min \\\\sum_{k} \\\\langle D_k,P_k\\\\rangle$, s.t. $\\\\forall k$, $P_k{\\\\mathbf{1}}_K={\\\\mathbf{1}}_K$, $P_k^T{\\\\mathbf{1}}_K={\\\\mathbf{1}}_K$, and $P_1P_2P_3=\\\\mathbf{I}$.\\n\\nLet $C_{ijk}=D_{1ij}+D_{2jk}+D_{3ik}$. Then, the objective function of MMOT can be written as $\\\\sum_{ijk}(D_{1ij}+D_{2jk}+D_{3ik})X_{ijk}$, which, through algebraic manipulation, equals $\\\\sum_{ij} D_{1ij}\\\\sum_kX_{ijk}+\\\\sum_{jk} D_{2jk}\\\\sum_iX_{ijk}+\\\\sum_{ik} D_{3ik}\\\\sum_jX_{ijk}$. \\n\\nBy introducing the notations $\\\\sum_kX_{ijk}=P_{1ij}$, $\\\\sum_iX_{ijk}=P_{2jk}$, and $\\\\sum_jX_{ijk}=P_{3ik}$, the objective function is transformed into $\\\\min \\\\sum_{k} \\\\langle D_k,P_k\\\\rangle$, which clearly exhibits the COT form.\\n\\nWhen $K \\\\gt 3$, the MMOT can also be transformed into COT by using a similar method. 
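This identity is easy to check numerically; the following toy snippet (our own illustrative code, with random data and variable names) verifies that the two objectives coincide for $K=3$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# pairwise costs: D1 between measures 1-2, D2 between 2-3, D3 between 1-3
D1, D2, D3 = (rng.random((n, n)) for _ in range(3))
X = rng.random((n, n, n))          # an arbitrary 3-way coupling tensor

# MMOT objective with the decomposable cost C_ijk = D1_ij + D2_jk + D3_ik
C = D1[:, :, None] + D2[None, :, :] + D3[:, None, :]
mmot_obj = np.sum(C * X)

# marginalise the tensor onto each pair of indices
P1 = X.sum(axis=2)                 # P1_ij = sum_k X_ijk
P2 = X.sum(axis=0)                 # P2_jk = sum_i X_ijk
P3 = X.sum(axis=1)                 # P3_ik = sum_j X_ijk
cot_obj = (D1 * P1).sum() + (D2 * P2).sum() + (D3 * P3).sum()

assert np.isclose(mmot_obj, cot_obj)
```

The same marginalization applies for larger $K$ by summing out all but two indices at a time.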
This shows that, under specific definitions and transformations of the cost matrices and variables, MMOT can be related to COT in a structured way. Although MMOT deals with a higher-dimensional tensor involving all distributions leading to high computational costs, our COT, through this demonstrated connection, can leverage certain aspects of MMOT's framework while maintaining its own computational efficiency and applicability in the targeted problems. We believe this example clarifies the inherent relationship between the two works more explicitly.\\n\\n> **Q2: The order dependence for larger K.**\\n\\nWe sincerely appreciate the reviewer's incisive feedback regarding the discussion and experiments on order dependency. While we acknowledge that the current experiments on ordering, exemplified by those in Table 4 and Table 5, might seem relatively simple and limited to small values of $K$, it is important to note that in both the datasets we utilized and the majority of practical applications within our research domain, the value of $K$ typically does not exceed $4$. This practical constraint implies that the scenarios we are primarily concerned with are well-covered by our existing experimental setup. Nevertheless, we recognize the significance of delving deeper into the order dependency issue to provide more comprehensive insights. In response, we plan to conduct some simulation experiments with large values of $K$, which we believe will enhance the readers' understanding of this critical aspect.\\n\\n> **Q3: Clarity and Exposition**\\n\\nWe fully acknowledge your incisive feedback. Despite the efforts we've made to enhance our work, there still exist areas that fall short in terms of presenting ideas with absolute lucidity and providing comprehensive explanations. 
We take your concerns seriously and are committed to continuously refining our work.\"}", "{\"summary\": \"This paper extends the scope of Optimal Transport (OT) theory to cases involving more than two probability distributions, introducing a framework called Consistent Optimal Transport (COT). The authors explore transportation among three (or more) probability measures while enforcing cycle-consistency, which ensures that the transport plan respects consistency across the measures. Unlike the traditional OT problem, the COT's Kantorovich formulation becomes a nonlinear optimization problem due to these additional constraints. To address computational challenges, the authors propose a regularized version of COT using entropic and cycle-consistency regularization, which leads them to use the Sinkhorn algorithm for approximate solutions. As a by-product, this work offers a novel formulation for the Traveling Salesman Problem, offering insights into finding the shortest route that visits each city once and returns to the starting point.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is clear and well-written, with well-defined goals and contributions, including detailed algorithms. Additionally, the problem addressed is novel, and the authors highlight connections to other well-known problems, such as the Traveling Salesman Problem (TSP).\", \"weaknesses\": \"The authors mention a connection to the multi-marginal OT problem, noting that both multi-marginal OT and COT involve multiple distributions. They state, \\\"However, the multi-marginal OT primarily emphasizes learning the joint coupling among more than two distributions, whereas our focus is on learning the coupling between each pair of distributions and maintaining cycle-consistency constraints among these couplings\\\". 
As a point of curiosity, are there any other non-trivial connections between the multi-marginal OT problem and COT beyond the fact that both involve multiple probability measures?\", \"questions\": \"The authors are motivated by the algorithms proposed for approximating the computation of GW. Please provide a few references in this regard.\\n\\nThe authors include a section on the Numerical Convergence Analysis. Can the authors say anything about the analytic convergence of their methods?\", \"minor_details\": [\"Line 94:\\u00a0\\\"The Monge problem is exactly not easy to calculate [...]\\\" Add: \\\"and an optimal T might not exists\\\" (as is pointed out later in section 3.1)\", \"Line 188, eq (7): replace T_k by T_K, that is, capitalize the subindex\", \"Use either \\\"travelling\\\" or \\\"traveling\\\" consistently\\u00a0through the paper, that is, pick one option.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for reading our work and response. May we ask whether you have any further concerns? We hope we can resolve any remaining questions!\"}", "{\"metareview\": \"The authors consider the optimal transport problem and extend it for multiple input distributions. Instead of formulating as the popular multi-marginal OT (MMOT), the authors consider a cycle of OT over a pair of adjacent distributions. The Reviewers have mixed opinions on the submission. The considered problem may be an interesting contribution to computational OT. However, the Reviewers raised several concerns on the order of the cycle (or whether one can get invariant property on such order), the motivation of the proposed approach (why COT instead of MMOT), the hardness of the cycle-consistency constraint (only satisfying for the Monge problem) which leads to the question how to relax it appropriately to the Kantorovich problem as in standard OT. 
The Reviewers also raised concerns on the 1D illustration which may mislead the intuition into the high-dimensional case where permutation is required for the cycle-consistency. Thus, we think a major revision and further development are required for the proposed approach to plug it into the picture of computational OT. The authors can consider the comments of the Reviewers to improve the submission.\", \"additional_comments_on_reviewer_discussion\": \"The Reviewers raised several significant concerns on the proposed approach for extending the OT problem when there are many input measures as listed in the meta-review. Although the proposed approach may be interesting and a good additional piece for the computational OT picture, we think that a major revision and further development are required for the submission.\"}", "{\"title\": \"Any further questions?\", \"comment\": \"Thanks again for your comments! Are there any remaining concerns about our paper? We are more than delighted to address any concerns/questions you may have.\"}", "{\"comment\": \"Q1: What is the practical difference between MMOT and COT?\", \"a1\": \"MMOT is a generalized form that indeed presents difficulties when it comes to solving specific problems. When attempting to solve a particular problem using MMOT, one is required to define a specific cost, as illustrated in Remark 10.2 (P160) of \\\"Computational Optimal Transport\\\" by G Peyr\\u00e9 and M Cuturi. In contrast, for COT, the cost is defined between adjacent pairs of distributions and does not necessitate an additional, separate definition. This key distinction in cost definition between MMOT and COT has important practical implications. In situations where the problem at hand has a natural pairwise cost structure that can be directly exploited, COT provides a more straightforward and applicable solution. 
For example, in visual multi-point matching tasks, COT can directly utilize the cosine distance between learned features of points obtained from off-the-shelf graph matching neural networks as the pairwise cost, without the need for complex cost redefinition. On the other hand, MMOT might be considered in scenarios where a more generalized and elaborate cost structure needs to be incorporated, although this comes with the added complexity of formulating the appropriate cost function. However, in many practical applications, the pairwise cost nature of COT makes it a more convenient choice as it aligns well with the inherent relationships between data points or distributions in the problem domain.\", \"q2\": \"Does the order of measures impact results?\", \"a2\": \"As for $K>3$, switching the order does indeed impact the formulation of the problem. However, note our directed cyclical structure is essentially a subgraph of pairwise structure. When the latter is satisfied, the former is also satisfied, allowing our method to still improve the matching performance. We have shown the experiments of switching the second and third set order in the ablation study. It can be observed that the results with and without switching the order are actually very close to each other in Table 4.\", \"q3\": \"Is it generally true that the optimal $P_1, \\\\dots, P_K$ are permutation matrices if we only assume that $P_k \\\\in U(a_k, a_{k+1})$ in Eq. (8) rather than $P_k \\\\in \\\\{0,1\\\\}^{N \\\\times N}$?\", \"a3\": \"In the context of our matching problem, which is a special case of the transportation problem, we can use continuous values in $[0, 1]$ instead of the discrete values $\\\\{0, 1\\\\}$. 
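As a small illustrative check (our own toy code, not from the paper), running plain entropic Sinkhorn with a small regularization on a cost whose optimal assignment is known shows the continuous doubly stochastic solution concentrating on the discrete permutation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
# toy cost whose optimal assignment is clearly the identity permutation
C = 1.0 - np.eye(n) + 0.1 * rng.random((n, n))

# plain entropic OT between two uniform discrete measures:
# Sinkhorn scaling of the Gibbs kernel onto the doubly stochastic set
eps = 0.05
G = np.exp(-C / eps)
u = np.ones(n)
for _ in range(1000):
    v = 1.0 / (G.T @ u)   # match column sums to 1
    u = 1.0 / (G @ v)     # match row sums to 1
P = u[:, None] * G * v[None, :]

# brute force over all 4! discrete assignments for comparison
best = min(itertools.permutations(range(n)),
           key=lambda p: sum(C[i, p[i]] for i in range(n)))
P_perm = np.eye(n)[list(best)]

# the continuous solution concentrates on the discrete permutation
assert np.allclose(P, P_perm, atol=1e-2)
```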
In fact, in this problem, the solutions in both the discrete and continuous cases are equivalent.\", \"q4\": \"In 1D case, cycle-consistency is automatically satisfied without enforcement, making the experiment in Figure 2 barely supporting the proposed approach.\", \"a4\": \"We must admit that, as you correctly pointed out, in the 1D case, the standard OT plan is monotone and cycle-consistency is automatically satisfied without explicit enforcement. We apologize for not clarifying this in the paper. However, in our actual computations, we consider arbitrary matrices and solutions under entropy regularization. The purpose of presenting this example in Figure 2 is to illustrate the solution and the satisfaction of consistency in a more general context that is relevant to our overall approach. We understand that this may have caused some confusion.\"}", "{\"comment\": \"> If the elements of $P_k$ were to be in the general range of $(0,1)$, the constraint $\\\\prod P_k = I$ could not be satisfied.\\n\\nOh indeed! I believe that this fact should be, at the very least, discussed (if you have a reference for the proof, you can put it in the appendix, or write the proof itself---it's not that long/complicated---in the appendix). It was to me a significant caveat in the work at first glance. Now that is has been explained, I'm increasing my rating. \\n\\nFor the other point, I guess it's a matter of feeling at this stage.\", \"note\": \"I still do believe that the work needs a significant rewriting effort that would be worth another round of review, but my overall appreciation of it is better now.\"}", "{\"summary\": \"The paper considers a generalized Optimal Transport (OT) problem that has pairwise transport between multiple distributions with an added constraint to ensure the formation of a closed cycle. 
The authors present an iterative Sinkhorn algorithm to solve the Kantorovich formulation of the above-mentioned problem.\\nI think the paper is well-written and well-motivated. In particular, the alternative formulation of the TSP is very interesting, especially when one takes into account the performance and computation time. The problem itself is well-motivated with multiple applications. The numerical results often outperform, or at least stay competitive, with the state-of-the-art in all the examples.\\n\\nI think there is a strong case for the acceptance of this paper.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The quality of writing and presentation is high. Consequently, the results are presented in a clear and concise manner.\", \"While I do not think there is much originality/novelty (apart from the alternative TSP problem formulation) on the theoretical side, the improvements seen in the numerical simulations make a strong case for the significance of these results.\"], \"weaknesses\": [\"Not weaknesses per se, but I would like to see the following information included:\", \"I would like to see how sensitive the results are with respect to the optimization parameters, such as $\\\\delta$.\", \"Table 1 should have computation time. Line 471 says that running time is presented in Table 1, but I do not see it.\"], \"questions\": \"I don't have any questions in particular. However, I did notice a couple of typos (e.g., line 307 RCOT) and random capitalizations of words while reading, so I recommend that the authors perform thorough proofreading.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for clarifying some of my queries. I have no further comments.\"}", "{\"comment\": \"Thanks for reading our response and raising the score from 1 to 3. 
May we ask whether you have any further concerns? We hope we can resolve any remaining questions!\"}", "{\"summary\": \"This paper addresses the challenge of computing consistent optimal transport across multiple measures. The authors propose a cycle-consistent version of the Monge formulation, which is then relaxed to the Kantorovich formulation. Finally, it is further relaxed into an optimization problem regularized by cycle consistency and entropy, solved using an iterative Sinkhorn-like algorithm. The approach is demonstrated on several problems, including consistent point matching in computer vision and approximating solutions for the combinatorial traveling salesman problem.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The problem of consistent multiway optimal transport is compelling, and the authors effectively demonstrate its relevance through several potential applications. To my knowledge, both the formulation and approach are new.\", \"The authors present a comprehensive approach that includes Monge and Kantorovich formulations, entropic relaxation, and optimization algorithms.\", \"The connection to the Traveling Salesman Problem (TSP) is a valuable addition.\", \"The experimental results, particularly in the point matching experiment, underscore the motivation for computing cycle-consistent optimal transport.\"], \"weaknesses\": [\"The exposition could be improved. Parts of the paper are challenging to read, especially for readers without prior familiarity with the subject. See below for concrete examples.\", \"The authors define \\\"COT's Monge Formulation,\\\" \\\"COT's Kantorovich,\\\" and then introduce relaxations in Section 3.2. While these formulations appear similar to the traditional Monge-Kantorovich formulations with entropic relaxation, it's unclear if similar properties apply. For instance, does a solution to the Monge formulation in Eq. (6) exist? 
In what sense is the Kantorovich formulation in Eq. (8) a relaxation of the Monge formulation in Eq. (6)? Could you please provide, if possible, a more detailed discussion of these formulations and any conditions needed?\", \"The author's definition of cycle-consistency seems to be order-dependent, relying on the order of the probability measures \\\\alpha_k. It only accounts for consecutive pairs. Although this is briefly mentioned for applications in point matching, it is not discussed in more detail. Could you please include a discussion on the implications of this order-dependency, address how this might affect the results, or whether there are ways to mitigate this dependency?\", \"The connection to the Traveling Salesman Problem (TSP), while intriguing. Could you please explain in more detail how the COT formulation is used to represent the TSP problem, and what can be said about the approximate solution it achieves (e.g., in comparison to other ensemble-based approach)?\", \"It\\u2019s difficult to assess whether cycle-consistency is achieved exactly or approximately, in theory and in experiments, and at what rate. Could you please discuss in more detail?\", \"The ablation study in Section 4.3 is unclear. What are the \\\"certain factors\\\"? 
Could you please clarify the setup and conclusions of this study, including a more thorough discussion of the results and their implications?\"], \"additional_issues_and_comments\": [\"Abstract: Challenging to read.\", \"The first sentence of the introduction is incomplete.\", \"\\\"introduce the entropic regularization transforming the hard cycle-consistency\\\": the regularized version seems to separately include an entropy term and a cycle-consistency term, so this statement may be inaccurate.\", \"\\\"matrix-vector iterative method\\\": unclear.\", \"Line 69: What is \\\"MCTS\\\"?\", \"\\\"We generalize OT to the marginal consistent case\\\": This is confusing since \\\"multi-marginal\\\" is later described as something related but different.\", \"\\\"The Monge problem is exactly not easy to calculate and a popular improvement is the Kantorovich relaxation\\\": please revise.\", \"\\\"C is the cost matrix defined by the divergence\\\": is it limited to this C?\", \"Line 138: What is \\\"LAP\\\"?\", \"\\\"In contrast, our method employs a training-free approach that assumes consistency is satisfied on the test set, using this prior information to improve performance during inference.\\\" please clarify.\", \"Line 246: RCOT-PGD is mentioned but is not defined or explained (except in the Appendix).\", \"Line 248: What is \\\"GW\\\"?\", \"\\\"RCOT-Sinkhorn achieves cycle-consistency results\\\": Approximately or exactly cycle-consistent? 
Are there any guarantees?\", \"Figure 3: Somewhat unclear.\", \"\\\"The setting of Hyper-parameter \\\\delta'\\\": What about the entropic regularization parameter \\\\epsilon?\", \"Algorithms 1-5 are not included in the main text.\"], \"typos\": [\"Line 47: \\\"there calls\\\"\", \"Line 52: \\\"cost of three trasnsportation\\\"\", \"Line 69: \\\"we contribute\\\"\", \"\\\"is one of the simple but efficient methods\\\"\", \"Eq (6): Summation should be over k.\", \"Line 345: \\\"k<0\\\" => \\\"k<K\\\"\"], \"questions\": \"- What is the connection to multimarginal OT? The authors mention briefly in Section 2 but do not elaborate.\\n\\n============\", \"post_rebuttal\": \"Increasing my rating from 3->5\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"We appreciate your understanding of our point about MMOT. We highlight that MMOT and our COT approach differ substantially in multiple aspects. In terms of motivation, MMOT pursues a more general mathematical abstraction in handling multiple marginals, while COT is centered around ensuring transport consistency among measures, which is highly relevant and efficient for specific tasks such as visual multi-point matching and TSP. Theoretically, MMOT operates within a different framework that may not be as directly applicable to our problem domain. Algorithmically, the RCOT-Sinkhorn algorithm we devised for COT is tailored to address the unique challenges and constraints of COT, which is distinct from the typical algorithms used in MMOT. In application, COT has demonstrated its superiority in our target scenarios where the pairwise cost structure is prevalent, whereas MMOT may not be as effective in these specific applications.\", \"Thank you for your suggestion. Due to the page limit of the paper, we are unable to include an extensive discussion on this topic in the main text. 
However, as you mentioned, we have already addressed this issue in the ablation study in Section 4.3. We believe that the current presentation in the supplementary material, along with the reference to the relevant section in the main text, provides sufficient information for readers to understand the impact of measure ordering.\", \"In our matching problem, the optimal $P_1,\\\\dots,P_K$ are indeed permutation matrices. If the elements of $P_k$ were to be in the general range of $(0,1)$, the constraint $\\\\prod P_k$ could not be satisfied. Only when $P_1,\\\\dots,P_K$ are permutation matrices can this equality hold. This is a fundamental property that ensures the equivalence between our problem and Eq (8).\", \"We acknowledge your view on the experiment in Figure 2. However, it is important to note that our work is not restricted to grid-based experiments (please refer to \\\"Fast Sinkhorn I: An O (N) algorithm for the Wasserstein-1 metric\\\"). Whether the input data is 1D, 2D, or multi-dimensional, it is transformed into a cost matrix. In the 1D case, although cycle-consistency is automatically satisfied in a simple sense, our algorithm operates on the cost matrix and entropy regularization in a more general context, which is relevant to our overall approach. Figure 2 is intended to provide a basic illustration of the solution and consistency in a broader framework.\"]}", "{\"comment\": \"Thank you for your valuable comments. We have already added the proof in the appendix of the latest version. We will continue to improve our works.\"}", "{\"comment\": \"Thank you for your responses. After reviewing your rebuttal and the revised paper, I am afraid that several of my concerns remain unresolved or insufficiently addressed. Primarily:\\n\\n1) Clarity and Exposition: While the ideas presented in the paper are intriguing, the clarity and overall exposition still require significant improvement. 
Although the authors have made improvements, the paper continues to contain unclearly phrased sentences and explanations and does not yet seem ready for publication.\\n\\n2) Content-related issues:\\n- I appreciate the explanation of the relationship to MMOT and have carefully read the responses to both my questions and those of other reviewers, as well as the additional details included in the revised paper. While I now better understand that MMOT deals with a higher-dimensional tensor relating all distributions, which makes it computationally expensive, I still find the connection between the two works less clear than I would have expected, given their close relationship.\\n- The issue of order dependency remains insufficiently discussed in the paper. The experiments on ordering, such as those presented in Table 4, are relatively simple, evaluated only for small K, and do not provide sufficient insight into this critical aspect.\\n- As mentioned in my initial review, I find the TSP example intriguing, but the explanation in the paper is still unclear, making it difficult to fully understand the details and implications.\"}", "{\"summary\": \"This paper introduces a \\\"cycle-consistent\\\" optimal transport (COT) formulation: given a sequence of (say, discrete) measures $\\\\alpha_1,\\\\dots,\\\\alpha_K$, the goal is to minimize\\n$$ (P_1,\\\\dots,P_K) \\\\mapsto \\\\sum_{k=1}^K \\\\braket{C_k, P_k}$$\\nwhere $C_k$ is a cost matrix, $P_k$ should be a transportation plan between $\\\\alpha_k$ and $\\\\alpha_{k+1}$ (with the convention that $\\\\alpha_{K+1} = \\\\alpha_1$), and the $(P_k)_k$ are related through the _cycle-consistency constraint_ $\\\\prod_{k=1}^K P_k = I$.\\n\\nBecause the cycle consistency constraint is non-linear, solving the COT problem is harder than solving standard OT problems, and thus the authors propose to resort to two layers of regularization: relaxing the constraint $\\\\prod_{k=1}^K P_k = I$ using a divergence term (here, a Frobenius 
norm) and add an entropic regularization term (a common idea in computational OT for the last decade). They derive a mirror-descent-like scheme to minimize their regularized problem. \\n\\nEventually, they observe that their formulation may be adapted to resemble the celebrated Traveling Salesman Problem (TSP).\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The formulation of the COT problem is somewhat intriguing and I do believe that it may be of interest in some situations.\\n\\nThe relation with the TSP is interesting. I appreciate that the authors acknowledge the limitations of their approach and do not \\\"oversell\\\" it.\", \"weaknesses\": \"## 1. Clarity\\n\\nOverall, the paper lacks clarity in its writing. Key issues include the following:\\n\\n**Unclear or Misleading Statements:** Numerous sentences are either unclear or misleading. For example, in the abstract, it\\u2019s stated that the COT problem considers each pair of measures, which implies that one must compute the optimal transportation cost between all pairs, i.e., between each $\\\\alpha_i$ and $\\\\alpha_j$, for $1 \\\\leq i, j \\\\leq K$. However, the paper actually focuses only on transportation between adjacent measures, $\\\\alpha_k$ and $\\\\alpha_{k+1}$.\\n\\n**Inconsistent and Incorrect Notation:** Mathematical notation is inconsistently and sometimes incorrectly applied. For instance, the scalar product is denoted by < x, y >, which should be written as \\\\langle x, y \\\\rangle or using the braket package for $\\\\braket{x, y}$. Additionally, notations should be standardized\\u2014sometimes measures contain $N$ points, while other times they contain $n$. Such issues, though minor in isolation, collectively impede readability.\\n\\n**Formatting of Proofs:** The proofs in the appendix are poorly formatted, with equations split across multiple lines without necessity (for example, $d P_k$ at the end of Eq. (25) and in Eq. (26)). 
This formatting makes it difficult to review the proofs accurately.\\n\\n**Placement of Algorithms:** The algorithms are all placed in the appendix but are referenced in the main paper as if they were essential. While it\\u2019s acceptable to include optional material in the appendix, the main text should be self-contained. Therefore, the algorithms should either be included in the main paper if they are necessary or clearly marked as optional if they\\u2019re not.\\n\\n**Lack of Informative Content in Some Sentences:** Some sentences add little information. For instance, the introductory sentence, \\\"Optimal transport (...) is a tool to learn the optimal transportation between the source and target probability measures,\\\" requires prior knowledge of what optimal transportation means, offering minimal insight. Additionally, comparisons with the Gromov--Wasserstein (GW) problem are not particularly useful here, as the GW problem is fundamentally different and is not introduced in this work. Relaxing the cycling constraint to a penalty seems natural and doesn\\u2019t require extensive justification.\\n\\n## 2. Motivation of the method, comparison with multi-marginal OT, soundness, and mathematical grasp on the problem. \\n\\nThe motivation for introducing the COT problem is limited, and the authors seem to lack critical distance from their work. For example:\\n\\n**Motivation of the approach and Comparison with multi-marginal OT (MMOT):** It is regularly said that the contribution of this paper is to \\\"generalize the OT problem to more than two marginals \\\" (abstract, contributions section, etc.), but this is precisely what multi-marginal OT is about. The paper mentions multi-marginal OT and, while I understand the formal difference between the two approaches (they are different problems, for sure), I fail to see the practical difference: when should one use MMOT or COT? The paper does not give a proper answer to this central question in my opinion. 
\\n\\n**Dependence on the Order of Measures:** The formulation of the COT problem depends on the order of $\\\\alpha_1, \\\\dots, \\\\alpha_K$, yet this is not discussed. This could be crucial; for example, if a user has a set of measures from an experiment, how should they be ordered? Is the solution permutation-equivariant? (in which case I would agree that the ordering does not matter)\\n\\n**Applicability of Birkhoff\\u2019s Theorem:** The authors assume discrete uniform measures with $N$ points each, but it\\u2019s unclear whether Birkhoff\\u2019s theorem applies here. Specifically, is it generally true that the optimal $P_1, \\\\dots, P_K$ are permutation matrices if we only assume that $P_k \\\\in U(a_k, a_{k+1})$ in Eq. (8) rather than $P_k \\\\in {0,1}^{N \\\\times N}$? Understanding this is essential for motivating the adaptation to the Traveling Salesman Problem (TSP), the behavior of entropic regularization as $\\\\epsilon \\\\to 0$, and related points.\\n\\n**1D Case in Figure 2:** In Figure 2, the measures are depicted in 1D. In this case, it\\u2019s known that the standard OT plan is monotone, involving matching quantiles. Unless something is overlooked, this suggests that cycle-consistency is automatically satisfied without enforcement, making this experiment barely supporting the proposed approach. Could this be confirmed?\\n\\n### Note\\n\\nDespite my negative rating, I want to stress that I do like the proposed problem, I just feel that in its current state, the work is not ready for publication and I encourage the authors to revise it, go deeper into their understanding of the COT problem to make it a good candidate in the computational OT community.\", \"questions\": \"See Section 2. 
in the Weaknesses block.\", \"note\": \"rating updated after rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for reading our work and responding. May we ask whether you have any further concerns? We hope we can resolve your remaining questions!\"}", "{\"title\": \"Thanks for the discussion\", \"comment\": \"> A1\\n\\nI am not completely convinced by the claim that MMOT requires to define a complex cost, in that many natural costs (including those discussed in Comp. OT) are readily usable. But ok, I guess I get your point. \\n\\n> A2\\n\\nIndeed, I did not notice Table 4 in the supplementary material which already discusses this point. Sorry about that. \\n\\nNonetheless, I believe that a discussion about the need for a prior on the ordering of the measures should belong to the main material. \\n\\n> A3\\n\\nI'm not sure that I understand your answer. Let me restate my question: consider the minimization problem $\\\\min_{P_1,\\\\dots,P_K} \\\\sum_k \\\\braket{C_k, P_k}$ over the set $\\\\\\\\{(P_1,\\\\dots,P_K),\\\\ P_k \\\\in U(1_N, 1_N),\\\\ \\\\prod P_k = I \\\\\\\\}$. What ensures that, at optimality, the $(P_k)_k$ are permutation matrices? \\n\\nIn standard OT, this holds thanks to Birkhoff's theorem, which tells us that the extremal points of the set $U(1_N, 1_N)$ are exactly permutation matrices (and we are minimizing a linear functional so the optimum must be an extremal point generically). **But** here, adding the non-convex constraint $\\\\prod P_k = I$, it is not clear to me whether the optimum in the variable $(P_1,\\\\dots,P_K)$ is still a tuple of permutation matrices, which would yield the equivalence between this problem and Eq (8). 
\\n\\n> A4\\n\\nI still believe that the experiment is misleading and does not support the work as the solution displayed does not depend on whether one enforces the cycle-consistency constraint or not. Figure 3 is way more insightful.\"}", "{\"comment\": \"I thank the authors for their responses. They have added the requested references, corrected the minor issues I identified, and provided comments addressing all my questions.\"}", "{\"title\": \"Summary of Initial Reviews and Responses\", \"comment\": \"We sincerely thank the reviewers for their time and valuable feedback. As the author-reviewer discussion wraps up, here's a summary of the reviews and our efforts during this phase:\\n\\n| Reviewers' Concerns | Author Responses |\\n| --- | --- |\\n| How sensitive the results are with respect to the optimization parameters? | We have conducted comprehensive sensitivity experiments for $\\\\delta$ and $\\\\epsilon$. The results are added to Table 6 in our paper, demonstrating the robustness of our method. |\\n| Can the authors say anything about the analytic convergence of their methods? | Our algorithm is equivalent to the projection of the gradient descent algorithm. Please refer to the newly added reference in Section 4.2, where the convergence proof related to our algorithm is discussed. |\\n| Does a solution to the Monge formulation in Eq. (6) exist? | A solution does exist. We can consider a feasible solution as follows: assume that $\\\\{T_1, T_2, \\\\dots, T_{K-1}\\\\}$ are the solutions of the original MMOT problem. Then, given $x\\\\in {\\\\mathcal{X_1}}$ and $y=T_{K-1}T_{K-2}\\\\cdots T_1(x)$, we can set $x=T_K(y)$. In this way, it satisfies the conditions and thus is a feasible solution. |\\n| Does the order of the probability measures $\\\\alpha_k$ impact the results? | It is obvious that the order of measures has no impact when $K < 3$. 
For $K=3, 4$, as shown in Section 4.3, the results before and after switching the order of measures are almost the same, demonstrating that the order of measures has little impact on the results for $K=3, 4$. It is important to note that in both the datasets we utilized and the majority of practical applications within our research domain, the value of $K$ typically does not exceed $4$. This practical constraint implies that the scenarios we are primarily concerned with are well-covered by our existing experimental setup. |\\n| At what rate is cycle-consistency achieved? | In theory, the cycle-consistency should be exactly satisfied. However, after applying relaxation techniques in our approach, it is approximately achieved. The Consistent Rate (CR) metric, which we have defined in Eq. (18), provides insights into how closely our solutions approximate the ideal cycle-consistency. |\\n| The setting of Hyper-parameter $\\\\epsilon$. | We did not initially mention the setting of $\\\\epsilon$ in the original paper. However, it can be tuned in a similar manner as $\\\\delta'$. Specifically, one can replace $\\\\lambda$ with $\\\\epsilon$ in Algorithm 5 to perform the tuning process. We have now added the relevant description in the latest version of the paper to clarify this point. This way, readers can understand that the approach for adjusting $\\\\epsilon$ is analogous to that of $\\\\delta'$, providing a more comprehensive understanding of the hyperparameter settings in our methodology. |\\n| The connection between MMOT and COT. | The solution in MMOT is a tensor with the shape of $(n_1, n_2, \\\\cdots, n_K)$, while COT's solution is a collection of couplings between pairs with the shape of $(n_1, n_2), (n_2, n_3), ...$. By the method newly added in Appendix H.2, MMOT can be transformed to COT, which means that COT can leverage certain aspects of MMOT's framework while maintaining its own computational efficiency and applicability in the targeted problems. 
|\\n| Are $(\\\\mathbf{P_k})_{k=1}^K$ permutation matrices in Eq.(8)? | In our matching problem, the optimal $P_1,\\\\dots,P_K$ are indeed permutation matrices. If the elements of $P_k$ were to be in the general range of $(0,1)$, the constraint $\\\\prod P_k$ could not be satisfied. Only when $P_1,\\\\dots,P_K$ are permutation matrices can this equality hold. The detailed proof has been added to Appendix A in our paper |\"}", "{\"comment\": \"Q1: How sensitive the results are with respect to the optimization parameters?\", \"a1\": \"Thank you for highlighting the importance of analyzing the sensitivity of results to optimization parameters. We have conducted comprehensive sensitivity experiments for $\\\\delta$ and $\\\\epsilon$. The results, presented in as follows, demonstrate the robustness of our method.\\n\\n| $\\\\delta$ | $\\\\epsilon$ | ACC | CACC | CR |\\n| ---- | ---- | ---- | ---- | ---- |\\n| 0.001 | 1e-9 | 0.9412 | 0.8767 | 0.9158 |\\n| 0.001 | 1e-10 | 0.9412 | 0.8767 | 0.9158 |\\n| 0.01 | 1e-9 | 0.9442 | 0.8967 | 0.9475 |\\n| 0.001 | 1e-11 | 0.9412 | 0.8767 | 0.9158 |\\n| 0.01 | 1e-10 | 0.9442 | 0.8967 | 0.9475 |\\n| 0.01 | 1e-11 | 0.9442 | 0.8967 | 0.9475 |\\n| 0.1 | 1e-9 | 0.9382 | 0.9087 | 0.9951 |\\n| 0.1 | 1e-10 | 0.9382 | 0.9087 | 0.9951 |\\n| 0.1 | 1e-11 | 0.9382 | 0.9087 | 0.9951 |\", \"q2\": \"Some tyos and random capitalizations of words.\", \"a2\": \"Thank you for your careful review. We have gone through the paper and corrected all such errors.\"}", "{\"comment\": \"We have already added the Further discussion to the appendix. May we ask if we have addressed your concern? We would be extremely grateful if you could raise the rating.\"}" ] }
9W6Z9IeLzc
CoPS: Empowering LLM Agents with Provable Cross-Task Experience Sharing
[ "Chen Yang", "Chenyang Zhao", "Quanquan Gu", "Dongruo Zhou" ]
Sequential reasoning in agent systems has been significantly advanced by large language models (LLMs), yet existing approaches face limitations. Reflection-driven reasoning relies solely on knowledge in pretrained models, limiting performance in novel scenarios, while experience-assisted reasoning often depends on external experiences and lacks clear principles for selecting representative experiences. We address these limitations by proposing CoPS (Cross-Task Experience Sharing), a generalizable algorithm that enhances sequential reasoning by cross-task experience sharing and selection. In detail, CoPS leverages agents' experiences on previous tasks, selecting distribution-matched experiences via a provable pessimism-based strategy to maximize utility while minimizing risks from distribution shifts. Extensive experimental results on benchmarks like Alfworld, Webshop, and HotPotQA demonstrate that CoPS consistently outperforms state-of-the-art baselines, with superior sample efficiency suitable for resource-constrained scenarios. Theoretically, we show that the performance of our algorithm depends on both the quality of the pretrained LLM and the matching between the agent's task-dependent trial distribution and that generated by the LLM. Our work bridges the gap between existing sequential reasoning paradigms and validates the effectiveness of leveraging cross-task experiences, shedding light on the potential to improve agents' generalization and adaptability across diverse tasks. Our codes are released at [this link](https://anonymous.4open.science/r/AlphaMemory-05CA).
[ "Agent", "LLM" ]
Reject
https://openreview.net/pdf?id=9W6Z9IeLzc
https://openreview.net/forum?id=9W6Z9IeLzc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vHmgHBQyTU", "uCKmyDVFmL", "tvgmuUsdU7", "siK3O2tgau", "s8xgoY4y3E", "psykdphFA9", "o9kfxvv0SL", "jalpZXcDan", "ifkGsA17Bt", "hdKISbWo0O", "gLRM1JjCWN", "b1Aqkv8sfO", "YdUqHXyFMm", "W4Wm6zhQ2F", "VzHbK7jr4K", "Sqb13zx4xk", "ReIbeyuj1e", "Rc3OZpKTrU", "R3fIhJOzG3", "R0V1LpVx59", "J82LMuqJZR", "FO1uGMU2lG", "EXIFpSa6tB", "Blteoxzp0U", "BES5xhQAnz", "AZhWVPw1LL", "9BPIKxzNsu", "8QZe8YI6ye", "7KgIvd8slF", "6gbFplFWZg", "6XE7JnxNlD", "66saydAQIH", "5Y2V8OB29q", "499FC3uvHK", "3iNVHIhad3", "0sS43oeu0m", "0QWzCSC07U" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732428145156, 1732501216405, 1730744288420, 1732426444559, 1732425972755, 1730704708260, 1732744978852, 1732501341430, 1733180316352, 1732432257382, 1732425383272, 1732428288604, 1732425192345, 1737524203187, 1732426857646, 1730787630467, 1732426826265, 1732426005549, 1732425015887, 1733166889966, 1732616566318, 1732744845105, 1732427789170, 1733166764500, 1732484054924, 1734558932288, 1732745100516, 1733181121793, 1733167004547, 1732649788103, 1733013266627, 1732426776101, 1732426479603, 1733013293842, 1730718709623, 1733165544151, 1732501311917 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12611/Reviewer_4n3a" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_ct7E" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_QJKw" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_4n3a" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_QJKw" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_CYSd" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Area_Chair_7FfG" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_CYSd" ], [ "ICLR.cc/2025/Conference/Submission12611/Reviewer_ct7E" ], [ "ICLR.cc/2025/Conference/Submission12611/Authors" ] ], "structured_content_str": [ 
"{\"title\": \"Reply to Reviewer QJKw (Comment 7 ~ Question 2)\", \"comment\": \"## Comment 7\\n\\n> *Since this is such an activate area of research, it would be good to show how baselines discussed in the related works section actually perform. It is less convincing to read, \\u201cHowever, their approach demonstrated poor sample efficiency, making it less suited for real-world agent settings where opportunities for trial and error are limited\\u201d instead of seeing a plot with number of samples on the x-axis and success rate on the y-axis for this method.*\\n\\n## Response\\n\\n Thank you for this valuable suggestion. We sincerely apologize if the statement appeared unsubstantiated. Actually, we indeed have conducted a rigorous quantitative analysis to support this claim. **As presented in Table 3 of our paper (we also attach the same table below), the sample efficiency of LATS is quantitatively five times lower than that of CoPS (i.e., LATS requires 5 times more samples/tokens than CoPS).** This substantial difference highlights the advantages of our approach in scenarios where opportunities for trial-and-error learning are constrained, underscoring its suitability for real-world agent settings.\\n\\n| **Algorithm** | **Reflexion** | **RAP** | **LATS** | **CoPS** |\\n| ---------------- | ------------- | ------- | -------- | -------- |\\n| **Llama3.1 8B** | 159131 | 107504 | 1555365 | 314336 |\\n| **Llama3.1 70B** | 125406 | 109245 | 1058752 | 113849 |\\n\\nWe also greatly appreciate your recommendation to include a plot for enhanced visualization. While Table 3 provides a detailed numerical comparison, we agree that a plot showing sample efficiency\\u2014with the number of samples on the x-axis and success rate on the y-axis\\u2014would offer additional insight and make our findings more accessible. In future iterations, we will prioritize including such visualizations to improve the clarity and impact of our analysis. 
Thank you again for this constructive feedback.\\n\\n## Question 1\\n\\n> *What encoders are used for generating embeddings of start states?*\\n\\n## Response\\n\\n Thank you for your question. We employ the Alibaba-NLP/gte-Qwen2-7B-instruct embedding model as our primary encoder for generating embeddings of start states. This model is combined with a ReAct-style prompt, which provides the agent with clear and concise formatting instructions to guide its behavior. This design ensures straightforward implementation across various benchmarks. After the agent completes its initial round of experience, we transition to fully leveraging our embedding model for subsequent operations, allowing for more efficient and robust retrieval and adaptation.\\n\\n## Question 2\\n\\n> *Have you tried utilizing both start state and resultant actions as input to this embedding model? This would encapsulate a demonstration policy instead of just the start state distribution.*\\n\\n## Response\\n\\nWe appreciate this insightful suggestion. We do incorporate resultant actions by defining the state to include the entire history of a single trajectory. This design inherently captures both the start state and the sequence of actions, effectively encoding the demonstration policy into our retrieval model and the agent's in-context prompt. This holistic representation ensures that both the initial conditions and the resultant actions are utilized, enhancing the agent's ability to generalize and learn from demonstrations.\"}", "{\"title\": \"Follow-up on Rebuttal Responses for Submission #12611\", \"comment\": \"Dear Reviewer QJKw,\\n\\nWe hope this email finds you well. Thank you again for your thoughtful feedback on our submission. We deeply appreciate the time and effort you\\u2019ve put into reviewing our work.\\n\\nWe have carefully addressed all the comments and concerns you raised in your review. 
If our responses have clarified your concerns and resolved the issues you identified, would you please consider reflecting on your overall evaluation and score? We are happy to provide further clarification or discuss any remaining questions if needed.\\n\\nThank you once again for your time and thoughtful input. Your feedback has been invaluable in improving our work.\\n\\nBest regards,\\nAuthors of Submission #12611\"}", "{\"summary\": \"The paper proposes a method for utilizing experience from cross-tasks in order to improve performance. The proposed method samples experience from a probability distribution that captures reward and distance between experiences. Results are provided for widely studied ALFWorld, Webshop and HotPotQA environments. Authors also propose a theoretical framework for agents who are utilizing prior experience for performance improvement.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written and easy to follow\", \"The paper focuses on an important aspect of using cross-task experience to improve performance in sequential decision making.\"], \"weaknesses\": [\"The paper fails to discuss relevant literature that uses cross-task experience for improving performance. For example O3D paper by Xiao et al. https://openreview.net/pdf?id=bkY8zEDdH9 proposes a method that uses offline trajectories from all tasks to distill knowledge for performance improvement.\", \"Authors only provide results on 2 models from Llama family. How well does this method work with SOTA models such as those from the GPT or Claude family?\", \"The main contribution of the paper is proposing a method that benefits from cross-task experience. However, authors fail to illustrate this in given experimental results. What is the contribution of cross-task experiences (compared to same task experience) in the provided performance improvement?\"], \"questions\": [\"Please refer to the weaknesses section for main questions. 
More minor questions are given below\", \"How is the reward r calculated in this work?\", \"How is the distance between experiences calculated?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer CYSd (Comment 1)\", \"comment\": \"# Reply to Reviewer CYSd\\n\\nWe sincerely thank you for your thorough and insightful feedback, which has significantly contributed to improving the quality and clarity of our paper. We have carefully addressed all comments and concerns in our revised submission and provided detailed responses to each point. If there are any remaining questions or further points for clarification, please do not hesitate to raise them. We greatly value the opportunity to engage in further discussions and refine our work based on your suggestions. Thank you once again for your time and effort in reviewing our paper.\\n\\n## Comment 1\\n\\n> *This paper aims to share cross-task experiences from a memory bank. However, it is rather unclear how to construct cross-task experiences in the memory bank. More specifically, how is offline data collected? Which LLMs are used as a policy to generate offline data? How many experiences are required to achieve the performance provided in the paper? How much different tasks can be used together to take advantages of cross-task experience sharing?*\\n\\n## Response\\n\\nThank you for raising these important questions. We address them as follows:\\n\\nIn our theoretical analysis, *CoPS* leverages a pessimism-based strategy to compute similarity and select experiences, which can be sourced from either online or offline data collection. Both approaches with different experience sources have the potential to yield significant performance improvements. However, due to time and resource constraints, we focused solely on online experience collection for our experiments. 
In fact, the goal of *CoPS* is not to optimize the offline data collection but to highlight the general applicability of our cross-task experience selection strategy. Despite using data collected online with a relatively modest LLaMA 3.1 8B Instruct model, our results already demonstrate significant effectiveness of *CoPS*. We believe that incorporating high-quality offline experiences would further enhance the performance, but even with resource-limited online data, *CoPS* achieves substantial improvements, which highlights its practical usage for realistic agent applications.\\n\\nFor the number of experiences required, we used only 5 in-context experiences to achieve the performance reported in the paper. This efficiency demonstrates the power of *CoPS* in resource-constrained settings. The specific hyperparameter settings for different benchmarks and model sizes are summarized in Table 5 in our paper, which is also provided here for reference:\\n\\n| **Benchmark** | **Alfworld** | **Webshop** | **HotPotQA** |\\n|----------------------|--------------|-------------|--------------|\\n| **LLaMA 3.1 8B** | \\\\( k = 5, c = 5 \\\\) | \\\\( k = 5, c = 0 \\\\) | \\\\( k = 5, c = 5 \\\\) |\\n| **LLaMA 3.1 70B** | \\\\( k = 5, c = 5 \\\\) | \\\\( k = 5, c = 0 \\\\) | \\\\( k = 5, c = 0 \\\\) |\\n\\nNote that $c$ is the scaling factor in the following equation used to calculate the distance between experiences, and $k$ is the number of in-context experiences used as demonstrations:\\n\\n$$\\nd(\\\\tau, \\\\tau'):= c\\\\cdot\\\\text{cos}(e(\\\\tau), e(\\\\tau')),\\\\ \\\\hat p(\\\\tau) \\\\propto r(\\\\tau)\\\\cdot\\\\exp(-d(\\\\tau, \\\\tau^{s_1}))\\n$$\\n\\nRegarding the number of tasks, as demonstrated above, we used 5 tasks for cross-task experience sharing, which already delivered significant performance improvements. 
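For concreteness, the selection rule above can be sketched in a few lines of Python. This is a minimal illustration, assuming trajectories are stored as (embedding, reward) pairs with plain-list embeddings; the function and variable names are illustrative and not taken from the CoPS codebase:

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def selection_probs(bank, query_emb, c=5.0):
    # weight each stored trajectory by r(tau) * exp(-c * cos(e(tau), e(tau^{s_1})))
    # as in the equation above, then normalize into a sampling distribution
    weights = [r * math.exp(-c * cosine(emb, query_emb)) for emb, r in bank]
    total = sum(weights)
    return [w / total for w in weights]

# toy memory bank of (embedding, reward) pairs; reward is 1 for success, 0 for failure
bank = [([1.0, 0.0], 1), ([0.0, 1.0], 1), ([0.5, 0.5], 0)]
probs = selection_probs(bank, query_emb=[1.0, 0.0], c=5.0)
```

With `c = 0`, as used for Webshop, the cosine term drops out and sampling is uniform over successful trajectories; failed trajectories (reward 0) always receive zero probability mass.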
While we anticipate that incorporating more tasks/experiences could further enhance the results, the fact that just 5 tasks achieved such gains highlights the efficiency and practicality of *CoPS*, making it highly suitable for real-world, resource-constrained applications.\"}", "{\"title\": \"Reply to Reviewer 4n3a (Comment 1 ~ Comment 2)\", \"comment\": \"# Reply to Reviewer 4n3a\\n\\nWe sincerely thank your thorough and insightful feedback, which has significantly contributed to improving the quality and clarity of our paper. We have carefully addressed all comments and concerns in our revised submission and provided detailed responses to each point. If there are any remaining questions or further points for clarification, please do not hesitate to raise them. We greatly value the opportunity to engage in further discussions and refine our work based on your suggestions. Thank you once again for your time and effort in reviewing our paper.\\n\\n## Comment 1\\n\\n> *The paper fails to discuss relevant literature that uses cross-task experience for improving performance. For example O3D paper by Xiao et al. https://openreview.net/pdf?id=bkY8zEDdH9 proposes a method that uses offline trajectories from all tasks to distill knowledge for performance improvement.*\\n\\n## Response\\n\\nThank you for highlighting this paper and bringing it to our attention. We recognize the importance of engaging with relevant literature, including the work by Xiao et al. on leveraging offline trajectories across tasks for knowledge distillation and performance improvement.\\n\\nO3D introduces an innovative offline learning framework that leverages cross-task experience through skill discovery and knowledge distillation. O3D demonstrates the ability to generalize across tasks without requiring model fine-tuning, which significantly reduces deployment costs and complexity. 
By segmenting trajectories and extracting reusable skills, O3D achieves notable performance improvements in downstream tasks. Its flexible prompt engineering approach makes it highly adaptable across diverse domains, and its impressive results in complex environments such as ALFWorld and WebShop underscore its robustness and usability.\\n\\nWhile O3D excels in offline data utilization and skill discovery, *CoPS* addresses a critical gap by focusing on a pessimism-based strategy for cross-task experience selection, effectively mitigating risks associated with distribution shifts. This strategy optimizes the utility of shared experiences and adapts dynamically to online and offline settings. Moreover, *CoPS* demonstrates remarkable usability with just a few lines of additional retrieval codes, achieving high performance even in constrained computational settings, such as smaller models or limited infrastructure, while significantly enhancing sample efficiency.\\n\\nO3D and *CoPS* are complementary in their design philosophies. O3D emphasizes a bottom-up approach, focusing on skill discovery and knowledge distillation to extract generalizable insights from large-scale offline data. In contrast, *CoPS* introduces a theoretically grounded perspective on distribution-matched experience selection, making it highly effective in dynamic and resource-constrained environments. Together, O3D and *CoPS* form a comprehensive suite of solutions for cross-task learning and experience sharing, advancing the frontier of LLM applications in multi-task decision-making scenarios.\\n\\n**We've already added the discussion between O3D and CoPS in our revised paper's related work, and we use green color to highlight it.**\\n\\n------\\n\\n## Comment 2\\n\\n> *Authors only provide results on 2 models from Llama family. How well does this method work with SOTA models such from GPT or Claude family.*\\n\\n## Response\\n\\nThank you for pointing this out. 
We have added additional experiments to evaluate the performance of *CoPS* based on SOTA closed-source GPT and Claude models. The detailed performance is shown below. From the results, we find that *CoPS* works well with these closed-source models and achieves reasonably high performance compared with open-source models.\\n\\n**We've already added the discussion in our revised paper's Appendix F \\\"PERFORMANCE ON CLOSE-SOURCED MODELS,\\\" and we use green color to highlight it.**\\n\\n| **Model** | **Alfworld** | **Webshop** | **HotPotQA** |\\n| --------------------- | ------------ | ----------- | ------------ |\\n| **GPT-4o** | 100 | 56 | 67 |\\n| **Claude 3.5-Sonnet** | 100 | 58 | 66 |\\n\\n------\"}", "{\"summary\": \"This paper proposes CoPS (Cross-Task Experience Sharing), an algorithm that aims to enhance LLM-based agents\\u2019 sequential reasoning by leveraging experiences across different tasks. The method uses a pessimism-based strategy to select relevant experiences from a memory bank while minimizing distribution shift risks. The authors evaluate CoPS on three benchmarks (Alfworld, Webshop, HotPotQA) and claim superior performance compared to baselines like Reflexion and RAP.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The core idea of leveraging cross-task experiences for LLM agents is novel and potentially impactful. The pessimism-based selection strategy provides a theoretically grounded approach to experience sharing.\", \"The implementation is relatively straightforward and generalizable across different environments, requiring minimal task-specific modifications.\", \"The empirical results show promising performance improvements, particularly with smaller models like Llama 3.1 8b, suggesting potential resource efficiency benefits.\"], \"weaknesses\": \"1. The idea seems similar to the Retrieval-Augmented Generation (RAG) technique. 
The paper lacks a comparison and discussion with traditional RAG approaches that also leverage external knowledge for LLM enhancement. The pessimism-based selection strategy could be better positioned against existing RAG retrieval methods like hybrid search and recursive retrieval [1].\\n2. The relationship to LLM agent memory mechanisms is insufficiently explored. For instance, no comparison is made with memory bank approaches that handle both short-term and long-term memory [2,3,4]. The paper should discuss how CoPS differs from or improves upon existing memory management solutions in LLM agents.\\n3. The current implementation lacks consideration of hybrid memory architectures that combine both short-term and long-term memory components. The system could benefit from incorporating recent advances in memory management like episodic memory modules or hierarchical attention mechanisms. Also, the paper doesn't address how the experience selection strategy could be enhanced with modern RAG techniques like recursive retrieval or adaptive retrieval mechanisms.\\n4. No ablation studies comparing different memory retrieval strategies (e.g., semantic search vs. keyword-based vs. hybrid approaches). Missing evaluation of memory retention and recall over extended periods, which is crucial for long-term agent deployment. Limited analysis of how the system handles memory updates and forgetting mechanisms compared to other memory-augmented LLM approaches [3].\\n5. Presentation could be improved. For instance, Fig. 1 is not given an adequate explanation, and it is hard to understand what the example task means.\", \"refs\": \"[1] Retrieval Augmented Generation (RAG) for LLMs https://www.promptingguide.ai/research/rag\\n\\n[2] Zhang, Zeyu, et al. 
\\\"A survey on the memory mechanism of large language model based agents.\\\" arXiv preprint arXiv:2404.13501 (2024).\\n\\n[3] MemoryBank: Enhancing Large Language Models with Long-Term Memory, https://ojs.aaai.org/index.php/AAAI/article/view/29946\\n\\n[4] Wang, Guanzhi, et al. \\\"Voyager: An open-ended embodied agent with large language models.\\\" arXiv preprint arXiv:2305.16291 (2023).\\n\\n[5] A Survey on Retrieval-Augmented Text Generation for Large Language Models - NASA ADS https://ui.adsabs.harvard.edu/abs/2024arXiv240410981H/abstract\", \"questions\": \"Apart from the weakness section, I also have a few questions:\\n\\n1. How is the task defined? It is a bit unclear to me in the experiments -- does the author use the experience from the same benchmark or from other benchmarks as well?\\n\\n2. How is the sampled experience number selected? How would the number affect the performance? Note that more sampled experience will result in more context length.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow-up on Submission #12611 Rebuttal Responses\", \"comment\": \"Dear Reviewer CYSd,\\n\\nWe hope this message finds you well. Thank you again for your thoughtful feedback and for taking the time to review our work. We sincerely appreciate your detailed comments, particularly on cross-task experience sharing.\\n\\nTo address your request for clarification, we have provided additional explanations in the comments section, including a detailed example from AlfWorld. This example aims to better illustrate how cross-task experiences are constructed, retrieved, and applied in the CoPS framework. We hope this expanded discussion helps clarify the methodology and its implementation.\\n\\nIf there are any remaining questions or aspects that require further clarification, we would be more than happy to address them. 
We greatly value your insights, which have been instrumental in improving our work, and we are committed to refining it further based on your feedback.\\n\\nThank you once again for your thoughtful review and your time.\\n\\nBest regards,\\nAuthors of Submission #12611\"}", "{\"title\": \"Follow-up on Rebuttal Responses for Submission #12611\", \"comment\": \"Dear Reviewer ct7E,\\n\\nWe hope this email finds you well. Thank you again for your thoughtful feedback on our submission. We deeply appreciate the time and effort you\\u2019ve put into reviewing our work.\\n\\nWe have carefully addressed all the comments and concerns you raised in your review. If our responses have clarified your concerns and resolved the issues you identified, would you please consider reflecting on your overall evaluation and score? We are happy to provide further clarification or discuss any remaining questions if needed.\\n\\nThank you once again for your time and thoughtful input. Your feedback has been invaluable in improving our work.\\n\\nBest regards, Authors of Submission #12611\"}", "{\"comment\": \"Thanks for your rebuttal. Due to limited novelty, a restricted experimental setting and a somewhat unclear choice for the setting of the theoretical analysis, I will keep my score.\"}", "{\"comment\": \"Thank you for addressing the concerns. I have increased my score.\"}", "{\"title\": \"Reply to Reviewer QJKw (Comment 5 ~ Question 6)\", \"comment\": \"## Comment 5\\n\\n> *Small experimentation setup: The experiments section currently reads like that of a paper written in 2022 (which in LLM-application research is a significant period). No error bars/multiple seed runs are reported. The analysis is performed on 3 somewhat outdated benchmarks. This should be expanded. Why are some baselines missing for some of the benchmarks? For example, Table 3 does not report RAP performance. 
LATS is missing in Table 1?*\\n\\n## Response\\n\\n**Error Bars:** \\n\\nWe acknowledge the importance of including error bars and multiple seed runs for robustness. The mean and standard deviation of repeated experiment results on all three benchmarks are as follows:\\n\\n| **Benchmark** | **Model** | **Mean** | **Std** |\\n| ------------- | --------- | -------- | ------- |\\n| HotpotQA | 8B | 53.6 | 1.5 |\\n| HotpotQA | 70B | 62.8 | 1.3 |\\n| Webshop | 8B | 47.2 | 1.6 |\\n| Webshop | 70B | 51.2 | 2.7 |\\n| Alfworld | 8B | 93.6 | 1.0 |\\n| Alfworld | 70B | 100.0 | 0.0 |\\n\\nThese results are incorporated into our revised version in Appendix E \\\"REPEATED EXPERIMENTS\\\" and use dark blue to highlight this.\\n\\n**Outdated Benchmarks:**\\n\\nThank you for pointing this out. While the benchmarks we used may appear dated, their selection follows the standard practice established by Zhou et al. (2023), which remains a widely accepted baseline for evaluating LLM agents. That said, we fully acknowledge the importance of incorporating newer benchmarks and are committed to extending our evaluation on AgentBench in the camera-ready version.\\n\\n**Missing Baselines (LATS and RAP):** \\n\\nThe absence of LATS and RAP in Tables 1 and 3 stems from the highly specialized nature of their implementations, which require extensive benchmark-specific adaptations:\\n\\n- **LATS** relies on carefully designed evaluation prompts tailored to each benchmark for optimal performance. Implementing LATS in Alfworld would require manually crafting a large number of prompts, a process that was unfortunately infeasible within the constraints of our time and resources. \\n- **RAP** necessitates manual segmentation of trajectories into multiple stages, with segmentation methods varying significantly across benchmarks. 
Implementing RAP for HotpotQA would have required extensive manual effort to develop benchmark-specific segmentation strategies, which was beyond our available resources.\\n\\nWhile we acknowledge the value of including these baselines for comparison, their omission is due to the significant manual overhead required for their benchmark-specific adaptations. Instead, we focused our efforts on thoroughly evaluating the proposed method across diverse benchmarks. We believe this decision provides a meaningful demonstration of the generality and effectiveness of *CoPS*. However, we recognize the importance of including more baselines in future work and will prioritize strengthening the comprehensiveness of our evaluations.\\n\\n\\n## Comment 6\\n\\n> *Why is the pretraining performance being considered in your analysis? My understanding was that no LLMs had been pretrained for this work.*\\n\\n## Response\\n\\n Thank you for raising this important question. The pretraining analysis is included in our work to underscore the foundational role that pretraining plays in the design and performance of LLMs. While we did not conduct additional pretraining for this study, the LLMs utilized\\u2014specifically from the Llama3 model family\\u2014are pretrained on massive and meticulously curated datasets, as documented in Dubey et al. (2024).\\n\\nThis foundational pretraining forms a critical assumption underpinning our theoretical framework and serves as the basis for our empirical evaluations. **By aligning our analysis with the idealized pretraining data distribution, we aim to provide a coherent and theoretically grounded interpretation of our results.** We hope this clarification addresses your concern, and we would be happy to elaborate further if needed.\"}", "{\"title\": \"Reply to Reviewer ct7E (Question 1 ~ Question 2)\", \"comment\": \"## Question 1\\n\\n> *How is the task defined? 
Does the author use experiences from the same benchmark or from other benchmarks as well?*\\n\\n## Response\\n\\nIn our experiments, tasks are defined within the context of the AlfWorld benchmark, which includes multiple tasks where the agent operates in a household setting via prompts. All tasks share cross-task experiences strictly within the boundaries of the same benchmark. We do not utilize experiences from other benchmarks since tasks between benchmarks are unrelated, and experience similarity is very low.\\n\\n---\\n\\n## Question 2\\n\\n> *How is the sampled experience number selected? How would the number affect the performance?*\\n\\n## Response\\n\\nTo explore the impact of the number of sampled experiences, we conducted a detailed ablation study, which is discussed in Appendix B of our paper. The results reveal trade-offs between performance and context length as the number of sampled experiences increases. For smaller models, a balance is achieved around \\\\( k = 3 \\\\). Additional experiences beyond this threshold offer minimal gains, highlighting practical guidance for deployment. **We have highlighted Appendix B with orange for your reference.**\"}", "{\"title\": \"Reply to Reviewer QJKw (Comment 4)\", \"comment\": \"## Comment 4\\n\\n> *Unclear what the quality of the demonstration dataset is. This brings me to my second point, it\\u2019s unclear what the quality of demonstrations utilized is. Do you only have successful trajectories in your demonstration? What if you only used suboptimal demonstrations? No such analysis is conducted.*\\n\\n## Response\\n\\n\\nWe appreciate your insightful question regarding the quality of the demonstration dataset and its potential impact on our approach. To clarify, in our realistic implementation of *CoPS*, we only utilized successful trajectories following other related works. 
However, in our theoretical improvements, we use the measurement we designed in the following equation:\\n\\n$$\\n\\\\hat{p} = \\\\arg\\\\max_{p \\\\in \\\\Delta(\\\\mathcal{D})} \\\\mathbb{E}_{\\\\tau \\\\sim p}[r(\\\\tau)] - d(p, \\\\text{Decoder}(\\\\cdot \\\\mid s_1))\\n$$\\n\\nThis equation considers both the successful and failed trajectories and calculates the similarity between the experience and our current task. However, in realistic implementation, the trajectories that gain high similarity scores are successful, thus we only utilize successful trajectories due to limited compute budgets.\\n\\nTo further address your concern about the impact of suboptimal demonstrations, we conducted an ablation study on the Alfworld benchmark, comparing top-k and bottom-k successful trajectories ranked by the similarity score. The results are summarized below:\\n\\n| **Retrieval Method** | **Performance (Success Rate %)** |\\n| -------------------- | -------------------------------- |\\n| Top-5 | 93.6 \\u00b1 1.0 |\\n| Bottom-5 | 83.0 \\u00b1 3.9 |\\n\\nThese results demonstrate that the quality of retrieved demonstrations significantly affects performance, with top-k successful trajectories outperforming bottom-k successful trajectories by a substantial margin. This underscores the importance of selecting high-quality trajectories. We hope this analysis addresses your concerns and highlights the robustness of our framework. Please let us know if there are additional aspects you would like us to elaborate on. We also add these discussions in our Appendix D \\\"QUALITY OF DEMONSTRATIONS\\\" and use dark blue to highlight this.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply to Reviewer ct7E (Comment 3 ~ Comment 4)\", \"comment\": \"## Comment 3\\n\\n> *No ablation studies comparing different memory retrieval strategies (e.g., semantic search vs. keyword-based vs. hybrid approaches). 
Missing evaluation of memory retention and recall over extended periods, which is crucial for long-term agent deployment. Limited analysis of how the system handles memory updates and forgetting mechanisms compared to other memory-augmented LLM approaches[3].*\\n\\n### Memory Retrieval Strategies\\n\\nThank you for these valuable observations. We think our approach can be categorized into semantic search methods. To evaluate different memory retrieval strategies, we performed an ablation study on the AlfWorld benchmark with the LLaMA 3.1 8B Instruct model. The results are summarized below:\\n\\n| **Retrieval Method** | **Success Rate (%)** |\\n|------------------------------------------|------------------------|\\n| **Semantic Search (embedding model)** | 93.6 \\u00b1 1.0 |\\n| **Keyword-Based (BM25)** | 94.1 \\u00b1 1.2 |\\n| **Hybrid (BM25 + Short Summarization Embedding)** | 91.3 \\u00b1 1.4 |\\n\\nThese results indicate that semantic search and keyword-based approaches perform comparably well, whereas the hybrid approach shows a slight performance drop, potentially due to the added complexity of combining methods. **We have included this ablation study in Appendix H \\\"IMPACT OF RETRIEVAL METHODS,\\\" highlighted with orange for your reference.**\\n\\n---\\n\\n### Forgetting Mechanisms\\n\\nThank you for highlighting the importance of memory management and forgetting mechanisms, especially for long-term agent deployment. In our initial experiments, we assumed a sufficiently large memory bank and did not model forgetting. 
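One lightweight way to bound memory is a fixed-capacity buffer with first-in-first-out eviction. The sketch below is a hypothetical stand-in for such a constrained-memory setting; the class name and FIFO policy are illustrative assumptions, as CoPS itself does not prescribe an eviction rule:

```python
from collections import deque

class BoundedMemoryBank:
    # fixed-capacity experience store: once full, the oldest experience is
    # forgotten first (FIFO); this eviction policy is illustrative only
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def add(self, experience):
        self._buf.append(experience)  # deque silently evicts the oldest at capacity

    def experiences(self):
        return list(self._buf)

bank = BoundedMemoryBank(capacity=5)
for i in range(8):
    bank.add(f"trajectory-{i}")
# only the 5 most recent trajectories remain: trajectory-3 .. trajectory-7
```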
To address this concern, we conducted a new ablation study on the AlfWorld benchmark with varying memory sizes to evaluate the system's robustness under constrained memory conditions.\\n\\n| **Memory Size** | **Success Rate (%)** |\\n|--------------------|-----------------------|\\n| **50** | 95.6 \\u00b1 2.7 |\\n| **10** | 94.0 \\u00b1 1.1 |\\n| **5** | 87.2 \\u00b1 0.8 |\\n\\nThese results demonstrate that *CoPS* maintains robust performance even with constrained memory sizes, with only a slight drop in success rate when the memory size is reduced from 50 to 10. However, significant reductions in memory size (e.g., to 5) lead to performance degradation due to more aggressive forgetting of potentially useful experiences. **We have included this ablation study in Appendix I \\\"IMPACT OF MEMORY SIZE,\\\" highlighted with orange for your reference.**\\n\\nWe believe that further exploration of advanced forgetting mechanisms and memory retention strategies could further improve the system's performance and scalability in long-term deployments. We appreciate your insightful suggestions and will prioritize these directions in future work.\\n\\n---\\n\\n## Comment 4\\n\\n> *Presentation could be improved. For instance, Fig. 1 does not provide adequate explanation, and it is hard to understand what the example task means.*\\n\\n## Response\\n\\nThank you for pointing this out. We apologize if the figure was not adequately explained and appreciate the opportunity to clarify.\\n\\nFigure 1 illustrates the key distinction between standard approaches (\\\"Others\\\") and our proposed method (*CoPS*) in solving a task through cross-task experience sharing. The example task is to \\\"put some vase in a safe,\\\" and the environment is initialized with various objects and locations, such as shelves, drawers, and safes.\\n\\n- **Standard Approaches:** The agent attempts to achieve the task by making decisions based solely on immediate feedback from the environment. 
In the example shown, the agent navigates to \\\"shelf 6\\\" but fails to achieve the task as it does not leverage prior experiences to guide its actions.\\n\\n- **CoPS:** In contrast, our approach enables the agent to query a memory bank containing cross-task experiences. By retrieving relevant prior experiences, the agent formulates a more informed decision, such as \\\"put vase 2 in/on safe 1,\\\" successfully completing the task in fewer steps.\\n\\nWe will ensure these explanations and a better figure are explicitly included in the camera-ready version of our paper to improve the presentation and clarity.\\n\\n---\"}", "{\"summary\": \"The authors propose COPS, a method which utilizes an offline demonstration dataset of \\u201cexperiences\\u201d and study how to use these experiences to solve downstream embodied and question-answering tasks. The authors motivate the work theoretically, drawing parallels to works in distribution selection and retrieval. Subsequently, they demonstrate the performance of COPS on AlfWorld, HotPotQA and WebShop.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Strengths:\\n1. The paper is well written and easy to understand\\n2. The theoretical section is clear with well-defined notations used consistently through the manuscript.\", \"weaknesses\": \"Weaknesses:\\n1. **Differences between COT/Few shot prompting?** The method is very similar to any form of few shot prompting and similar retrieval augmented generation works. The authors utilize an embedding model to calculate similarity between a current starting state and one sampled from the experience dataset. Additionally, their measure function for selecting the distribution to sample experiences from is a combination of reward and the similarity between the current start state and those in the dataset as defined by the embedding model. This to me, feels like a simple extension of RAG style methods with the additional reward label. 
This is interesting, yet on its own it is not novel enough, especially because none of the necessary offline RL analysis is conducted on this optimization. For instance, does this result in trajectory stitching, i.e., does the actor combine multiple subtrajectories in a demonstration to yield \\u201cstitched behaviors\\u201d? Can the agent outperform the demonstrations provided in the dataset? \\n2. **Unclear what the quality of the demonstration dataset is:** This brings me to my second point: it\\u2019s unclear what the quality of the demonstrations utilized is. Do you only have successful trajectories in your demonstration? What if you only used suboptimal demonstrations? No such analysis is conducted. \\n3. **Small experimentation setup:** The experiments section currently reads like that of a paper written in 2022 (which in LLM-application research is a significant period). No error bars/multiple seed runs are reported. The analysis is performed on 3 somewhat outdated benchmarks. This should be expanded. Why are some baselines missing for some of the benchmarks? For example, Table 3 does not report RAP performance. LATS is missing in Table 1?\\n4. Why is the pretraining performance being considered in your analysis? My understanding was that no LLMs had been pretrained for this work.\\n5. Since this is such an active area of research, it would be good to show how baselines discussed in the related works section actually perform. It is less convincing to read, *\\u201cHowever, their approach demonstrated poor sample efficiency, making it less suited for real-world agent settings where opportunities for trial and error are limited\\u201d* instead of seeing a plot with number of samples on the x-axis and success rate on the y-axis for this method.\", \"questions\": \"1. What encoders are used for generating embeddings of start states?\\n2. Have you tried utilizing both start state and resultant actions as input to this embedding model? 
This would encapsulate a demonstration policy instead of just the start state distribution.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer ct7E (Comment 2)\", \"comment\": \"## Comment 2\\n\\n> *The relationship to LLM agent memory mechanisms is insufficiently explored. For instance, no comparison is made with memory bank approaches that handle both short-term and long-term memory [2,3,4]. The paper should discuss how CoPS differs from or improves upon existing memory management solutions in LLM agents. The current implementation lacks consideration of hybrid memory architectures that combine both short-term and long-term memory components. The system could benefit from incorporating recent advances in memory management like episodic memory modules or hierarchical attention mechanisms. Also, the paper doesn't address how the experience selection strategy could be enhanced with modern RAG techniques like recursive retrieval or adaptive retrieval mechanisms.*\\n\\n## Response\\n\\nThank you for these insightful comments and for highlighting the potential connections between *CoPS* and existing memory management mechanisms. We agree that exploring these relationships represents an important direction for future work and could further enhance the applicability of our framework.\\n\\nNonetheless, the primary focus of *CoPS* is to facilitate cross-task experience sharing by leveraging the similarities between experiences, without explicitly incorporating or requiring long-term or short-term memory management into its current implementation. As such, our work focuses on designing a generalizable experience selection strategy rather than addressing memory organization or management in detail. Therefore, comparisons with memory bank approaches or hybrid memory architectures are beyond the scope of this paper. 
However, we recognize the significant opportunities for synergies between these areas.\\n\\nFor instance, integrating *CoPS* with hybrid memory systems that combine short-term and long-term memory could improve its ability to retain and retrieve task-relevant experiences across extended interactions. Episodic memory modules, for example, could enable more fine-grained handling of temporally structured information, while hierarchical attention mechanisms could help prioritize the most relevant experiences for retrieval. These enhancements could strengthen the adaptability of *CoPS* in scenarios requiring continuous learning or interaction over prolonged periods.\\n\\nAdditionally, incorporating modern RAG techniques such as recursive retrieval or adaptive retrieval mechanisms could further refine *CoPS*'s experience selection strategy. Recursive retrieval could improve the precision of memory queries by iteratively refining retrieval contexts, while adaptive mechanisms could dynamically adjust retrieval strategies based on the task or environment, enhancing both efficiency and effectiveness.\\n\\n**We acknowledge the value of these advanced memory management techniques and believe that their integration with CoPS has the potential to unlock new capabilities. We plan to explore these possibilities in future work and appreciate your suggestions, which provide a valuable perspective for extending the scope and impact of our research.**\\n\\n---\"}", "{\"title\": \"Reply to Reviewer 4n3a (Comment 3 ~ Question 2)\", \"comment\": \"## Comment 3\\n\\n> *The main contribution of the paper is proposing a method that benefit from cross-task experience. However, authors fail to illustrate this in given experimental results. What is the contribution of cross-task experiences (compared to same task experience) in the provided performance improvement?*\\n\\n## Response\\n\\nThank you for highlighting this important aspect of our work. 
While leveraging single-task experience might seem ideal, practical scenarios often necessitate relying on experiences from relevant but distinct tasks, which introduces additional challenges. To address your concern, we conducted an ablation study comparing the performance of our method with and without cross-task experience on the Alfworld benchmark. The results are summarized below:\\n\\n| **Method** | **Success Rate (%)** |\\n| ------------------------------ | -------------------- |\\n| **With cross-task experience** | 94 |\\n| **Only same-task experience** | 57 |\\n\\nThese results clearly demonstrate the significant contribution of cross-task experience to performance improvement, with a nearly twofold increase in success rate compared to using only same-task experience. We hope this analysis effectively addresses your concern and highlights the value of our proposed approach. Please let us know if there are additional aspects you would like us to explore.\\n\\n**We've already added this discussion in our revised paper's Appendix G \\\"IMPACT OF CROSS-TASK EXPERIENCES,\\\" highlighted in green.**\\n\\n## Question 1\\n\\n> *How is the reward $r$ calculated in this work?*\\n\\n## Response\\n\\nThank you for your question. In our work, the reward $r$ is defined as a binary indicator (0 or 1), representing whether the agent successfully completes the current task. For further details, we attach the relevant definition from our revised paper below:\\n\\n> In all benchmarks, the reward function $r(\\\\tau)$ is defined as $1$ if the agent successfully completes the task and $0$ otherwise.\\n\\n------\\n\\n## Question 2\\n\\n> *How is the distance between experiences calculated?*\\n\\n## Response\\n\\nWe appreciate your question regarding the calculation of distance between experiences. The distance is computed based on cosine similarity, a widely used metric to quantify the similarity between two vector representations. 
The detailed calculations are shown in our paper's Equation 2.3, which we've also attached below:\\n\\n$$\\nd(\\\\tau, \\\\tau'):= c\\\\cdot\\\\text{cos}(e(\\\\tau), e(\\\\tau')),\\\\ \\\\hat p(\\\\tau) \\\\propto r(\\\\tau)\\\\cdot\\\\exp(-d(\\\\tau, \\\\tau^{s_1}))\\n$$\"}", "{\"title\": \"Reply to Reviewer QJKw (Comment 1 ~ Comment 3)\", \"comment\": \"# Reply to Reviewer QJKw\\n\\nWe sincerely thank you for your thorough and insightful feedback, which has significantly contributed to improving the quality and clarity of our paper. We have carefully addressed all comments and concerns in our revised submission and provided detailed responses to each point. If there are any remaining questions or further points for clarification, please do not hesitate to raise them. We greatly value the opportunity to engage in further discussions and refine our work based on your suggestions. Thank you once again for your time and effort in reviewing our paper.\\n\\n## Comment 1\\n\\n> *This to me, feels like a simple extension of RAG style methods with the additional reward label. This is interesting, yet on its own is not novel enough.*\\n\\n## Response \\n\\nThanks for your thoughtful feedback and for recognizing the simplicity and practical usability of *CoPS*. Nevertheless, we would like to emphasize that the novelty of a research contribution does not solely lie in complex algorithmic designs or intricate analyses. For *CoPS*, we believe that our value lies in its ability to provide a straightforward yet highly effective extension/plugin that can be seamlessly integrated into existing LLM agent systems. With a few lines of additional code, our approach delivers tangible performance improvements, making it both accessible and impactful. 
**Thus, we view the out-of-the-box usability of *CoPS* as a strength that aligns with our goal of fostering broad adoption and practical utility, rather than as a limitation.**\\n\\n## Comment 2\\n\\n> *Especially because none of the necessary offline RL analysis is conducted on this optimization.*\\n\\n## Response\\n\\nWe sincerely appreciate your comment and would like to clarify this point respectfully. The regularization term \\\\( d(\\\\tau, \\\\tau') \\\\) introduced in Eq. 2.1 indeed serves a role analogous to a pessimistic term, which has been extensively studied and validated in the offline RL literature. We believe this connection provides a solid theoretical foundation for our optimization framework, aligning it with established principles in the field. We hope this addresses your concern, and we welcome further suggestions to strengthen the discussion on this aspect.\\n\\n## Comment 3\\n\\n> *\\\"Stitched behaviors\\\"? Can the agent outperform the demonstrations provided in the dataset?*\\n\\n## Response\\n\\nThank you for your insightful question. We understand \\\"stitching behaviors\\\" to refer to whether the agent can plan a trajectory surpassing the provided dataset's trajectories. To clarify, in our experiments, the agent operates online, starting with limited experience and progressively updating its knowledge during episodes. While the term \\\"stitching\\\" is often associated with purely offline settings, our framework is not constrained to this context.\\n\\nFrom a theoretical perspective, our approach does not require the agent to have access to a pre-existing successful trajectory for specific tasks. Instead, the core idea of our framework lies in experience sharing, enabling the agent to leverage knowledge from other tasks to guide its exploration and discovery of successful experiences for the current task. 
We hope this explanation addresses your concern, and we are happy to elaborate further if needed.\"}", "{\"title\": \"Follow-up on Submission #12611 Rebuttal Responses\", \"comment\": \"Dear Reviewer QJKw,\\n\\nWe hope this message finds you well. Thank you again for taking the time to provide thoughtful and detailed feedback on our submission. Your insights have been instrumental in helping us refine and strengthen our work. \\n\\nAs the rebuttal phase is nearing its conclusion, we wanted to follow up on your review. We have addressed all your comments thoroughly in our responses, including conducting additional analyses, providing ablation studies, and incorporating new visualizations to support our claims. These updates aim to clarify our contributions, improve presentation, and resolve the concerns you raised. \\n\\n**If our responses and revisions have addressed your questions and concerns, we kindly ask you to consider revisiting your evaluation and score. If there are any remaining doubts or aspects requiring further clarification, we are more than happy to provide additional information or engage in further discussions. **\\n\\nThank you once again for your time, effort, and valuable input. We greatly appreciate your contributions to improving our work and look forward to any further feedback you may have. \\n\\nBest regards, \\nAuthors of Submission #12611\"}", "{\"title\": \"After the Author Response\", \"comment\": \"Thank you for providing thoughtful responses to my comments. I could understand more the differences between CoPS and RAP. However, it seems that the description of cross-task experience sharing is still unclear. Therefore, I currently maintain my initial score.\"}", "{\"title\": \"Follow-up on Submission #12611 Rebuttal Responses\", \"comment\": \"Dear Reviewer QJKw,\\n\\nThank you for your detailed feedback on our submission. 
We have thoroughly addressed your comments, including the novelty of CoPS, its theoretical connections to offline RL, additional analyses on demonstration quality, inclusion of error bars and extended discussions on benchmarks, and clarifications on pretraining and embedding model usage. The revised manuscript now includes new analyses, ablation studies, and visualizations to better support our claims. \\n\\nWe hope our responses have resolved your concerns and clarified our contributions. If there are any remaining questions, we are happy to discuss them further. We kindly request you to revisit your evaluation and consider potential adjustments based on the updates. \\n\\nBest regards, \\nAuthors of Submission #12611\"}", "{\"title\": \"Revision Summary\", \"comment\": \"# Revision Summary\\n\\nWe summarize the key revisions made in our paper to address the reviewers' feedback:\\n\\n---\\n\\n## **Expanded Related Work**\\n\\n- Added comparisons between CoPS and the O3D method, emphasizing differences in knowledge distillation and experience selection strategies, addressing **Reviewer 4n3a's Comment 1**.\\n\\n---\\n\\n## **Appendix Changes**\\n\\n### **Appendix B**: Hyperparameter Tuning\\n\\n- **Purpose**: Addressed hyperparameter impacts on performance, focusing on the effects of \\\\(k\\\\) (sampled experiences) and \\\\(c\\\\) (scaling factor).\\n\\n- **Reviewer Concern**: Tackles questions from **Reviewer ct7E (Q2)** about the number of sampled experiences and their impact on performance.\\n\\n---\\n\\n### **Appendix D**: Quality of Demonstrations\\n\\n- **Purpose**: Included ablation study comparing top-\\\\(k\\\\) and bottom-\\\\(k\\\\) trajectories to highlight the significance of high-quality demonstrations.\\n\\n- **Reviewer Concern**: Added in response to **Reviewer QJKw's Comment 4**, emphasizing how demonstration quality affects performance.\\n\\n---\\n\\n### **Appendix E**: Experimental Robustness\\n\\n- **Purpose**: Reported error bars and standard 
deviations for all benchmarks and models to ensure result reliability.\\n\\n- **Reviewer Concern**: Responds to **Reviewer QJKw's Comment 5** about the lack of error bars and repeated experiments.\\n\\n---\\n\\n### **Appendix F**: Close-Sourced Models\\n\\n- **Purpose**: Evaluated *CoPS* using GPT and Claude models to show its generalizability.\\n\\n- **Reviewer Concern**: Directly addresses **Reviewer 4n3a's Comment 2**, demonstrating performance on state-of-the-art close-sourced models.\\n\\n---\\n\\n### **Appendix G**: Cross-Task Experience Impact\\n\\n- **Purpose**: Compared performance with and without cross-task experiences, showing substantial improvements from sharing experiences across tasks.\\n\\n- **Reviewer Concern**: Added in response to **Reviewer 4n3a's Comment 3** about the contributions of cross-task experiences.\\n\\n---\\n\\n### **Appendix H**: Retrieval Strategies\\n\\n- **Purpose**: Performed ablation on different retrieval methods (semantic, keyword-based, hybrid) to analyze their impact on performance.\\n\\n- **Reviewer Concern**: Responds to **Reviewer ct7E's Comment 4** about the lack of comparisons among retrieval strategies.\\n\\n---\\n\\n### **Appendix I**: Memory Constraints and Forgetting\\n\\n- **Purpose**: Evaluated the performance under varying memory sizes and forgetting mechanisms to test robustness in constrained scenarios.\\n\\n- **Reviewer Concern**: Tackles **Reviewer ct7E's Comment 4** about handling memory updates and forgetting mechanisms.\\n\\n---\\n\\n## **Figure and Presentation Improvements**\\n\\n- Revised **Figure 1** with a detailed explanation of the example task to improve clarity, addressing **Reviewer ct7E's Comment 5**.\\n\\n- Enhanced table formatting and captions for improved readability, including hyperparameter settings, retrieval strategies, and memory size analyses, ensuring results are easily interpretable.\"}", "{\"title\": \"CoPS vs RAG vs Long/Short Memory\", \"comment\": \"Dear Reviewer 
ct7E,\\n\\nThank you for your thoughtful feedback and for revisiting our rebuttal. We deeply appreciate your dedicated time and effort in reviewing our work. Regarding your concerns about the relationship between CoPS, RAG, and long/short memory mechanisms, we\\u2019d like to provide additional clarifications:\\n\\n1. **CoPS and RAG** \\n \\nCoPS can be regarded as a specialized variant of RAG, with its focus on simplifying and optimizing the utilization of retrieved experiences. **Traditional RAG methods often involve intricate post-retrieval processing, such as hybrid or recursive search, to enhance performance.** Some approaches, like our baseline RAP, require manual segmentation of the agent\\u2019s planning trajectory into stages. **Furthermore, many RAG methods rely heavily on external knowledge bases for retrieving relevant information.**\\n\\n**In contrast, CoPS streamlines the process by employing a straightforward retrieval mechanism guided by a pessimism-based selection strategy. Instead of relying on external knowledge bases or complex trajectory segmentation, CoPS directly utilizes its own stored experiences.** This design emphasizes task-specific utility and resource efficiency. As shown in our experiments, CoPS achieves superior performance with minimal complexity, demonstrating its practical effectiveness and simplicity.\\n\\n2. **CoPS and Long/Short Memory Mechanisms** \\n \\nCoPS operates orthogonally to long-term and short-term memory mechanisms. While our current implementation does not explicitly incorporate these advanced memory systems, we firmly believe that CoPS\\u2019s retrieval strategy could seamlessly complement them. 
For instance, integrating CoPS with hierarchical attention mechanisms or episodic memory modules could further enhance its ability to retrieve and utilize task-relevant experiences, especially in long-term or continuous learning scenarios.\\n\\nAs the rebuttal phase draws to a close, we sincerely thank you for your continued engagement and feedback. **If our additional explanations have clarified your concerns regarding the relationship between CoPS, RAG, and memory mechanisms, we kindly request that you consider revisiting your evaluation and score.** Your insights have been instrumental in improving our work, and we deeply value your contributions.\\n\\nThank you again, and we hope you have a wonderful day!\\n\\nBest regards, \\nAuthors of Submission #12611\"}", "{\"title\": \"Thanks so much!\", \"comment\": \"Thank you so much for increasing your score. If you have further questions or concerns, please do not hesitate to share.\"}", "{\"metareview\": \"This paper proposes a RAG-like approach, with multiple tasks. Reviewers generally commented that the work lacked significant novelty compared to the original RAG, while the experiments lacked rigor. There was a healthy discussion which was not sufficient for the reviewers to increase their scores, as concerns remained. The paper therefore fails to meet the bar for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The main points were regarding the novelty vs. RAG and then the lack of rigor in the experiments, with simple tasks and only one seed. The authors claimed the novelty was indeed there, while saying that new benchmarks will be included in the camera ready. This seems insufficient since the paper needs to be evaluated as-is, and has not substantially changed since the initial submission.\"}", "{\"title\": \"Follow-up on Submission #12611 Rebuttal Responses\", \"comment\": \"Dear Reviewer ct7E,\\n\\nWe hope this message finds you well. 
Thank you again for your thoughtful feedback and for the time you dedicated to reviewing our work. We truly appreciate your detailed comments and constructive suggestions, which have been immensely valuable in improving the clarity and quality of our paper.\\n\\nIn response to your concerns, we have provided detailed explanations during the rebuttal phase, including ablation studies, clarifications on task definitions, and expanded discussions on topics such as RAG techniques and memory management systems. We have also worked to improve the presentation, refining Figure 1 and its accompanying explanation to enhance clarity.\\n\\nIf there are any remaining questions or aspects of our work that you feel require further elaboration, we would be more than happy to address them. Your feedback has played an important role in refining our work, and we greatly value the opportunity to engage further.\\n\\nWe hope that our responses and updates have clarified the key points of our submission. If appropriate, we kindly invite you to revisit your initial evaluation. Thank you once again for your insights and thoughtful input.\\n\\nBest regards,\\nAuthors of Submission #12611\"}", "{\"title\": \"Clarifying Novelty, Breadth, and Theoretical Strengths of CoPS\", \"comment\": \"Dear Reviewer QJKw,\\n\\nThank you for your thoughtful feedback on our paper. We appreciate your detailed comments and the opportunity to clarify key points about CoPS\\u2019 contributions. **We respectfully disagree with your concerns regarding novelty, experimental settings, and theoretical analysis, and provide the following responses to address these issues.**\\n\\n## **Novelty**\\n\\nThe core innovation of CoPS lies in its **pessimism-based cross-task experience selection strategy**. Unlike RAG methods that focus on augmenting responses with external data, CoPS directly optimizes **task-specific sequential reasoning** by leveraging agent-derived cross-task experiences to mitigate distribution shifts. 
This approach is theoretically grounded and tailored for **LLM agent systems**\\u2014a unique setting compared to traditional RAG or memory mechanisms.\\n\\nCoPS is intentionally designed to be **simple yet powerful**, enabling seamless integration into any agent framework with minimal overhead. Its ability to improve performance across tasks without the need for external databases or complex hybrid memory systems represents a **novel contribution** that bridges theoretical rigor and practical applicability.\\n\\n## **Experimental Setting**\\n\\nYour comment about restricted experimental settings does not align with the breadth of our evaluations. CoPS was extensively tested on **diverse benchmarks** (AlfWorld, WebShop, HotPotQA), which are widely regarded as **standard and challenging** for evaluating LLM agent performance. These benchmarks span embodied reasoning, interactive environments, and multi-step QA, covering a broad range of agent capabilities.\\n\\nFurthermore, we performed **extensive ablation studies**, including comparisons of retrieval strategies (semantic, keyword-based, hybrid), memory constraints, and the utility of cross-task experiences. These analyses validate CoPS\\u2019 **robustness, scalability, and efficiency** in realistic scenarios. While additional benchmarks can always enhance evaluations, our experimental results already demonstrate the generalizability and effectiveness of CoPS.\\n\\n## **Theoretical Analysis**\\n\\nCoPS is supported by a **rigorous theoretical framework** that bridges online and offline task settings. The **pessimism-based selection strategy** is grounded in principles of distributional robustness, ensuring that experience retrieval is both **utility-maximizing** and **risk-averse**.\\n\\nOur theoretical contributions extend beyond retrieval methods by analyzing the trade-offs between utility and safety in cross-task experience sharing. 
These analyses not only validate the robustness of CoPS but also highlight its theoretical novelty compared to existing approaches.\\n\\n----------------\\n\\nWe respectfully assert that CoPS makes **novel, significant contributions** by introducing a simple yet effective framework that combines theoretical rigor, broad applicability, and practical efficiency. Our evaluations are comprehensive, and our theoretical insights address critical gaps in existing paradigms. CoPS is a valuable addition to the field, advancing LLM agent systems in resource-constrained and real-world applications.\\n\\nWe hope these clarifications address your concerns and illustrate the depth and breadth of our work. If further elaboration is required, we would be delighted to provide additional insights.\\n\\nThank you again for your thoughtful review.\\n\\nBest regards, \\nAuthors of Submission #12611\"}", "{\"title\": \"Follow-up on Rebuttal Responses for Submission #12611\", \"comment\": \"Dear Reviewer CYSd,\\n\\nWe hope this message finds you well. Thank you again for your thoughtful feedback and for the time and effort you\\u2019ve dedicated to reviewing our submission. Your detailed comments, particularly on cross-task experience sharing, have been invaluable in refining our work and improving its clarity. \\n\\nTo address your concerns, we have provided expanded explanations and examples, such as the detailed AlfWorld scenario, to illustrate how cross-task experiences are constructed, retrieved, and applied in CoPS. We hope this additional context has clarified the methodology and its implementation. \\n\\n**As the rebuttal phase is nearing its conclusion, we kindly request you to consider revisiting your evaluation and score if our responses have addressed your concerns. If there are still aspects that require further clarification, we would be happy to discuss them further and provide any additional details needed. 
**\\n\\nThank you once again for your constructive input and your efforts in reviewing our work. We greatly appreciate your feedback and hope you have a wonderful holiday season! \\n\\nBest regards, \\nAuthors of Submission #12611\"}", "{\"title\": \"Explanation of Cross-Task Experience Sharing\", \"comment\": \"Thank you for your feedback and for appreciating the discussion between CoPS and RAP. To address your request for a clearer explanation of cross-task experience sharing, we provide the following detailed example using Alfworld.\\n\\nAlfworld consists of 134 unique tasks, each defined by a task description and an environment description. For example, as shown on the [Alfworld website](https://alfworld.github.io/) and in our Figure 1, one task might be: \\\"Your task is to: put some vase in safe.\\\" The corresponding environment is: \\\"You are in the middle of a room. Looking quickly around you, you see a drawer 2, a shelf 5, a drawer 1, a shelf 4, a sidetable 1, a drawer 5, a shelf 6, a shelf 1, a shelf 9, a cabinet 2, a sofa 1, a cabinet 1, a shelf 3, a cabinet 3, a drawer 3, a shelf 11, a shelf 2, a shelf 10, a dresser 1, a shelf 12, a garbagecan 1, an armchair 1, a cabinet 4, a shelf 7, a shelf 8, a safe 1, and a drawer 4.\\\"\\n\\nIn our framework, agents run for up to 10 trials per task, meaning each task has a maximum of 10 trajectories recorded in the memory bank. This results in at most $(134 \\\\times 10)$ trajectories across all tasks. However, once a task is successfully completed in an early trial, no further trials are conducted for that task. Each trajectory (whether successful or failed) is stored in the memory bank as an experience, representing the agent's attempts to solve a task.\\n\\nFor the first trial of each task, there are no pre-existing experiences in the memory bank, meaning cross-task experience sharing is not yet applicable. 
However, starting from the second trial, for tasks that were not successfully completed in their first trial, CoPS retrieves relevant experiences from the memory bank based on our pessimism-based experience selection strategy. These experiences are chosen by measuring the similarity between the current target task and past trajectories (both successful and failed) across all tasks. The retrieved experiences are then directly incorporated as in-context examples for the target task (which failed in the previous trial), without requiring any modifications.\\n\\nThus, the essence of cross-task experience sharing in CoPS lies in leveraging past trajectories from other tasks to inform and guide the agent's decision-making in subsequent trials. By retrieving and reusing experiences across tasks, CoPS effectively enables agents to learn from a shared pool of knowledge, improving their overall performance and adaptability.\\n\\nThank you for pointing this out and giving us the opportunity to explain these concepts in detail. If you have any additional questions or need further clarification, please let us know. If our explanation and rebuttal have addressed all your concerns, we would greatly appreciate it if you would consider raising your score.\"}", "{\"title\": \"Follow-up on Rebuttal Responses for Submission #12611\", \"comment\": \"Dear Reviewer CYSd,\\n\\nWe hope this email finds you well. Thank you again for taking the time to provide thoughtful and constructive feedback on our submission. Your insights have been instrumental in refining our work and making meaningful improvements.\\n\\nAs the rebuttal phase is coming to a close, we wanted to kindly follow up to ensure that our responses have sufficiently addressed your concerns. 
If there are any remaining points that need further clarification, please let us know.\\n\\nWe are immensely grateful for your efforts and hope you have a wonderful holiday and weekend.\\n\\nBest regards,\\nAuthors of Submission #12611\"}", "{\"title\": \"Reply to Reviewer ct7E (Comment 1)\", \"comment\": \"# Reply to Reviewer ct7E\\n\\nWe sincerely thank you for your thorough and insightful feedback, which has significantly contributed to improving the quality and clarity of our paper. We have carefully addressed all comments and concerns in our revised submission and provided detailed responses to each point. If there are any remaining questions or further points for clarification, please do not hesitate to raise them. We greatly value the opportunity to engage in further discussions and refine our work based on your suggestions. Thank you once again for your time and effort in reviewing our paper.\\n\\n## Comment 1\\n\\n> *The idea seems similar to the Retrieval-Augmented Generation (RAG) technique. The paper lacks a comparison and discussion with traditional RAG approaches that also leverage external knowledge for LLM enhancement. The pessimism-based selection strategy could be better positioned against existing RAG retrieval methods like hybrid search and recursive retrieval[1].*\\n\\n## Response\\n\\nThank you for this thoughtful suggestion. We acknowledge that our method *CoPS* is conceptually related to Retrieval-Augmented Generation (RAG) and can be seen as a specialized adaptation of this paradigm. 
While traditional RAG techniques focus on retrieving information from external knowledge to enhance general-purpose language generation, *CoPS* uniquely retrieves an agent's own experiences, specifically tailored for decision-making and adaptation in LLM agent settings without expensive manual experiences/knowledge collections.\\n\\nThis distinction allows our method to better address challenges inherent to LLM agents, such as leveraging past trajectories for task-specific improvements. Moreover, the incorporation of a pessimism-based selection strategy provides a principled approach to retrieval that ensures robust performance under uncertainty, setting it apart from traditional RAG techniques like hybrid search or recursive retrieval. As shown in our experimental results, this targeted adaptation results in superior sample efficiency and task performance, which are critical in real-world agent scenarios.\\n\\n**We've discussed these differences in detail in the Related Works and Theoretical Analysis sections of our paper and appreciate the opportunity to further clarify the unique contributions of our method.**\\n\\n---\"}", "{\"title\": \"Reply to Reviewer CYSd (Comment 2 ~ Question 2)\", \"comment\": \"## Comment 2\\n\\n> *To enable cross-task experience sharing, this paper proposes to find a probability distribution (in Equation 2.2) that can maximize the expected reward while keeping the distribution close to a task-dependent distribution of a LLM. However, this paper approximates the probability distribution by using cosine similarity between experiences. This approximation seems to make CoPS too similar to RAP.*\\n\\n## Response\\n\\nThank you for this thoughtful observation. Our primary objective is indeed to determine a probability distribution over experiences, which forms the basis of our **fundamentally stochastic approach**. 
This is a key distinction from RAP, which employs a **deterministic methodology**.\\n\\nWhile cosine similarity is used as part of the approximation, it serves a different purpose within our stochastic framework, enabling the selection of experiences in a probabilistic manner. This allows our method to incorporate uncertainty and diversity in experience selection, which deterministic methods like RAP cannot achieve.\\n\\nMoreover, our experiments demonstrate that this stochastic approach leads to significant performance improvements compared to deterministic methods, as demonstrated in our Figure 3a and 3b. These results underscore both the novelty and the effectiveness of our method, highlighting its potential for broader applicability in cross-task experience sharing. We hope this clarification addresses your concern and further emphasizes the contributions of our work.\\n\\n---\\n\\n## Comment 3\\n\\n> *I am not sure that it is a fair comparison to constrain LATS to have similar running time with CoPS. LATS aims to improve the performance by using inference-time compute.*\\n\\n## Response\\n\\nThank you for raising this concern. We believe it is crucial to evaluate performance under a constrained computational budget, as such scenarios better reflect real-world applications where resources like time and computing are often limited.\\n\\nWhile LATS aims to improve performance through inference-time computation, it requires a significantly larger allocation of LLM sampling resources to achieve competitive final metrics. As detailed in Table 4 of our paper (we also attach it below), **LATS consumes approximately five times the sampling cost of our method under the same time constraints.** This comparison highlights the inefficiency of LATS in resource utilization and underscores the advantage of our approach, which achieves superior sample efficiency while maintaining strong performance. 
We hope this addresses your concern and provides clarity on the rationale for our evaluation methodology.\\n\\n| **Algorithm** | **Reflexion** | **RAP** | **LATS** | **CoPS** |\\n|-----------------------|---------------|-----------|-------------|-------------|\\n| **LLaMA 3.1 8B** | 159131 | 107504 | 1555365 | 314336 |\\n| **LLaMA 3.1 70B** | 125406 | 109245 | 1058752 | 113849 |\\n\\n## Question 1\\n\\n> *Please see the questions in the first weakness above.*\\n\\n## Response\\n\\nPlease refer to our response for Comment W1.\\n\\n---\\n\\n## Question 2\\n\\n> *Regarding the second weakness, what is the main difference between CoPS and RAP? And, what makes CoPS perform better than RAP?*\\n\\n## Response\\n\\nPlease refer to our response for Comment W2.\"}", "{\"title\": \"Follow-up on Submission #12611 Rebuttal Responses\", \"comment\": \"Dear Reviewer ct7E,\\n\\nWe hope this email finds you well. Thank you again for taking the time to provide thoughtful and constructive feedback on our submission. Your insights have been instrumental in refining our work and making meaningful improvements.\\n\\nAs the rebuttal phase is coming to a close, we wanted to kindly follow up to ensure that our responses have sufficiently addressed your concerns. If there are any remaining points that need further clarification, please let us know.\\n\\nWe are immensely grateful for your efforts and hope you have a wonderful holiday and weekend.\\n\\nBest regards,\\nAuthors of Submission #12611\"}", "{\"summary\": \"This paper proposes CoPS (Cross-Task Experience Sharing), a method that can improve LLM agents by sharing distribution-matched experiences stored in a memory bank. CoPS first generates a trial experience from a LLM by conditioning on an initial state. Next, it sets a probability distribution that can approximately maximize the expected total reward while keeping the distribution close to a task-dependent distribution of the LLM. 
Then, it repeatedly samples candidate experiences from the probability distribution, and uses them as few-shot examples to sample an action from the LLM. This paper evaluates CoPS on three representative benchmarks: ALFWorld, WebShop, and HotPotQA. The experiment results show that CoPS can achieve higher success rates than recent advancements such as Reflexion, RAP, and LATS on the benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1. It is interesting to propose an idea that selects distribution-matched experiences from a memory bank for improving the performance of LLM agents.\\n\\nS2. This paper demonstrates that CoPS can achieve higher success rates than recent advancements such as Reflexion, RAP, and LATS on the representative benchmarks such as ALFWorld, WebShop, and HotPotQA.\", \"weaknesses\": \"W1. This paper aims to share cross-task experiences from a memory bank. However, it is rather unclear how to construct cross-task experiences in the memory bank. More specifically, how is offline data collected? Which LLMs are used as a policy to generate offline data? How many experiences are required to achieve the performance provided in the paper? How many different tasks can be used together to take advantage of cross-task experience sharing?\\n\\nW2. To enable cross-task experience sharing, this paper proposes to find a probability distribution (in Equation 2.2) that can maximize the expected reward while keeping the distribution close to a task-dependent distribution of an LLM. However, this paper approximates the probability distribution by using cosine similarity between experiences. This approximation seems to make CoPS too similar to RAP.\\n\\nW3. I am not sure that it is a fair comparison to constrain LATS to have similar running time with CoPS. LATS aims to improve the performance by using inference-time compute.\", \"questions\": \"Q1. 
Please see the questions in the first weakness above.\\n\\nQ2. Regarding the second weakness, what is the main difference between CoPS and RAP? And what makes CoPS perform better than RAP?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks to the authors for their response and for preparing the additional ablation studies. Regarding my comments 1 and 2, I still think that this work does not differ from the existing memory mechanism or RAG in principle, so I tend to keep my current rating.\"}", "{\"title\": \"Follow-up on Rebuttal Responses for Submission #12611\", \"comment\": \"Dear Reviewer CYSd,\\n\\nWe hope this email finds you well. Thank you again for your thoughtful feedback on our submission. We deeply appreciate the time and effort you\\u2019ve put into reviewing our work.\\n\\nWe have carefully addressed all the comments and concerns you raised in your review. If our responses have clarified your concerns and resolved the issues you identified, would you please consider reflecting on your overall evaluation and score? We are happy to provide further clarification or discuss any remaining questions if needed.\\n\\nThank you once again for your time and thoughtful input. Your feedback has been invaluable in improving our work.\\n\\nBest regards,\\nAuthors of Submission #12611\"}" ] }
9VRFPC29nb
Simplified Mamba with Disentangled Dependency Encoding for Long-Term Time Series Forecasting
[ "Zixuan Weng", "Jindong Han", "Wenzhao Jiang", "Hao Liu" ]
Recent advances in deep learning have led to the development of numerous models for Long-term Time Series Forecasting (LTSF). However, most approaches still struggle to comprehensively capture reliable and informative dependencies inherent in time series data. In this paper, we identify and formally define three critical dependencies essential for improving forecasting accuracy: the order dependency and semantic dependency in the time dimension as well as cross-variate dependency in the variate dimension. Despite their significance, these dependencies are rarely considered holistically in existing models. Moreover, improper handling of these dependencies can introduce harmful noise that significantly impairs forecasting performance. To address these challenges, we explore the potential of Mamba for LTSF, highlighting its three key advantages to capture three dependencies, respectively. We further empirically observe that nonlinear activation functions used in vanilla Mamba are redundant for semantically sparse time series data. Therefore, we propose SAMBA, a Simplified Mamba with disentangled dependency encoding. Specifically, we first eliminate the nonlinearity of vanilla Mamba to make it more suitable for LTSF. Along this line, we propose a disentangled dependency encoding strategy to endow Mamba with efficient cross-variate dependency modeling capability while minimizing the interference between time and variate dimensions. We also provide rigorous theory as a justification for our design. Extensive experiments on nine real-world datasets demonstrate the effectiveness of SAMBA over state-of-the-art forecasting models.
[ "long-term time series forecasting", "time series modeling", "mamba" ]
Reject
https://openreview.net/pdf?id=9VRFPC29nb
https://openreview.net/forum?id=9VRFPC29nb
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zNk4BOHWRw", "wF7KBALCYK", "vrzZWrXaC7", "vNBa2a2QYw", "vFFwRFZaUg", "tqGvZSyFSA", "t5V2oqH6IW", "sHz9Zw6Cmd", "oeBcC5eYQA", "nF1cN5dDvR", "mjieQIqU6j", "l4Mu7g3Xay", "hcgrMA41X4", "gvDOArXgFq", "g9GTK3cjxm", "cNMBtRbtvW", "bcp4rfyapI", "aY7BVu3Ast", "aJSD4A2QZn", "Y2YyjccVlu", "QuEisYRg6F", "QQ45uKEIAQ", "OLLg7wO6A6", "NhbrLvaIv8", "LjrCq8TQ3e", "LRvA3aZNmu", "I2dSmfJVSb", "FCNcBCtBIy", "BvgWM7RwWC", "BG02KMgXJD", "B9DANEo6VR", "6iSAaFIakY", "3Ts6cUBJTM", "3Rv0HjVTOW", "2NENQs7znX", "2K1lnEn4Fr", "1orLRknDbs" ], "note_type": [ "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732199168233, 1737523414213, 1734591882678, 1732198805597, 1732666064483, 1732496295193, 1730602432088, 1732761138716, 1732496419474, 1730704912565, 1732560385634, 1732496257739, 1732742913614, 1732198656919, 1732198180285, 1732199479252, 1732742563201, 1732743510760, 1732199391509, 1732537609578, 1729117233488, 1732199038338, 1730289903936, 1732198471353, 1732198732235, 1732496378600, 1732199317509, 1732198018162, 1732199440744, 1732198968915, 1732512453671, 1732198090729, 1732198543867, 1732742224364, 1732197822728, 1732199256552, 1732744583320 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission767/Area_Chair_x68S" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_WkCP" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_WkCP" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_qdrT" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_biFR" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_y2RJ" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_biFR" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_y2RJ" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Reviewer_qdrT" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ], [ "ICLR.cc/2025/Conference/Submission767/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer y2RJ [Part 3]\", 
\"comment\": \"[W3] 3.The paper has some weaknesses in the experiments, which are not convincing enough.\\n\\n>[W3-1] The authors claim that they implement the baseline results by using the TimesNet Github repository in Section B.1. However, they also claim that the full results of predictions come from iTransformer in Section B.2, which is confusing. It would be helpful to have an explanation for these differences.\\n\\n**[RW3-1]** We apologize for any misunderstanding caused by the phrasing in our paper, and we have revised the ambiguous expressions. However, the results remain accurate. This is because iTransformer is implemented based on the TimesNet GitHub repository. Upon inspection, the TimesNet GitHub repository links to the Time-Series-Library, and iTransformer is implemented using the Time-Series-Library. **Therefore, the results for iTransformer are based on its implementation within the Time-Series-Library.** And we have revised them in the $\\\\underline{\\\\text{Appendix B.2 of revised paper}}$.\\n\\n>[W3-2-1] Some tables lack sufficient explanation, making them difficult to understand. For example, in Table 1, what do the bold results mean?\\n\\n**[RW3-2-1]** Thank you for pointing this out. Initially, the bold text was used solely for emphasis and clarity. We have addressed these issues in our revised paper to avoid any misunderstanding.\\n\\n>[W3-2-2] Does the transformer refer to the ordinary transformer framework or a state-of-the-art (SOTA) transformer-based framework (e.g., iTransformer)?\\n\\n**[RW3-2-2]** Regarding the Transformer, we have already clarified in $\\\\underline{\\\\text{Appendix B.4 of original paper}}$ that it refers to the original Transformer.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The reviewers raised multiple concerns, including the modeling, experiments and novelty. 
Although the author feedback has largely improved the manuscript, some concerns have not been resolved yet, which suggests a reject. The paper does not manage to clearly differentiate itself from existing research in the field and fails to fully address the concerns raised by multiple reviewers regarding fundamental aspects like the choice of Mamba and the true novelty of the proposed techniques. The authors are encouraged to incorporate all the revisions and present a stronger future version of this paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviews diverged, and some reviewers pointed out that their concerns were not yet resolved. The positive reviewers do not argue for acceptance.\"}", "{\"title\": \"Response to Reviewer WkCP [Part 5]\", \"comment\": \">[Q2-3] Table 3 should be a foundational point of the paper, but it requires validation across more datasets.\\n\\n**[RQ2-3]** Following your suggestion, we expanded the scope of our experiments to include the Exchange and Traffic datasets. **The table below demonstrates that removing non-linear activation functions benefits both Mamba and Transformer models** (more results are included in the $\\\\underline{\\\\text{Appendix F.3 of revised paper}}$). However, the limited improvements on the Traffic dataset, along with the performance decline of MLP, are attributed to the stronger semantic complexity and the involvement of more non-linear relationships in the Traffic dataset. The simplified architecture of MLP, without non-linear activation functions, is unable to effectively handle such complexity. In contrast, the performance improvements observed for Transformer and Mamba suggest that their inherently complex architectures are already capable of handling these relationships, making non-linear activation functions redundant. Training curves in $\\\\underline{\\\\text{Appendix F.3 of revised paper}}$ further support this conclusion. 
**The ablation experiments in Section 6.2 further support the effectiveness of removing the nonlinear activation function in Mamba.**\\n| Datsets | Model | MLP | | Mamba | | Transformer | |\\n|--------------|---------------|--------|--------|--------|--------|-------------|--------|\\n| | | MSE | MAE | MSE | MAE | MSE | MAE |\\n| **Exchange** | Original | 0.398 | 0.419 | 2.255 | 1.189 | 1.994 | 1.117 |\\n| | Original-n | 0.374 | 0.407 | 2.122 | 1.143 | 1.194 | 0.895 |\\n| | Improvement | 6.03% | 2.86% | 5.90% | 3.79% | 40.13% | 19.87% |\\n| **Traffic** | Original | 0.554 | 0.366 | 0.669 | 0.385 | 0.833 | 0.480 |\\n| | Original-n | 0.621 | 0.400 | 0.658 | 0.381 | 0.829 | 0.479 |\\n| | Improvement | -12.27%| -9.28% | 1.57% | 0.98% | 0.42% | 0.16% |\\n\\n>[Q2-4] Figure 1 shows that removing activation functions increases the model\\u2019s regularization. The authors could consider testing this on more complex datasets, such as Traffic (or maybe PEMS?), to confirm that stronger regularization is indeed beneficial.\\n\\n**[RQ2-4]** Our Training curves in $\\\\underline{\\\\text{Appendix F.3 of revised paper}}$ confirm that removing non-linear activation functions is beneficial, further validating the proposed approach.\\n\\nOverall, we sincerely thank the reviewer for the valuable suggestions mentioned above. Your feedback has significantly contributed to improving the quality of our manuscript. We hope that we have addressed your concerns and questions. 
We look forward to further discussions with you and hearing your new evaluations.\\n\\n- [1] Are Language Models Actually Useful for Time Series Forecasting?, NeurIPS 2024\\n- [2] Are Transformers Effective for Time Series Forecasting?, AAAI 2023\\n- [3] A Time Series is Worth 64 Words: Long-term Forecasting with Transformers, ICLR 2023\\n- [4] Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach, ICML 2024\"}", "{\"comment\": \"Thanks to the authors for their detailed responses. The authors have conducted sufficient experiments and further analyses. From the beginning, I acknowledged the amount of work put into the paper. Even though we might have differing opinions on the practical value of Mamba in time series applications, I believe that the wealth of information and insights provided by these extensive experiments can be further assessed by our community. Therefore, I have decided to raise the score to 6.\"}", "{\"title\": \"[**Gentle Reminder**]: Kind Request for Reviewers' Feedback\", \"comment\": \"Dear Reviewer WkCP,\\n\\nThank you once again for your valuable and constructive review, which has helped us refine our contribution and clarify its strengths.\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. After this deadline, we may not have the opportunity to respond to your comments.\\n\\nAdditionally, during the rebuttal period, we have supplemented our study with ***over 250 experimental results*** to enhance the comprehensiveness of our experiments and the reliability of our conclusions. These results address your concerns regarding the experiments and are all included in $\\\\underline{\\\\text{the revised paper, highlighted in blue}}$.\\n\\nWe sincerely appreciate your dedication and look forward to your feedback.\\n\\nSincerely,\\nICLR 2025 Conference Submission 767 Authors\"}", "{\"summary\": \"This paper:\\n\\n1. 
Identifies three types of dependencies in multivariate time series data. \\n2. Simplifies the Mamba activation functions to eliminate non-linearity. \\n3. Proposes a dependency encoding strategy to disentangle these dependencies, minimizing interference between the time and variate dimensions.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. The authors provide a clear and comprehensive description of the key dependencies in LTSF modeling, including order dependency, semantic dependency, and cross-variate dependency. The model architecture is well-designed to capture these dependencies.\\n\\n2. The architecture introduced in this paper to capture relationships across both temporal and variate dimensions enhances Mamba's ability to effectively model cross-variate relationships.\\n\\n3. The experiments conducted demonstrate that the proposed architecture outperforms previous methods and can be effectively transferred to other models.\", \"weaknesses\": \"1. The paper contains several spelling and formatting errors. The authors should carefully check them before submission. Examples include:\\n a) Inconsistent abbreviations, with \\u201cLTSF\\u201d sometimes written as \\u201cLTST\\u201d; \\n b) Formatting issues in Theorem 1; \\n c) Various spelling errors.\\n\\n2. The title and model name \\\"Simplified Mamba, SAMBA\\\" emphasize a simplification of Mamba, specifically through the removal of non-linearity. However, according to the paper, the performance improvements from this regularization method are not particularly significant compared to other strategies, such as patch tokenization. Additionally, the authors mention that the elimination is not fully implemented, as non-linear activation function remains in the gating mechanism. 
Furthermore, the overall SAMBA architecture actually employs Mamba along the temporal dimension and bidirectional Mamba along the variate dimension, making it more complex rather than simplified, compared to the original Mamba for sequence modeling. Thus, the authors might consider rebranding their methods to avoid misleading readers.\", \"questions\": \"1. About order dependency:\\n\\nThe statement and evidence offered to prove that Transformer-based models are unsuitable for modeling order information seem insufficient. \\n\\nFirst, improving a model's perception of order information can depend on two factors: the tokenization and the training method. In tokenization, if different lag timesteps are treated as distinct features inside a token (e.g., series embedding as in Linear models and iTransformer, or patch embedding as in PatchTST), shuffling values across timesteps will clearly impact performance, as the linear projection layer automatically considers them as separate input features. However, if timestep values are projected into the same latent space during tokenization, the model must rely on additional structures (such as attention + positional encoding) to learn this order information. Can this order be learned with pure Transformers? The success of LLMs proves that models can learn precise positional information. LLMs accurately identify and utilize context positions for next-token prediction without making sequencing errors, thanks to the learning of mechanisms like induction heads, which has been widely validated. The autoregressive training method compels models to learn these temporal ordering algorithms.\\n\\nThe choice of tokenization and training method is largely independent of the model architecture. Both Transformers and Mamba can use these methods to improve the capture of positional information in LTSF tasks. Experimental results in the paper also show that patch tokenization enhances performance across different architectures. 
It separates timesteps within a local period into distinct features inside a token, rather than treating all timesteps as equivalent tokens in the attention mechanism.\\n\\nConstructing algorithms between tokens in attention (or SSM in Mamba) involves learning the sequence's semantic information. The richer the semantic information, the more we need to model algorithms at a finer granularity (shorter patch sizes). The paper suggests that patching enhances semantic dependency learning, but this seems contradictory. Patching likely reduces the need for complex semantic relationship modeling, functioning as a form of attention regularization that improves performance on datasets with simpler semantics, contrary to Assumption 2. Correspondingly, Linear models treat the entire series as embeddings and all timesteps as lag features, omitting the need to construct semantic relationships between timesteps, which aligns with the statement in the paper. However, the analysis of patching seems inconsistent. The authors could experiment on datasets with richer semantic information (e.g., ODE-based datasets with underlying dynamics) to see if pure timestep embedding outperforms patch embedding.\\n\\nIn summary, does Mamba have an advantage over Transformer-based models in modeling order information as claimed? It seems so, but this advantage may come from patching tokenization and possibly from Mamba's SSM handling of sequential/causal token information. Previous LTSF encoder-only Transformers may lack the sequential structure. The authors could compare Mamba to Transformers using causal masks/training methods to further establish Mamba's structural superiority in LTSF. Additionally, more discussion is needed to support the definitions and statements of semantic dependency.\\n\\n\\n2. About the dataset selection\\uff1a\\n\\nThe experiments in Tables 1-3 and Figure 1 are conducted solely on the ETTm1 dataset, which seems insufficient to draw general conclusions. 
The nature of the datasets significantly impacts the performance of different models with varying degrees of regularization.\\n\\nThe performance comparison in Table 1 using shuffling on a single dataset appears insufficient (actually, I hold a similar view regarding these experiments conducted in the DLinear paper). Here's a simple counterexample: \\n\\nConsider an invertible MA(1) process, $ X_t = \\\\mu + \\\\varepsilon_t + \\\\theta \\\\varepsilon_{t-1} $, with two sub-optimal predictors: one using the global average of the input and the other using the last value as the prediction. It can be shown that the first predictor is better or equal to the second one under invertible condition. However, if the input is shuffled, the first predictor\\u2019s performance does not deteriorate, while the second, less optimal predictor's MSE decreases when $ \\\\theta > 0 $, remains unchanged at $ \\\\theta = 0 $, and increases when $ \\\\theta < 0 $. This indicates that comparing performance drops due to shuffling may not be sufficient to prove a model\\u2019s sensitivity to order, as better models may also learn predictors insensitive to position.\\n\\nThe conclusions from Table 2 may also be dataset-sensitive. For datasets with underlying dynamics and shifting multivariate effects, exposing more temporal tokens for algorithm construction in attentions could be more advantageous. The authors could extend experiments to datasets like Solar and Exchange to strengthen these claims.\\n\\nTable 3 should be a foundational point of the paper, but it requires validation across more datasets. The choice between model linearity and complexity largely depends on the dataset. When clear non-linear relationships exist, the model may need to build more complex algorithms between temporal tokens, leading to different conclusions.\\n\\nFigure 1 shows that removing activation functions increases the model\\u2019s regularization. 
The authors could consider testing this on more complex datasets, such as Traffic (or maybe PEMS?), to confirm that stronger regularization is indeed beneficial.\\n\\n\\n### Conclusion\\n\\nEven though I have some concerns about the authors\\u2019 claims and experimental methods, the authors have put in a substantial amount of work throughout the paper. Therefore, I look forward to further discussion with the authors on these points before giving a final score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your response [Part 2]\", \"comment\": \">[W3-1] Furthermore, while cross-variate dependency does not inherently involve order dependency, the design of a bidirectional Mamba raises additional concerns. How would the model handle other perturbed orders? Would an exponentially growing number of Mamba branches be required to encode various order permutations?\\n\\n**[W3-1]** We thank the reviewer for pointing out this issue. In fact, it is unnecessary to adopt different Mamba encoding strategies for different orders, as bidirectional Mamba encoding is sufficient to handle various possible variate orders. 
We conducted experiments by reversing and randomly shuffling the variable order, and the **results demonstrate that bidirectional Mamba is robust to variate order.**\\n\\n| Dataset | Predict Length | SAMBA | MSE | MAE | SAMBA_s | MSE | MAE | SAMBA_b | MSE | MAE |\\n|---------------|----------------|--------|-------|-------|---------|-------|-------|---------|-------|-------|\\n| **ETTm1** | 96 | | 0.315 | 0.357 | | 0.316 | 0.357 | | 0.314 | 0.356 |\\n| | 192 | | 0.360 | 0.383 | | 0.362 | 0.383 | | 0.361 | 0.383 |\\n| | 336 | | 0.389 | 0.405 | | 0.387 | 0.405 | | 0.389 | 0.406 |\\n| | 720 | | 0.448 | 0.440 | | 0.448 | 0.441 | | 0.445 | 0.439 |\\n| **Traffic** | 96 | | 0.388 | 0.261 | | 0.389 | 0.262 | | 0.388 | 0.262 |\\n| | 192 | | 0.411 | 0.271 | | 0.410 | 0.271 | | 0.409 | 0.270 |\\n| | 336 | | 0.428 | 0.278 | | 0.431 | 0.278 | | 0.427 | 0.278 |\\n| | 720 | | 0.461 | 0.297 | | 0.461 | 0.299 | | 0.462 | 0.297 |\\n\\n*Note: SAMBA_s is trained and tested on the dataset with shuffled variate order, while SAMBA_b is trained and tested on the dataset with reversed variate order.*\\n\\n>[W3-2] I fail to see the necessity or rationale for using Mamba to model variate dependency effectively. 
In contrast, self-attention could be a perfect fit for modeling cross-variate dependency.\\n\\n**[W3-2]** Regarding whether Mamba or Transformer performs better, **we have already demonstrated in $\\\\underline{\\\\text{Appendix C of original paper}}$ that using Mamba alone to encode cross-variate dependency achieves superior performance and efficiency compared to Transformer.** This conclusion is further supported by the experiments in $\\\\underline{\\\\text{Appendix J of original paper}}$ and other studies on Mamba [2].\\n\\n- [1] TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting, ICLR 2024\\n- [2] Is Mamba Effective for Time Series Forecasting?, arXiv\"}", "{\"title\": \"[**Gentle Reminder**]: Kind Request for Reviewers' Feedback\", \"comment\": \"Dear Reviewer biFR,\\n\\nThank you once again for your valuable and constructive review, which has helped us refine our contribution and clarify its strengths.\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. After this deadline, we may not have the opportunity to respond to your comments.\\n\\nAdditionally, during the rebuttal period, we have supplemented our study with **over 250 experimental results** to enhance the comprehensiveness of our experiments and the reliability of our conclusions. These results address your concerns regarding the experiments and are all included in $\\\\underline{\\\\text{the revised paper, highlighted in blue}}$.\\n\\nWe sincerely appreciate your dedication and look forward to your feedback.\\n\\nSincerely,\\nICLR 2025 Conference Submission 767 Authors\"}", "{\"summary\": \"This paper investigates the adaptation of a recently emerged Mamba architecture to long-term time-series forecasting. It highlights three key aspects: order, semantic, and cross-variate dependencies. 
The derived Samba architecture encompasses two branches for temporal and variable dependency encoding, of which the encoded representations are concatenated to produce forecasts.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"It is interesting to investigate the pros and cons of the Mamba architecture, or more broadly, structured state-space models (SSMs), on long-term time-series forecasting.\", \"weaknesses\": \"Concerns Regarding the Soundness and Contributions of the Paper\\n\\n1. Overlooking the Unique Advantage of Mamba: Modeling Long Sequences\\n\\nMamba, a recent and popular instantiation of state-space models (SSMs), is renowned for its efficiency in processing long sequences. It significantly reduces the computational overheads of self-attention in Transformers while seemingly maintaining long-term dependencies in various real-world datasets. However, this paper primarily compares time-series representation learning among MLP, Transformer, and Mamba, fixing the lookback length at T=96. When modeling relatively shorter sequences, the necessity of introducing SSMs diminishes significantly, which in turn reduces the impact of this research.\\n\\n2. Inappropriate and Biased Expressions/Claims\", \"lines_014_016\": \"The claim, \\\"However, most approaches still struggle to comprehensively capture reliable and informative dependencies inherent in time series data,\\\" lacks rigor. Existing time-series forecasting models have made significant progress in learning effective time-series representations and deliver commendable forecasts in many scenarios. The paper's experiments do not show a substantial difference in model performance to support this claim.\", \"lines_052_054\": \"The assertion that \\\"they struggle with perceiving temporal order due to the permutation-invariant nature of self-attention, even with positional encodings,\\\" is debatable. 
It is not clear how sensitivity to sequence order impacts the capability of learning effective temporal representations. Extensive research on position encoding for Transformers has already led to the success of modern large language models. The experiments in Table 1 only demonstrate that Transformers are less sensitive to order permutation than Linear and Mamba, which is expected. However, it is unclear how this capability affects temporal representation learning. In fact, being less sensitive to order permutation might be advantageous in sequence modeling. Furthermore, the use of Mamba to model cross-variate dependency seems flawed, as time-series variates lack a ground-truth order. Following the author's logic, using an order-sensitive model like Mamba for cross-variate dependency might not be appropriate.\", \"lines_058_061\": \"The statement, \\\"existing approaches that utilize cross-variate dependency (Channel Dependent, CD) frequently underperform compared to methods that treat each variate independently (Channel-Independent, CI),\\\" is not universally applicable. The performance of CD and CI approaches highly depends on specific cases. Studies, such as iTransformer, have demonstrated the benefits of cross-variate modeling.\\n\\n3. Insufficient Experiments Leading to Unreliable Conclusions\\n\\nThe analyses in Section 4, which inform the design of Samba, are primarily based on experiments conducted on the small ETTm1 dataset and compared across basic neural architectures like Linear, vanilla MLP, and Transformer. These results can be interpreted in multiple ways, not solely as the authors suggest. For instance, Table 1 shows Transformers' reduced sensitivity to order perturbation, which may actually be beneficial for encoding effective temporal representations. Table 2 investigates patching effectiveness, originally introduced in PatchTST, yet PatchTST itself is not included in the analysis. 
Additionally, the impact of patch size on results and whether all datasets lead to the same conclusion are unexplored. Other factors, such as TSMixer extending MLP-based architectures on time-series and PatchTST's designs to prevent overfitting, are not considered in Table 3. Consequently, the analyses in Section 4 are unconvincing regarding the necessity of introducing Mamba.\\n\\n4. Seemingly Downgraded Performance of Baselines\\n\\nFor example, in Table 4, PatchTST on ETTh1 produces an MSE of 0.469 and an MAE of 0.454. However, the original paper reported much lower errors (https://arxiv.org/pdf/2211.14730).\\n\\n5. Lack of Unique Contributions\\n\\nIn addition to studying Mamba for time series, this paper appears to offer few unique contributions or novel insights. Concepts such as patching, cross-variate and cross-time dependency, and normalization to prevent overfitting have been extensively explored in prior research. Furthermore, Transformer variants in time series have effectively addressed challenges in these areas. Therefore, without the focus on modeling very long sequences, the necessity of adapting Mamba for time series remains questionable.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors for solving my concerns, and I agree that during backpropagation, the aggregation operation avoids any entanglement. The revised paper also seems better organized. I will keep my rating.\"}", "{\"title\": \"[**Gentle Reminder**]: Kind Request for Reviewers' Feedback\", \"comment\": \"Dear Reviewer qdrT,\\n\\nThank you once again for your valuable and constructive review, which has helped us refine our contribution and clarify its strengths.\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. 
After this deadline, we may not have the opportunity to respond to your comments.\\n\\nAdditionally, during the rebuttal period, we have supplemented our study with ***over 250 experimental results*** to enhance the comprehensiveness of our experiments and the reliability of our conclusions. These results address your concerns regarding the experiments and are all included in $\\\\underline{\\\\text{the revised paper, highlighted in blue}}$.\\n\\nWe sincerely appreciate your dedication and look forward to your feedback.\\n\\nSincerely,\\nICLR 2025 Conference Submission 767 Authors\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"We sincerely appreciate your positive evaluation and recognition of the contributions of this work. This study represents our assessment of Mamba's potential in the LTSF domain, and we are grateful for the reviewers' acknowledgment of our efforts. We would also like to extend our thanks once again for the many constructive questions raised during the rebuttal process, which have significantly contributed to improving the quality of this work. If there are any remaining concerns or aspects where further clarification could enhance your evaluation, we would be more than happy to address them in greater detail.\"}", "{\"title\": \"Response to Reviewer WkCP [Part 3]\", \"comment\": \">[Q1-5] Additionally, more discussion is needed to support the definitions and statements of semantic dependency.\\n\\n**[RQ1-5]** Thank you for the suggestions. We have included additional discussions on the differences between order dependency and semantic dependency in the $\\\\underline{\\\\text{Appendix P of revised paper}}$.\\n\\n>[Q2] About the dataset selection\\n\\n**[RQ2]** Thank you for your suggestions on improving the comprehensiveness of our experiments and for providing clear and intuitive examples to illustrate the limitations. 
Following your recommendations, we have extended the experiments in Section 4 to include the Solar, Exchange, and Traffic datasets. Below, we address your concerns and suggestions point by point:\\n\\n>[Q2-1] The performance comparison in Table 1 using shuffling on a single dataset appears insufficient (actually, I hold a similar view regarding these experiments conducted in the DLinear paper). \\n\\n**[RQ2-1]** We present the results of shuffling on the Exchange dataset. The results indicate that, compared to Linear and Mamba, which show significant performance degradation, the Transformer even exhibits a slight performance improvement. **This suggests that Linear and Mamba are capable of capturing order dependencies, while Transformers struggle in this regard.** The results of other datasets and more detailed discussions can be found in $\\\\underline{\\\\text{Appendix F.1 of revised paper}}$.\\n\\n| Datasets | Prediction Length | Linear Model | | | | Mamba | | | | Transformer | | | |\\n|--------------|--------------------|--------------|--------|-------|---------|-------|--------|--------|----------|-------------|------|------|---------|\\n| | | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE| O.MAE| S.MAE |\\n| **Exchange** | 96 | 0.0832 | 0.210 | 0.201 | 0.332 | 1.260 | 1.401 | 0.915 | 0.943 | 0.730 | 0.738| 0.782| 0.722 |\\n| | 192 | 0.179 | 0.325 | 0.299 | 0.414 | 1.398 | 1.626 | 1.040 | 1.060 | 1.304 | 1.284| 0.913| 0.949 |\\n| | 336 | 0.338 | 0.521 | 0.418 | 0.534 | 1.835 | 1.921 | 1.111 | 1.141 | 1.860 | 1.862| 1.090| 1.085 |\\n| | 720 | 0.903 | 1.167 | 0.714 | 0.822 | 3.940 | 4.023 | 1.687 | 1.697 | 3.860 | 3.865| 1.684| 1.685 |\\n| | **Avg. Drop** | - | 47.89% | - | 28.80% | - | 6.38% | - | 1.85% | - | -0.06%|- |-0.63% |\\n\\n*Note: O.MSE and O.MAE are evaluated in the original test set. 
S.MSE and S.MAE are evaluated in the shuffling test set.*\"}", "{\"title\": \"Response to Reviewer qdrT [Part 4]\", \"comment\": \">[W5] Lack of Unique Contributions\\n\\n**[RW5]** We sincerely hope that the reviewers can reassess the contributions of this work. This paper provides four contributions that promote the development of the LTSF community along the lines of this research. The contributions of our work compared to previous studies are as follows:\\n\\n1. **Identification and formal definition of three critical dependencies in time series data:** \\n We define these dependencies to guide the design of future LTSF models effectively.\\n\\n2. **In-depth analysis of Mamba\\u2019s advantages:** \\n Compared to existing Mamba-based LTSF studies [3][4][9][10], we are the first to analyze the advantages of Mamba relative to Transformer and Linear models. We explain why Mamba is a promising backbone, which facilitates a reassessment of its potential and advantages in LTSF. At the same time, we explore the limitations of Transformer and Linear models in capturing order dependency and semantic dependency.\\n\\n3. **Addressing overfitting due to non-linearities:** \\n We find that directly applying MLP, Transformer, and Mamba to LTSF can lead to overfitting issues caused by non-linearities. Removing non-linear activation functions yields performance improvements.\\n\\n4. **Proposal of a disentangled encoding approach for cross-variable and cross-temporal dependencies:** \\n Unlike previous methods for introducing cross-variable dependencies that lacked theoretical explanation, we provide theoretical proof showing that our disentangled encoding approach is more effective. This lays a theoretically grounded path for future research in this area, encouraging further developments along this line.\\n\\nOverall, we sincerely thank the reviewer for the valuable suggestions mentioned above. 
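For readers who want to reproduce the flavor of the shuffling experiments reported earlier in this thread, here is a minimal toy sketch. It is entirely our own illustration: the series, window length, and the two predictors are invented for exposition and are not the paper's models. It contrasts an order-invariant predictor (mean pooling), whose error is unchanged when the input window is shuffled, with an order-sensitive recency-weighted predictor, whose error degrades:

```python
import random

# Toy order-sensitivity probe (our illustration, not the paper's pipeline):
# evaluate a forecaster on original vs. time-shuffled input windows and
# compare MSE, mirroring the O.MSE / S.MSE columns above in spirit.
random.seed(0)
series = [1.0 * t + random.gauss(0, 0.05) for t in range(40)]  # noisy trend
LOOKBACK = 8

def windows(xs):
    for i in range(len(xs) - LOOKBACK):
        yield xs[i:i + LOOKBACK], xs[i + LOOKBACK]

def mse(model, shuffled):
    errs = []
    for hist, target in windows(series):
        hist = list(hist)
        if shuffled:
            random.shuffle(hist)  # destroy temporal order only
        errs.append((model(hist) - target) ** 2)
    return sum(errs) / len(errs)

# Recency-weighted predictor: its output depends on the window order.
weights = [i + 1 for i in range(LOOKBACK)]
recency = lambda h: sum(w * v for w, v in zip(weights, h)) / sum(weights)
# Mean pooling: permutation-invariant by construction.
mean_pool = lambda h: sum(h) / LOOKBACK

drop_recency = mse(recency, True) - mse(recency, False)
drop_mean = mse(mean_pool, True) - mse(mean_pool, False)
assert drop_recency > 0          # order-sensitive: shuffling hurts
assert abs(drop_mean) < 1e-9     # order-invariant: shuffling is a no-op
```

In the same spirit, a large average drop under shuffling (as for Linear and Mamba in the table) indicates a model is exploiting temporal order, while a near-zero or negative drop (as for the Transformer) suggests it is not.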
Your feedback has significantly contributed to improving the quality of our manuscript. We hope that we have addressed your concerns and questions. We look forward to further discussions with you and hearing your new evaluations.\n\n- [1] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting, NeurIPS 2021\n- [2] iTransformer: Inverted Transformers Are Effective for Time Series Forecasting, ICLR 2024\n- [3] Is Mamba Effective for Time Series Forecasting?, arXiv\n- [4] TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting, arXiv\n- [5] Are Transformers Effective for Time Series Forecasting?, AAAI 2023\n- [6] Are Language Models Actually Useful for Time Series Forecasting?, NeurIPS 2024\n- [7] Inductive Representation Learning on Large Graphs, NeurIPS 2017\n- [8] Graph Mamba: Towards Learning on Graphs with State Space Models, KDD 2024\n- [9] Bi-Mamba4TS: Bidirectional Mamba for Time Series Forecasting, arXiv\n- [10] MambaTS: Improved Selective State Space Models for Long-term Time Series Forecasting, arXiv\"}", "{\"title\": \"Response to Reviewer biFR [Part 3]\", \"comment\": \">[Q3] How do SAMBA blocks achieve disentangled encoding that successfully separates cross-time and cross-variate dependencies? It seems to me that such entanglements still exist through sum and back propagation.\n\n**[RQ3]** We appreciate the reviewer for pointing this out. This is, in fact, **a key aspect of the disentangled encoding design.** During the disentangled encoding process, we use $x$ layers of SAMBA and $y$ layers of bi-SAMBA (where $x$ and $y$ can be different values) to separately encode cross-time dependency and cross-variate dependency. Regarding the reviewer's concern that entanglements may still exist in the aggregation operation, we will now provide a detailed discussion of how we achieve disentanglement.\n\nFirst, there is a misunderstanding in the reviewer's concern. 
We achieve aggregation by concatenating the separately learned temporal and variate representations and inputting them into the FFN, rather than aggregating them through a sum operation.\n\nThe aggregation formula can be expressed as follows:\n\n$\n\\mathbf{E}\\_o = \\text{FFN}(\\mathbf{E}\\_{\\text{time}} || \\mathbf{E}\\_{\\text{var}}).\n$\n\nThe FFN is a fully connected feed-forward neural network; it can be expressed as:\n\n$\n\\text{FFN}(\\mathbf{x}) = \\mathbf{W}\\_2 \\sigma(\\mathbf{W}\\_1 \\mathbf{x} + \\mathbf{b}\\_1) + \\mathbf{b}\\_2,\n$\n\nwhere $\\mathbf{x} = \\mathbf{E}\\_{\\text{time}} || \\mathbf{E}\\_{\\text{var}}$ represents the concatenated input vector, $\\mathbf{W}\\_1, \\mathbf{W}\\_2$ are weight matrices, $\\mathbf{b}\\_1, \\mathbf{b}\\_2$ are bias vectors, and $\\sigma$ is a non-linear activation function (e.g., ReLU).\n\nSince backpropagation involves the derivatives of the output with respect to the inputs, we analyze the derivatives of $\\mathbf{E}\\_o$ with respect to $\\mathbf{E}\\_{\\text{time}}$ and $\\mathbf{E}\\_{\\text{var}}$.\n\n### Derivative with Respect to $\\mathbf{E}\\_{\\text{time}}$\n\nFor the concatenated input\n\n$\n\\mathbf{x} = \\begin{bmatrix} \\mathbf{E}\\_{\\text{time}} \\\\ \\mathbf{E}\\_{\\text{var}} \\end{bmatrix},\n$\n\nthe derivative of $\\mathbf{E}\\_o$ with respect to $\\mathbf{E}\\_{\\text{time}}$ is:\n\n$\n\\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{E}\\_{\\text{time}}} = \\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{x}} \\cdot \\frac{\\partial \\mathbf{x}}{\\partial \\mathbf{E}\\_{\\text{time}}}.\n$\n\nSince $\\mathbf{x}$ contains $\\mathbf{E}\\_{\\text{time}}$ as its first part:\n\n$\n\\frac{\\partial \\mathbf{x}}{\\partial \\mathbf{E}\\_{\\text{time}}} = 
\\mathbf{I},\n$\n\nwhere $\\mathbf{I}$ is the identity matrix.\n\nThe derivative of the FFN with respect to the input $\\mathbf{x}$ is:\n\n$\n\\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{x}} = \\mathbf{W}\\_2 \\cdot \\text{diag}(\\sigma'(\\mathbf{W}\\_1 \\mathbf{x} + \\mathbf{b}\\_1)) \\cdot \\mathbf{W}\\_1.\n$\n\nThus:\n\n$\n\\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{E}\\_{\\text{time}}} = \\mathbf{W}\\_2 \\cdot \\text{diag}(\\sigma'(\\mathbf{W}\\_1 \\mathbf{x} + \\mathbf{b}\\_1)) \\cdot \\mathbf{W}\\_1 \\cdot \\mathbf{P}\\_{\\text{time}},\n$\n\nwhere $\\mathbf{P}\\_{\\text{time}}$ is the projection matrix that selects the $\\mathbf{E}\\_{\\text{time}}$ part from the concatenated vector.\n\n---\n\n### Derivative with Respect to $\\mathbf{E}\\_{\\text{var}}$\n\nSimilarly, the derivative with respect to $\\mathbf{E}\\_{\\text{var}}$ is:\n\n$\n\\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{E}\\_{\\text{var}}} = \\mathbf{W}\\_2 \\cdot \\text{diag}(\\sigma'(\\mathbf{W}\\_1 \\mathbf{x} + \\mathbf{b}\\_1)) \\cdot \\mathbf{W}\\_1 \\cdot \\mathbf{P}\\_{\\text{var}},\n$\n\nwhere $\\mathbf{P}\\_{\\text{var}}$ is the projection matrix that selects the $\\mathbf{E}\\_{\\text{var}}$ part from the concatenated vector.\n\n---\n\n### Summary\n\n$\n\\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{E}\\_{\\text{time}}} = \\mathbf{W}\\_2 \\cdot \\text{diag}(\\sigma'(\\mathbf{W}\\_1 \\mathbf{x} + \\mathbf{b}\\_1)) \\cdot \\mathbf{W}\\_1 \\cdot \\mathbf{P}\\_{\\text{time}},\n$\n\n$\n\\frac{\\partial \\mathbf{E}\\_o}{\\partial \\mathbf{E}\\_{\\text{var}}} = \\mathbf{W}\\_2 \\cdot \\text{diag}(\\sigma'(\\mathbf{W}\\_1 \\mathbf{x} + \\mathbf{b}\\_1)) \\cdot 
\\\\mathbf{W}\\\\_1 \\\\cdot \\\\mathbf{P}\\\\_{\\\\text{var}}.\\n$\\n\\nWe can observe that the derivative of $\\\\mathbf{E}\\\\_o$ with respect to $\\\\mathbf{E}\\\\_{\\\\text{time}}$ involves only $\\\\mathbf{P}\\\\_{\\\\text{time}}$, while the derivative with respect to $\\\\mathbf{E}\\\\_{\\\\text{var}}$ involves only $\\\\mathbf{P}\\\\_{\\\\text{var}}$. Therefore, **even during backpropagation, the aggregation operation avoids any entanglement.** We hope this explanation helps to resolve your doubts. This content also be included in our Appendix O of revised paper.\\n\\nOverall, We greatly appreciate the reviewer's insights, as they have significantly enhanced the quality of our manuscript. \\nWe hope that we have addressed your concerns and questions. We look forward to your further feedback and evaluations.\\n\\n[1] LLM4TS: Aligning Pre-Trained LLMs as Data-Efficient Time-Series Forecasters, Arxiv 2024\\n\\n[2] CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables, ICML 2024\\n\\n[3] AutoTimes: Autoregressive Time Series Forecasters via Large Language Models, NeurIPS 2024\"}", "{\"title\": \"Thanks for your response\", \"comment\": \"We sincerely appreciate your positive evaluation and recognition of the contributions of this work. Based on your encouraging comments, we were wondering if you might consider reflecting this in the score, as we would greatly value your support in further acknowledging the impact of this study. And if there are any remaining concerns or points where further clarification could enhance your evaluation, we would be happy to address them in more detail.\"}", "{\"title\": \"Thanks for your response [Part 2]\", \"comment\": \">[W3-1] I wonder why are the results for other methods almost identical to those in the paper, while the results for CARD differ significantly. 
Why didn\\u2019t the author directly use the results from the CARD paper?\\n\\n**[RW3-1]** As stated in $\\\\underline{\\\\text{Section B.2 of original paper}}$, **CARD uses a different loss function during training compared to other baselines.** Comparing baselines with different loss functions in the same table is not fair. To ensure fairness, we utilized the official implementation of CARD and trained and evaluated it under the same time series setting to maintain consistency.\\n\\n>[W3-2] In addition, the experimental results need to be carefully verified. For example, the MSE results of iTransformer on ETTm2 dataset should change from 0.246 to 0.250, when the prediction length is 192.\\n\\n**[RW3-2]** We appreciate the reviewer for pointing out the typo. We conducted a thorough review of the experimental results. **The error only affects the results of iTransformer on the ETTm2 dataset,** but the average results for this dataset remain correct. Therefore, **this does not affect the validity of our conclusion that SAMBA achieved state-of-the-art performance.**\"}", "{\"title\": \"Response to Reviewer biFR [Part 1]\", \"comment\": \"Dear Reviewer biFR,\\n\\nThank you for the suggestions regarding the model structure and definition, adding baselines, conducting experiments on the patch length hyperparameter, and disentangling the confusion in encoding have greatly enhanced the comprehensiveness of our experiments and resolved ambiguities. These suggestions significantly improve our paper. **Following your suggestions, we have added new baselines and evaluated SAMBA's sensitivity to patch parameters, reporting 54 new experimental results. We hope these efforts lead to a reassessment of our manuscript.** Below, we provide detailed, point-by-point responses to your concerns.\\n\\n>[W1] The introduction of Mamba structure and its advantages compared with Transformer structure in MTSF should be in more details.\\n\\n**[RW1]** Thank you for your valuable comments. 
Your suggestions help improve the readability of our manuscript and reduce the difficulty for readers to understand. In the $\\\\underline{\\\\text{Appendix G of revised paper}}$, we start with the classical state-space model theory, introduce the advancements in modern SSM, and then highlight Mamba's contributions, **comprehensively outlining the development trajectory of state-space models.** Additionally, we provide **a detailed explanation of Mamba's advantages over Transformer in the LTSF domain.** We hope our efforts have effectively addressed your concerns.\\n\\n>[W2-1] The explanation of semantic dependency is not quite informative, and the boundary between order (temporal) dependency and semantic dependency seems unclear. \\n\\n**[RW2-1]** We appreciate the reviewer's observations about potential limitations. In addition to the formal definitions provided in $\\\\underline{\\\\text{Section 3}}$, we have added a new section in $\\\\underline{\\\\text{Appendix P of revised paper}}$ to explain order dependency and semantic dependency comprehensively. This section is intended to help readers better understand these dependencies and their distinctions.\\n\\n>[W2-2] I wonder why the semantic dependency here does not include historical values from other covariates.\\n\\n**[RW2-2]** Thank you for pointing out this interesting and insightful question. Indeed, historical values from other covariates represent an important dependency. In our definition, **we adopt a more fine-grained perspective to consider the semantics of time series.** LTSF actually involves two types of semantic relationships: **intra-variate patterns,** i.e., patterns within a single variate\\u2019s time series, and **inter-variate patterns,** i.e., interactions between different variates, which corresponds to what you refer to as historical values from other covariates. The semantics we define focus on intra-variate patterns, while cross-variate dependency emphasizes inter-variate patterns. 
Such a definition considers the intrinsic properties of each variate independently, **enabling a more effective analysis of their distinct contributions.**\"}", "{\"comment\": \"Thanks for the detailed response. My concerns have been partially addressed, so I have decided to maintain my original score. Below are the specific reasons:\n\n1. From the response to question 1, I still do not understand the detailed design of the 'disentangled dependency encoding strategy.' The authors claim that they 'explore the direction of introducing distinct inductive biases...', however, **I could not find any specific design regarding 'inductive biases' in the paper** (by searching the keyword 'inductive biases'). In addition, there are some other disentangled methods for the time dimension and the variate dimension (e.g., TimeDRL [1]); the authors should demonstrate the advantages or differences of their methods compared to these existing methods.\n \n2. I carefully read the comparison experiments regarding the nonlinear activation function, including the results in $\underline{\text{Appendix F.3 of revised paper}}$. First, the Exchange dataset is challenging, and it seems that even naive methods (without any parameters) perform better than deep learning-based methods [2]. I would argue that **we cannot be confident that experiments on the Exchange dataset can validate the effectiveness of removing nonlinear activation functions**. Second, the experimental results on Traffic and ETTm1 datasets show that **removing nonlinear activation functions in many variants leads to performance degradation**. This raises concerns about the necessity of removing nonlinear activation functions. In addition, the difference between SAMBA and the existing works is that the authors only remove nonlinear activation functions of Mamba, which further raises concerns about the novelty of the proposed method.\n \n3. 
It seems that most of the experimental results are based on the reproduced results from the Time Series Library. I wonder why the results for other methods are almost identical to those in the paper, while the results for CARD differ significantly. Why didn\u2019t the author directly use the results from the CARD paper? In addition, **the experimental results need to be carefully verified**. For example, the MSE results of iTransformer on ETTm2 dataset should change from 0.246 to 0.250, when the prediction length is 192.\n \n\n[1] Chang C, Chan C T, Wang W Y, et al. TimeDRL: Disentangled Representation Learning for Multivariate Time-Series. ICDE, 2024.\n\n[2] Hewamalage H, Ackermann K, Bergmeir C. Forecast Evaluation for Data Scientists: Common Pitfalls and Best Practices. DMKD, 2023.\"}", "{\"summary\": \"This paper proposes a Mamba-based structure to capture three sources of information from multivariate time series data: order dependency, semantic dependency and cross-variate dependency, and optimizes the Mamba structure by removing nonlinearities and incorporating a theoretically sound disentangled encoding strategy that appropriately integrates cross-variate dependency to the model, enhancing the model\u2019s global representation and predictive capabilities. Experiments on multiple datasets confirm the efficacy of the SAMBA structure.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The overall idea of applying Mamba for MTSF is novel, and the introduction of 'semantic dependency' is novel to the field of time series. Moreover, theoretical properties of the method are also provided.\", \"weaknesses\": \"The introduction of the Mamba structure and its advantages compared with the Transformer structure in MTSF should be described in more detail.\n\nThe explanation of semantic dependency is not quite informative, and the boundary between order (temporal) dependency and semantic dependency seems unclear. 
Moreover, I wonder why the semantic dependency here does not include historical values from other covariates. \n\nMoreover, there are many LLM-based methods for MTSF in recent years, and they are highly recommended to be included in the literature / benchmark methods, such as LLM4TS, GPT4TS, CATS, etc. I will happily raise my score if these concerns are addressed.\", \"questions\": \"I have several comments regarding the methodology details:\n1. Is the model performance highly dependent on the parameters of patching?\n2. The structure of the SAMBA block should be directly compared with MAMBA in Fig 3 to show the difference. \n3. How do SAMBA blocks achieve disentangled encoding that successfully separates cross-time and cross-variate dependencies? It seems to me that such entanglements still exist through sum and back propagation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer y2RJ [Part 2]\", \"comment\": \">[W2-2] Although the authors claim that this approach can mitigate the overfitting issue, Table 3 shows that for the Patch+Mamba method, removing the nonlinear activation function actually results in a performance drop.\n\n**[RW2-2]** Thank you for pointing out this concern. In fact, removing non-linear activation functions is generally beneficial, as confirmed by our ablation experiments. To further address the reviewer\u2019s concern, we conducted large-scale experiments to verify the benefits of removing non-linear activation functions. The table below demonstrates the impact of removing non-linear activation functions on the Exchange dataset. The experimental results show that removing non-linear activation functions is consistently beneficial. As shown in our response to reviewer qdrT [W3-4], this result is also robust to the patch size. 
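To convey the intuition behind this regularization effect, here is a tiny sketch (our illustration, not SAMBA's implementation; the weights are arbitrary) of what removing the activation between two layers does: the stack collapses into a single affine map, a strictly smaller hypothesis class, which is why it can act as a brake on overfitting for low-semantic-density series:

```python
# Toy illustration (our sketch, not SAMBA's code): removing the activation
# between two layers collapses them into ONE affine map, shrinking the
# hypothesis class -- an implicit regularizer against overfitting.

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

W1, b1 = [[0.5, -1.0], [2.0, 0.25]], [0.1, -0.3]
W2, b2 = [[1.5, 0.5]], [0.2]

def two_layer_no_activation(x):   # activation removed
    h = [hi + bi for hi, bi in zip(matvec(W1, x), b1)]
    return [oi + bi for oi, bi in zip(matvec(W2, h), b2)]

# Collapsed single affine map: W = W2 @ W1, b = W2 @ b1 + b2.
W = [[sum(W2[o][k] * W1[k][i] for k in range(2)) for i in range(2)] for o in range(1)]
b = [sum(W2[o][k] * b1[k] for k in range(2)) + b2[o] for o in range(1)]

for x in ([1.0, 2.0], [-3.0, 0.5], [0.0, 0.0]):
    y_two = two_layer_no_activation(x)
    y_one = [yi + bi for yi, bi in zip(matvec(W, x), b)]
    assert all(abs(a - c) < 1e-12 for a, c in zip(y_two, y_one))
```

With a non-linearity between the layers, no such collapse happens; removing it therefore trades representational capacity for robustness, which the ablations above suggest is a favorable trade on these datasets.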
More comprehensive experimental results are included in the $\underline{\text{Appendix F.3 of revised paper}}$.\n\n**Table: The effect of the nonlinear activation function on the model**\n| Dataset | T | Patch+MLP | | Patch+MLP-n | | Patch+Mamba | | Patch+Mamba-n | | Patch+Transformer | | Patch+Transformer-n | |\n|---------------|-----|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|\n| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\n| **Exchange** | 96 | 0.0968 | 0.233 | 0.0945 | 0.219 | 0.0871 | 0.207 | 0.0834 | 0.202 | 0.0861 | 0.204 | 0.0866 | 0.205 |\n| | 192 | 0.269 | 0.392 | 0.156 | 0.285 | 0.176 | 0.298 | 0.174 | 0.295 | 0.183 | 0.303 | 0.180 | 0.302 |\n| | 336 | 0.366 | 0.454 | 0.291 | 0.399 | 0.327 | 0.413 | 0.326 | 0.412 | 0.332 | 0.417 | 0.329 | 0.415 |\n| | 720 | 0.879 | 0.714 | 0.744 | 0.694 | 0.853 | 0.694 | 0.845 | 0.691 | 0.854 | 0.697 | 0.851 | 0.695 |\n| | Avg | 0.403 | 0.448 | 0.321 | 0.399 | 0.361 | 0.403 | 0.357 | 0.400 | 0.364 | 0.405 | 0.362 | 0.404 |\n\n*Note: T means prediction length. `-n` indicates the removal of nonlinear activation functions. Avg means the average results over four prediction lengths: 96, 192, 336, and 720.*\n\n>[W2-3] The authors should provide a theoretical analysis of how removing the nonlinear activation function can mitigate the overfitting issue and validate their claim on additional datasets (e.g., ECL, Traffic, and Weather datasets).\n\n**[RW2-3]** Thank you for providing valuable directions for our work. Theoretical analysis will be the focus of our future research. Following your suggestion, we expanded the scope of our experiments to include the Exchange and Traffic datasets.
**The table below demonstrates that removing non-linear activation functions benefits both Mamba and Transformer models** (more results are included in the $\\\\underline{\\\\text{Appendix F.3 of revised paper}}$). However, the limited improvements on the Traffic dataset, along with the performance decline of MLP, are attributed to the stronger semantic complexity and the involvement of more non-linear relationships in the Traffic dataset. The simplified architecture of MLP, without non-linear activation functions, is unable to effectively handle such complexity. In contrast, the performance improvements observed for Transformer and Mamba suggest that their inherently complex architectures are already capable of handling these relationships, making non-linear activation functions redundant. Training curves in $\\\\underline{\\\\text{Appendix F.3 of revised paper}}$ further support this conclusion.\\n\\n| Datsets | Model | MLP | | Mamba | | Transformer | |\\n|--------------|---------------|--------|--------|--------|--------|-------------|--------|\\n| | | MSE | MAE | MSE | MAE | MSE | MAE |\\n| **Exchange** | Original | 0.398 | 0.419 | 2.255 | 1.189 | 1.994 | 1.117 |\\n| | Original-n | 0.374 | 0.407 | 2.122 | 1.143 | 1.194 | 0.895 |\\n| | Improvement | 6.03% | 2.86% | 5.90% | 3.79% | 40.13% | 19.87% |\\n| **Traffic** | Original | 0.554 | 0.366 | 0.669 | 0.385 | 0.833 | 0.480 |\\n| | Original-n | 0.621 | 0.400 | 0.658 | 0.381 | 0.829 | 0.479 |\\n| | Improvement | -12.27%| -9.28% | 1.57% | 0.98% | 0.42% | 0.16% |\"}", "{\"summary\": \"The authors propose a simplified Mamba with disentangled dependency encoding for long-term time series forecasting, in which the nonlinearities in vanilla Mamba are removed to improve the generalization ability of the framework and a theoretically sound disentangled encoding strategy is introduced to separate the cross-time and cross-variate dependencies. 
Experiments demonstrate the effectiveness of the proposed framework.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The organization of this paper is clear.\n\n2. The definition of three critical dependencies in time series data is novel and interesting.\", \"weaknesses\": \"1. This paper lacks innovation, as the authors only model interactions along the time dimension and the variate dimension separately, without a targeted design for the disentangled dependency encoding strategy. The authors should provide a detailed description of their disentangled dependency encoding strategy.\n\n2. The design of the SAMBA block is also similar to existing works [1, 2]. It seems that the only difference between this work and existing work is the removal of the nonlinear activation function between the Conv1D and SSM layers. Although the authors claim that this approach can mitigate the overfitting issue, Table 3 shows that for the Patch+Mamba method, removing the nonlinear activation function actually results in a performance drop. The authors should provide a theoretical analysis of how removing the nonlinear activation function can mitigate the overfitting issue and validate their claim on additional datasets (e.g., ECL, Traffic, and Weather datasets).\n\n3. The paper has some weaknesses in the experiments, which are not convincing enough:\n\n(1) The authors claim that they implement the baseline results by using the TimesNet GitHub repository in Section B.1. However, they also claim that the full results of predictions come from iTransformer in Section B.2, which is confusing. It would be helpful to have an explanation for these differences.\n\n(2) Some tables lack sufficient explanation, making them difficult to understand. For example, in Table 1, what do the bold results mean? Does the transformer refer to the ordinary transformer framework or a state-of-the-art (SOTA) transformer-based framework (e.g., iTransformer)? 
In addition, in Table 3, why is there no comparison with the Patch+MLP method? The authors should provide a detailed explanation of the notations and abbreviations used in the Tables. In addition, the authors should include comparative experiments with the SOTA transformer-based framework and the Patch+MLP method in Table 1 and Table 3, respectively.\\n\\n(3) Although the authors add the efficiency comparison between SAMBA and the baseline models, there is no clarification on which dataset(s) the comparison is conducted. In addition, the term 'Mem' in the efficiency comparison is not explained, which makes the experiments regarding efficiency comparison confusing. Please specify which dataset(s) are used for the efficiency comparison, and provide an explanation for the 'Mem' metric used for the efficiency comparison.\\n\\n4. There are many typos and writing mistakes in the manuscript. For example, on page 22, \\u201cSolar-Energy)or\\u201d should be \\\"Solar-Energy) or\\\" and \\\"Exchange ) overall\\\" should be \\\"Exchange) overall\\\". The manuscript requires thorough proofreading.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer WkCP [Part 1]\", \"comment\": \"Dear Reviewer WkCP,\\n\\nWe would like to sincerely thank you for providing a detailed review and insightful suggestions, particularly on the discussion about patches and suggestions regarding experimental setups. **Following your suggestions, we conducted over 500 additional experiments and reported new results for more than 250 of them. We hope these efforts lead to a reassessment of our manuscript.** Below, we provide detailed, point-by-point responses to your concerns.\\n\\n>[W1] The paper contains several spelling and formatting errors. The authors should carefully check them before submission. 
Examples include: a) Inconsistent abbreviations, with \u201cLTSF\u201d sometimes written as \u201cLTST\u201d; b) Formatting issues in Theorem 1; c) Various spelling errors.\n\n**[RW1]** Thanks for pointing out these problems. We have revised them and have proofread the entire paper to catch and correct the remaining typos.\n\n>[W2-1] However, according to the paper, the performance improvements from the removal-of-non-linearity regularization method are not particularly significant compared to other strategies, such as patch tokenization. \n\n**[RW2-1]** Thank you for your valuable suggestions. Eliminating non-linear activation functions in Mamba is an effective and simple regularization method that also yields significant performance improvements. As shown in $\underline{\text{Table 3}}$, directly removing non-linear activation functions results in a 5.79% improvement in the MSE metric. Supplementary experiments measured on more datasets, provided in $\underline{\text{Appendix F of revised paper}}$, are consistent with this conclusion. Moreover, even with the use of patch operations, our ablation experiments demonstrate that this is an effective regularization method for Mamba.\n\n>[W2-2] Elimination is not fully implemented, as a non-linear activation function remains in the gating mechanism.\n\n**[RW2-2]** Thank you for pointing out the confusion. As stated in $\underline{\text{Lines 351 and 352}}$, the non-linearity in the gating mechanism is designed to **maintain learning stability and robustness, rather than to learn complex representations. Removing the non-linear activation function here would disrupt the gating mechanism.** Therefore, we retained this non-linearity.\n\n>[W2-3] the overall SAMBA architecture is more complex rather than simplified, compared to the original Mamba for sequence modeling. 
Thus, the authors might consider rebranding their methods to avoid misleading readers.\\n\\n**[RW2-3]** Thank you for the valuable suggestion. SAMBA refers to the Simplified Mamba Block. Regarding the structure of the overall model, we will consider renaming it to avoid misleading interpretations.\\n\\n>[Q1] About order dependency:\\n\\n**[RQ1]** We deeply appreciate and thank the reviewer for raising this intriguing perspective for discussion. Let us elaborate and discuss further:\\n\\n>[Q1-1] Can this order be learned with pure Transformers? The success of LLMs proves that models can learn precise positional information. \\n\\n**[RQ1-1]** Indeed, existing works have shown that LLMs can learn the order of textual sequences, but no work has yet demonstrated that LLMs can capture order dependency in time series. A recent study [1] suggests that **\\\"LLMs do not have unique capabilities for representing sequential dependencies in time series.\\\"** Furthermore, they discovered that \\\"LLMs fail to convincingly improve time series forecasting.\\\" **These findings indicate that the success of LLMs in learning order dependency on text cannot be directly transferred to time series data.** Similarly, our experimental results in $\\underline{\\text{Table 3}}$ and the $\\underline{\\text{Appendix F.1 of revised paper}}$ show that Transformers struggle to effectively learn the order dependencies in time series data. This may be due to the low semantic density of time series data. However, we believe that exploring how LLMs can leverage the characteristics of time series effectively is a promising research direction.\"}", "{\"title\": \"Response to Reviewer WkCP [Part 4]\", \"comment\": \">[Q2-2] The conclusions from Table 2 may also be dataset-sensitive. For datasets with underlying dynamics and shifting multivariate effects, exposing more temporal tokens for algorithm construction in attentions could be more advantageous. 
The authors could extend experiments to datasets like Solar and Exchange to strengthen these claims.\\n\\n**[RQ2-2]** Following your suggestion, we measured the results on the Solar and Exchange datasets. **On the Exchange dataset, patching improves model performance,** aligning with the conclusions in Table 2. However, on the Solar dataset, patching leads to a performance decline, **possibly due to the high proportion of zero values in Solar,** which disrupts the intrinsic semantic information of individual data points when patching is applied. The results of other datasets and more detailed discussions can be found in the $\\underline{\\text{Appendix F.2 of revised paper}}$.\\nOverall, the additional experimental results support our conclusions in Table 2 and align with discussions in [WQ1-3], demonstrating that on datasets with ODE dynamics, **patching enhances the semantic richness of the data, enabling semantic-aware models like Transformers and Mamba to achieve better performance.**\\n\\n| Dataset | Prediction Length | Linear Model | | Patch+Linear Model | | Mamba | | Patch+Mamba | | Transformer | | Patch+Transformer | |
|---------------|--------------|--------------|------|---------------------|------|-------|------|--------------|------|-------------|------|-------------------|------|
| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| **Exchange** | 96 | 0.0832 | 0.201| 0.0823 | 0.207| 1.260 | 0.915| 0.0871 | 0.207| 0.989 | 0.782| 0.0861 | 0.204|
| | 192 | 0.179 | 0.299| 0.165 | 0.302| 1.398 | 1.040| 0.176 | 0.298| 1.265 | 0.913| 0.183 | 0.303|
| | 336 | 0.338 | 0.418| 0.285 | 0.401| 1.835 | 1.111| 0.327 | 0.413| 1.860 | 1.090| 0.332 | 0.417|
| | 720 | 0.903 | 0.714| 0.799 | 0.685| 3.940 | 1.687| 0.853 | 0.694| 3.860 | 1.684| 0.854 | 0.697|
| | **Avg** | 0.376 | 0.408| 0.333 | 0.399| 2.254 | 1.188| 0.361 | 0.403| 1.993 | 1.117| 0.364 | 0.405|
| **Solar-Energy** | 96 | 0.326 | 0.346| 0.357 | 0.439| 0.190 | 0.248| 0.204 | 0.243| 0.201 | 0.269| 0.218 | 0.264|
| | 192 | 0.363 | 0.364| 0.373 | 0.446| 0.224 | 0.292| 0.237 | 0.265| 0.233 | 0.289| 0.250 | 0.284|
| | 336 | 0.402 | 0.378| 0.395 | 0.454| 0.315 | 0.354| 0.254 | 0.277| 0.232 | 0.294| 0.271 | 0.300|
| | 720 | 0.402 | 0.368| 0.393 | 0.445| 0.293 | 0.295| 0.254 | 0.278| 0.216 | 0.280| 0.271 | 0.295|
| | **Avg** | 0.373 | 0.364| 0.380 | 0.446| 0.217 | 0.288| 0.237 | 0.266| 0.220 | 0.283| 0.252 | 0.286|\"}", "{\"title\": \"[**Gentle Reminder**]: Kind Request for Reviewers' Feedback\", \"comment\": \"Dear Reviewer y2RJ,\\n\\nThank you once again for your valuable and constructive review, which has helped us refine our contribution and clarify its strengths.\\n\\nWe would like to kindly remind you that the discussion deadline is approaching. After this deadline, we may not have the opportunity to respond to your comments.\\n\\nAdditionally, during the rebuttal period, we have supplemented our study with **over 250 experimental results** to enhance the comprehensiveness of our experiments and the reliability of our conclusions. These results address your concerns regarding the experiments and are all included in $\\underline{\\text{the revised paper, highlighted in blue}}$.\\n\\nWe sincerely appreciate your dedication and look forward to your feedback.\\n\\nSincerely,\\nICLR 2025 Conference Submission 767 Authors,\"}", "{\"title\": \"Response to Reviewer y2RJ [Part 5]\", \"comment\": \">[W3-3-1] Although the authors add the efficiency comparison between SAMBA and the baseline models, there is no clarification on which dataset(s) the comparison is conducted.\\n\\n**[RW3-3-1]** Thanks for the reviewer's valuable suggestion. Your feedback helps improve the completeness of our experiments. The results in the paper were measured on the ETTm1 dataset. 
**To provide a more comprehensive evaluation, we further measured the results on the Traffic dataset.** The results are shown below and have been included in the $\\underline{\\text{Appendix G of revised paper}}$. SAMBA achieves both a faster training speed and a smaller memory usage compared to many SOTA transformer-based models, such as PatchTST and Crossformer, which also employ attention mechanisms in temporal dimensions.\", \"table\": \"Efficiency Analysis: The GPU memory (MiB) and speed (running time, s/iter) of each model on the Traffic dataset. Mem means memory footprint.\\n\\n\\n| Input Length | 96 | | 336 | | 720 | | 
|----------------|--------------|-------|---------|-------|---------|-------|
| Models | Mem | Speed | Mem | Speed | Mem | Speed |
| SAMBA | 2235 | 0.0403 | 2275 | 0.0711 | 2311 | 0.1232 |
| PatchTST | 3065 | 0.0658 | 12299 | 0.2382 | 25023 | 0.4845 |
| iTransformer | 3367 | 0.0456 | 3389 | 0.0465 | 3411 | 0.0482 |
| DLinear | 579 | 0.0057 | 619 | 0.0082 | 681 | 0.0139 |
| TimesNet | 6891 | 0.2492 | 7493 | 0.4059 | 8091 | 0.6289 |
| Crossformer | 21899 | 0.1356 | 40895 | 0.1369 | 69711 | 0.1643 |
| FEDFormer | 1951 | 0.1356 | 1957 | 0.1369 | 2339 | 0.1643 |
| Autoformer | 1489 | 0.0309 | 1817 | 0.0362 | 2799 | 0.0457 |

>[W3-3-2] In addition, the term 'Mem' in the efficiency comparison is not explained, which makes the experiments regarding efficiency comparison confusing. Please specify which dataset(s) are used for the efficiency comparison, and provide an explanation for the 'Mem' metric used for the efficiency comparison.\\n\\n**[RW3-3-2]** Thank you for the valuable suggestion to avoid misunderstandings. The term Mem refers to the GPU memory utilized during model training. We have added this explanation to the revised paper.\\n\\n>[W4] There are many typos and writing mistakes in the manuscript. 
For example, on page 22, \\u201cSolar-Energy)or\\u201d should be \\\"Solar-Energy) or\\\" and \\\"Exchange ) overall\\\" should be \\\"Exchange) overall\\\". The manuscript requires thorough proofreading.\\n\\n**[RW4]** Thanks for pointing out these problems. We have thoroughly proofread the paper and corrected these and other typos.\\n\\nOverall, we sincerely thank the reviewer for the valuable suggestions mentioned above. Your feedback has significantly contributed to improving the quality of our manuscript. We hope that we have addressed your concerns and questions. We look forward to further discussions with you and hearing your new evaluations.\\n\\n[1] MambaTS: Improved Selective State Space Models for Long-term Time Series Forecasting, Arxiv\\n\\n[2] C-Mamba: Channel Correlation Enhanced State Space Models for Multivariate Time Series Forecasting, Arxiv\\n\\n[3] Time series modeling and forecasting with sample convolution and interaction, NeurIPS 2022\\n\\n[4] MICN: Multi-scale local and global context modeling for long-term series forecasting, ICLR 2023\"}", "{\"title\": \"Response to Reviewer qdrT [Part 2]\", \"comment\": \">[W3] Insufficient Experiments Leading to Unreliable Conclusions\\n\\n>[W3-1] The analyses in Section 4, which inform the design of SAMBA, are primarily based on experiments conducted on the small ETTm1 dataset and compared across basic neural architectures like Linear, vanilla MLP, and Transformer. \\n\\n**[RW3-1]** Thank you for the valuable suggestion regarding the comprehensiveness of our experiments. To address the reviewer\\u2019s concerns, we have extended the $\\underline{\\text{Section 4}}$ experiments to include more datasets. According to your suggestions, we included the latest SOTA model, iTransformer, which uses a Linear Model to encode temporal relationships and is therefore expected to be sensitive to time order. 
**The following experimental results demonstrate that both Linear and Mamba effectively capture order dependency.** However, they also reveal that the Transformer even exhibits improved performance on the Exchange dataset after shuffling, indicating that Transformer models fail to learn the critical order dependency in time series. The results of other datasets and more detailed discussions can be found in $\\underline{\\text{Appendix F of revised paper}}$.\\n\\n| Datasets | Prediction Length | Linear Model | | | | Mamba | | | | Transformer | | | | iTransformer | | | |
|--------------|--------------------|--------------|--------|-------|---------|-------|--------|--------|----------|-------------|------|------|---------|-------------|------|-------|-------------|
| | | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE| O.MAE| S.MAE | O.MSE | S.MSE| O.MAE | S.MAE |
| **ETTm1** | 96 | 0.383 | 0.988 | 0.400 | 0.697 | 0.517 | 0.922 | 0.508 | 0.688 | 0.643 | 0.884| 0.575| 0.643 | 0.345 | 0.892| 0.378 | 0.610 |
| | 192 | 0.413 | 0.986 | 0.415 | 0.697 | 0.575 | 0.931 | 0.546 | 0.699 | 0.805 | 1.01 | 0.664| 0.730 | 0.383 | 0.903| 0.395 | 0.617 |
| | 336 | 0.441 | 0.987 | 0.435 | 0.698 | 0.730 | 0.957 | 0.634 | 0.703 | 0.882 | 1.12 | 0.737| 0.817 | 0.423 | 0.923| 0.420 | 0.630 |
| | 720 | 0.497 | 0.992 | 0.469 | 0.704 | 0.873 | 0.973 | 0.704 | 0.723 | 0.928 | 1.12 | 0.752| 0.800 | 0.489 | 0.932| 0.456 | 0.641 |
| | **Avg. Drop** | - | 127.97% | - | 62.55% | - | 40.37% | - | 17.60% | - | 22.40% | - | 6.55% | - | 122.56% | - | 51.5% |
| **Exchange** | 96 | 0.0832 | 0.210 | 0.201 | 0.332 | 1.260 | 1.401 | 0.915 | 0.943 | 0.730 | 0.738| 0.782| 0.722 | 0.0869 | 0.242| 0.207 | 0.358 |
| | 192 | 0.179 | 0.325 | 0.299 | 0.414 | 1.398 | 1.626 | 1.040 | 1.060 | 1.304 | 1.284| 0.913| 0.949 | 0.179 | 0.374| 0.301 | 0.450 |
| | 336 | 0.338 | 0.521 | 0.418 | 0.534 | 1.835 | 1.921 | 1.111 | 1.141 | 1.860 | 1.862| 1.090| 1.085 | 0.331 | 0.535| 0.417 | 0.557 |
| | 720 | 0.903 | 1.167 | 0.714 | 0.822 | 3.940 | 4.023 | 1.687 | 1.697 | 3.860 | 3.865| 1.684| 1.685 | 0.856 | 1.202| 0.698 | 0.841 |
| | **Avg. Drop** | - | 47.89% | - | 28.80% | - | 6.38% | - | 1.85% | - | -0.06% | - | -0.63% | - | 63.33% | - | 35.89% |

*Note: O.MSE and O.MAE are evaluated on the original test set. S.MSE and S.MAE are evaluated on the shuffled test set.*

>[W3-2] For instance, Table 1 shows Transformers' reduced sensitivity to order perturbation, which may actually be beneficial for encoding effective temporal representations\\n\\n**[RW3-2]** We appreciate the reviewer for presenting an interesting perspective, **but this perspective contradicts the current work [5][6].** In $\\underline{\\text{Section 4.1}}$, our work adopts the same settings as DLinear [5] to demonstrate that Mamba effectively captures order dependency. 
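To make the Avg. Drop metric fully explicit: each value is the relative increase of the horizon-averaged error on the shuffled test set over the horizon-averaged error on the original test set. A minimal Python sketch, using the Linear/ETTm1 MSE entries from the table above, reproduces the reported 127.97%:

```python
# Avg. Drop = relative increase (%) of the horizon-averaged shuffled-test
# error over the horizon-averaged original-test error.
def avg_drop(original, shuffled):
    mean_o = sum(original) / len(original)
    mean_s = sum(shuffled) / len(shuffled)
    return (mean_s - mean_o) / mean_o * 100

# Linear model on ETTm1, MSE at horizons 96/192/336/720 (values from the table).
o_mse = [0.383, 0.413, 0.441, 0.497]  # original test set
s_mse = [0.988, 0.986, 0.987, 0.992]  # shuffled test set

print(f"{avg_drop(o_mse, s_mse):.2f}%")  # → 127.97%
```

A large drop indicates that the model relied heavily on the temporal order of the input, i.e., it captured order dependency.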
**The importance of effectively capturing order dependency for accurate time series forecasting has already been proven by DLinear.** This view has also been acknowledged in recent work [6], which uses it to demonstrate that LLMs are unable to effectively learn the order dependency in time series.\\n\\n>[W3-3] Table 2 investigates patching effectiveness, originally introduced in PatchTST, yet PatchTST itself is not included in the analysis.\\n\\n**[RW3-3]** We have already clarified in $\\underline{\\text{Appendix B.4 (Implementation) of original paper}}$ that \\\"Patch + Transformer\\\" is PatchTST.\"}", "{\"title\": \"Response to Reviewer biFR [Part 2]\", \"comment\": \">[W3] Moreover, there are many LLM-based methods for MTSF in recent years, and is highly recommended to be included in literature / benchmark methods such as LLM4TS, GPT4TS, CATS etc.\\n\\n**[RW3]** Thank you for pointing out several impactful and interesting works. Incorporating these as baselines will enhance the comprehensiveness of our manuscript. Indeed, GPT4TS has already been included as a baseline under the name FTP in our work. Unfortunately, we could not find an official code implementation for LLM4TS [1], and CATS [2] faces a similar issue, as only a Jupyter Notebook is provided without a formal implementation. We plan to reproduce the aforementioned works in the future and include them as baselines. Due to the tight rebuttal schedule, however, we incorporated AutoTimes [3], a recent LLM-based work published at NeurIPS 2024, and used its GPT-2 implementation as a baseline. 
The experimental results demonstrate that SAMBA still achieves the best performance.\\n\\n| Models | SAMBA (Ours) | | AutoTimes (2024) | | FTP (2023) | |
|----------------|------------------|------------------|------------------|------------------|------------------|------------------|
| | MSE | MAE | MSE | MAE | MSE | MAE |
| **ECL** | 0.172 | 0.268 | 0.188 | 0.275 | 0.210 | 0.291 |
| **ETTh1** | 0.443 | 0.432 | 0.464 | 0.451 | 0.450 | 0.439 |
| **ETTh2** | 0.363 | 0.392 | 0.403 | 0.417 | 0.385 | 0.411 |
| **ETTm1** | 0.378 | 0.394 | 0.392 | 0.409 | 0.392 | 0.401 |
| **ETTm2** | 0.276 | 0.322 | 0.289 | 0.334 | 0.285 | 0.331 |
| **Exchange** | 0.356 | 0.401 | 0.366 | 0.405 | 0.368 | 0.406 |
| **Traffic** | 0.422 | 0.276 | 0.501 | 0.330 | 0.511 | 0.334 |
| **Weather** | 0.249 | 0.278 | 0.270 | 0.293 | 0.267 | 0.287 |
| **Solar-Energy** | 0.229 | 0.253 | 0.256 | 0.297 | 0.260 | 0.304 |

*Note: Reported results are the averages over four prediction horizons: 96, 192, 336, and 720.*

>[Q1] Is the model performance highly dependent on the parameters of patching?\\n\\n**[RQ1]** Thank you to the reviewer for raising the question about the patch length hyperparameter. To address this, we conducted experiments on the ETTm1, ETTh1, and Weather datasets with patch sizes from 2 to 32. The results are as follows. 
**The experimental results indicate that SAMBA is robust to changes in patch length.** The visualization results are included in $\\underline{\\text{Appendix H of revised paper}}$.\\n\\n| Dataset | Metric | 2 | 4 | 8 | 16 | 24 | 32 |
|-----------|---------|--------|--------|--------|--------|--------|--------|
| **ETTm1** | MSE | 0.329 | 0.320 | 0.320 | 0.315 | 0.317 | 0.327 |
| | MAE | 0.366 | 0.360 | 0.359 | 0.357 | 0.358 | 0.365 |
| **ETTh1** | MSE | 0.422 | 0.383 | 0.382 | 0.376 | 0.385 | 0.386 |
| | MAE | 0.427 | 0.409 | 0.401 | 0.400 | 0.404 | 0.406 |
| **Weather** | MSE | 0.180 | 0.175 | 0.167 | 0.165 | 0.174 | 0.178 |
| | MAE | 0.223 | 0.220 | 0.212 | 0.214 | 0.222 | 0.223 |

>[Q2] The structure of SAMBA block should be directly compared with MAMBA in Fig 3 to show the difference.\\n\\n**[RQ2]** Thank you for the suggestion. In the $\\underline{\\text{Appendix U of revised paper}}$, we have included a direct comparison between SAMBA and the Mamba block to help readers better understand the differences between the two.\"}", "{\"title\": \"Response to Reviewer y2RJ [Part 1]\", \"comment\": \"Dear Reviewer y2RJ,\\n\\nWe would like to sincerely thank you for providing a detailed review and insightful suggestions. **Following your suggestions, we conducted over 500 additional experiments and reported new results for more than 250 of them. We hope these efforts lead to a reassessment of our manuscript.** Below, we provide detailed, point-by-point responses to your concerns.\\n\\n>[W1] This paper lacks innovation, as the authors only model interactions along time dimension and variate dimension separately, without the targeted design towards disentangled dependency encoding strategy. The authors should provide a detailed description towards their disentangled dependency encoding strategy.\\n\\n**[RW1]** Thank you for the constructive suggestion. 
We understand that the reviewer may be concerned about the lack of explicit mechanisms to ensure successful disentanglement, similar to what is done in the Disentangled Representation Learning (DRL) literature. DRL focuses on identifying representations of latent variables within the Data Generation Process (DGP), often relying on tailored regularizers grounded in strict assumptions on the DGP, which limits its broad applicability.\\n\\n**Our main objective is to improve the modeling of both cross-time and cross-variate dependencies in LTSF data.** While using DRL to generate latent representations that characterize these two dependencies separately is a meaningful direction, designing a universal approach that ensures identifiability across time series data in diverse domains is very challenging\\u2014given the significant DGP differences between datasets. For example, in traffic datasets, the cross-variate dependency originates from human movement patterns, while the cross-variate dependency in electricity data originates from semantic relations among the various attributes of electric power.\\n**Instead, our approach explores the direction of introducing distinct inductive biases into the encoders to encourage disentanglement.** Specifically, we alter the dimension of sequential modeling within the encoder. We have empirically validated this strategy across various backbone models. Additionally, we provide theoretical guarantees that, under mild assumptions about the two dependencies, our disentangled encoding strategy outperforms the existing time-then-variate encoding approach. 
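Schematically, the dimension-swapping idea can be sketched as follows. This is a toy illustration, not our actual implementation: `seq_encode` is a hypothetical placeholder for the real sequence encoder (a SAMBA block in our case), and the only point being illustrated is that the two encoders differ in which axis they treat as the sequence dimension.

```python
import numpy as np

def seq_encode(x):
    # Hypothetical stand-in for a sequence model (e.g., a bidirectional
    # Mamba/SAMBA block): a causal cumulative mean along axis 1.
    return np.cumsum(x, axis=1) / np.arange(1, x.shape[1] + 1)[None, :, None]

batch, time, variate = 8, 96, 7
series = np.random.randn(batch, time, variate)

# Cross-time encoder: the sequence axis is time; variates stay independent.
h_time = seq_encode(series)                                       # (8, 96, 7)

# Cross-variate encoder: transpose so the sequence axis is the variate
# dimension, encode, and transpose back -- the same machinery, but a
# different inductive bias about which dependency it should capture.
h_var = seq_encode(series.transpose(0, 2, 1)).transpose(0, 2, 1)  # (8, 96, 7)

assert h_time.shape == h_var.shape == (batch, time, variate)
```

Because each encoder sees only one axis as its sequence dimension, the two resulting representations specialize in cross-time and cross-variate dependencies, respectively, without interfering with each other.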
We believe our strategy possesses generality and novelty within the LTSF literature.\\n\\nFinally, apart from the disentangled strategy, we also make other innovative contributions in this paper: (1) Identification and formal definition of three critical dependencies in time series data; (2) In-depth analysis of Mamba\\u2019s advantages; (3) Discovering and addressing overfitting issues caused by model non-linearities.\\n\\n>[W2-1] The design of the SAMBA block is also similar to existing works [1, 2]. It seems that the only difference between this work and existing work is the removal of the nonlinear activation function between the Conv1D and SSM layers.\\n\\n**[RW2-1]** Thank you for the reviewer\\u2019s suggestions. We indeed focus on studying different modules of the Mamba. However, the existing works on Mamba that remove Conv1D convolutions [1][2] rely on prior findings [3][4], whereas **our proposal to remove non-linear activation functions is based on our empirical analysis and is presented here for the first time.**\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"I appreciate the authors' considerable efforts in providing a thorough response.\\n\\nHowever, I do not believe my critical concerns have been adequately addressed, particularly regarding the necessity of adapting Mamba to time-series data.\\n\\nThe primary demonstrated strength of Mamba lies in its efficiency when processing long contexts, rather than in improved expressiveness or learning capabilities (https://arxiv.org/abs/2406.07887). Consequently, I feel the authors may have overlooked the best opportunity for adapting Mamba to time-series applications.\\n\\nRegarding the claimed superiority of preserving order dependency (citing arguments based on DLinear, a paper published over two years ago), I do not see a clear necessity for preserving order dependency to achieve accurate forecasts. 
For instance, there have been significant advancements in improving time-series Transformers since then, such as reversible instance normalization (https://openreview.net/forum?id=cGDAkQo1C0p) and PatchTST (https://arxiv.org/abs/2211.14730), which substantially enhance vanilla Transformer performance in time-series forecasting without relying on order sensitivity. As such, I remain unconvinced by the designed experiments perturbing input order to justify whether a model is better suited for time-series forecasting. A simple recurrent neural network (e.g., GRU, LSTM) may show higher sensitivity to order perturbations, but this does not imply superior learning or generation capabilities compared to Transformers.\\n\\nFurthermore, while cross-variate dependency does not inherently involve order dependency, the design of a bidirectional Mamba raises additional concerns. How would the model handle other perturbed orders? Would an exponentially growing number of Mamba branches be required to encode various order permutations? I fail to see the necessity or rationale for using Mamba to model variate dependency effectively. In contrast, self-attention could be a perfect fit for modeling cross-variate depdency.\\n\\nIn summary, I believe this work misses the key strength of Mamba, and its claimed properties, contributions, and novel designs appear unnecessary and unsubstantiated.\"}", "{\"title\": \"Response to Reviewer qdrT [Part 3]\", \"comment\": \">[W3-4] Additionally, the impact of patch size on results and whether all datasets lead to the same conclusion are unexplored.\\n\\n**[RW3-4]** Thank you for the valuable suggestion regarding the comprehensiveness of our experiments. We follow your suggestion to analyze the impact of patch size. 
**The results are shown in the table below, and our conclusion is that removing nonlinear activation functions helps mitigate overfitting and is robust to patch size.** Results from additional datasets also align with our conclusion; please refer to $\\\\underline{\\\\text{Appendix F of revised paper}}$ for more details.\\n| Dataset | patch size | Patch+MLP | | Patch+MLP-n | |Patch+Mamba | | Patch+Mamba-n | | Patch+Transformer | |Patch+ Transformer-n | |\\n|---------------|-----|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|----------------|------------|------------|\\n| | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |\\n| **ETTm1** | 2 | 0.429 | 0.431 | 0.413 | 0.415 | 0.410 | 0.415 | 0.400 | 0.414 | 0.418 | 0.424 | 0.407 | 0.418 |\\n| | 8 | 0.427 | 0.430 | 0.413 | 0.414 | 0.402 | 0.412 | 0.397 | 0.412 | 0.417 | 0.424 | 0.407 | 0.416 |\\n| | 16 | 0.424 | 0.427 | 0.411 | 0.412 | 0.402 | 0.412 | 0.399 | 0.414 | 0.414 | 0.422 | 0.406 | 0.417 |\\n| | 32 | 0.425 | 0.427 | 0.410 | 0.411 | 0.405 |0.414 | 0.398 | 0.414 | 0.415 | 0.422 | 0.410 | 0.419 |\\n| **Exchange** | 2 | 0.401 | 0.445 | 0.327 | 0.401 | 0.365 | 0.405 | 0.361 | 0.401 | 0.370 | 0.408 | 0.364 | 0.405 |\\n| | 8 | 0.399 | 0.444 | 0.319 | 0.397 | 0.360 | 0.402 | 0.355 | 0.398 | 0.369 | 0.407 | 0.363 | 0.403 |\\n| | 16 | 0.403 | 0.448 | 0.321 | 0.399 | 0.361 | 0.403 | 0.357 | 0.400 | 0.364 | 0.405 | 0.362 | 0.404 |\\n| | 32 | 0.402 | 0.447 | 0.321 | 0.398 | 0.362 | 0.403 | 0.358 | 0.401 | 0.365 | 0.407 | 0.362 | 0.404 |\\n\\n*Note: `-n` indicates the removal of nonlinear activation functions. Reported results are the averages over four prediction horizons: 96, 192, 336, and 720.*\\n\\n>[W3-5] Other factors, such as TSMixer extending MLP-based architectures on time-series and PatchTST's designs to prevent overfitting, are not considered in Table 3. 
Consequently, the analyses in Section 4 are unconvincing regarding the necessity of introducing Mamba.\\n\\n**[RW3-5]** $\\underline{\\text{Section 4}}$ focuses on the potential of vanilla Linear, Transformer, and Mamba architectures as backbones for time series forecasting. **Introducing too many new techniques would increase the complexity of the analysis.** In $\\underline{\\text{Section 4}}$, to evaluate semantic dependencies and verify the role of nonlinear activation functions, we introduced the Patching technique. **As answered in $\\underline{\\text{[W2-2]}}$, Patch+Transformer is PatchTST.** Our $\\underline{\\text{Section 4}}$ results show that Mamba is the only model among the three that can simultaneously capture both order dependency and semantic dependency without additional techniques, demonstrating its potential and superiority as a backbone for time series modeling.\\n\\n>[W4] Seemingly Downgraded Performance of Baselines\\n\\n**[RW4]** In $\\underline{\\text{lines 432\\u2013433 of the paper}}$, we have already clarified in the introduction of baselines: \\u201cWe carefully use 13 popular LTSF forecasting models as our baselines and we cite their performance from Liu et al. (2023) if applicable.\\u201d Therefore, **the reported results for PatchTST are based on the results provided in iTransformer.**\\nAccording to the explanation in the iTransformer paper, this difference arises from:\\n\\n- **Enlarged lookback window**: The PatchTST paper adopts tunable lookback lengths (336, 512), while our method uniformly uses a length of 96, following the unified long-term forecasting protocol of TimesNet. \\n- **More epochs to train**: The PatchTST paper trains the PatchTST model with 100 epochs, whereas we train all models with only 10 epochs. 
\\n- **Learning rate**: The PatchTST paper adopts a carefully designed learning rate strategy.\"}", "{\"title\": \"Response to Reviewer WkCP [Part 2]\", \"comment\": \">[Q1-2] Constructing algorithms between tokens in attention (or SSM in Mamba) involves learning the sequence's semantic information. The richer the semantic information, the more we need to model algorithms at a finer granularity (shorter patch sizes). The paper suggests that patching enhances semantic dependency learning, but this seems contradictory.\\n\\n**[RQ1-2]** The reviewer raised a valuable discussion regarding patching. We agree with the notion that richer semantics require finer-grained modeling. **This statement does not conflict with our Assumption 2.** The former refers to data that inherently has strong semantic information, such as text, where its rich semantics require fine-grained processing. In contrast, the time series data we study are semantically sparse [2][3]. As noted in PatchTST [3], \\\"A single time step does not have semantic meaning like a word in a sentence; thus, extracting local semantic information is essential in analyzing their connections.\\\" **Hence, Patches in time series data enhance locality and capture comprehensive semantic information that cannot be achieved at the point level by aggregating time steps into subseries-level patches [3].** Therefore, our Assumption 2 is reasonable.\\n\\n>[Q1-3] The authors could experiment on datasets with richer semantic information (e.g., ODE-based datasets with underlying dynamics) to see if pure timestep embedding outperforms patch embedding.\\n\\n**[RQ1-3]** Regarding the comparison of patch embeddings versus pure time steps on datasets with strong ODE dynamics, this is an intriguing discussion, and we appreciate the reviewer's deep insights into this area. 
**Existing work [4] demonstrates that even on datasets with strong ODE dynamics, patch embeddings still achieve significant performance improvements.** This aligns with our previous discussions on the semantic sparsity of time series. However, it is worth noting that this work also shows that **datasets with strong ODE dynamics are sensitive to patch size**. Larger patches may harm performance, which is consistent with the reviewer's speculation. Overall, this work supports findings from previous studies and aligns with our Assumption 2 while reflecting the reviewer's valuable insight: **the semantics of individual time series points are sparse, and using appropriately sized patches can enhance the semantic richness of the data.** However, excessively large patches may disrupt the sequence's semantic information.\\n\\n>[Q1-4] In summary, does Mamba have an advantage over Transformer-based models in modeling order information as claimed? It seems so, but this advantage may come from patching tokenization and possibly from Mamba's SSM handling of sequential/causal token information. The authors could compare Mamba to Transformers using causal masks/training methods to further establish Mamba's structural superiority in LTSF. \\n\\n**[RQ1-4]** Thank you for the reviewer's discussion on whether Mamba can better model order dependencies. When conducting the experiments in Table 1, we ensured that both Mamba and the Transformer used the same tokenization method as suggested by the Time Series Library GitHub repository. **This ensured fairness in the experiments** and avoided any advantage of Mamba in capturing order dependencies stemming from unfair tokenization methods. As a result, the advantage is attributable to the recursive processing of the SSM compared to attention mechanisms. We also tested the performance of a transformer trained using a causal-masked autoregressive approach on the Exchange dataset. 
**The experimental results show that the causal Transformer is more sensitive to order than the standard Transformer but still underperforms Mamba.**\\n\\n| Datasets | Prediction Length | Mamba | | | | Transformer | | | | Causal Transformer | | | |
|--------------|--------------------|-------|-------|-------|--------|------|-------|-------|----------|------|------|-------|---------------|
| | | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE | O.MAE | S.MAE |
| **Exchange** | 96 | 1.260 | 1.401 | 0.915 | 0.943 | 0.730 | 0.738 | 0.782 | 0.722 | 0.570 | 0.584 | 0.610 | 0.629 |
| | 192 | 1.398 | 1.626 | 1.040 | 1.060 | 1.304 | 1.284 | 0.913 | 0.949 | 1.182 | 1.259 | 0.918 | 0.938 |
| | 336 | 1.835 | 1.921 | 1.111 | 1.141 | 1.860 | 1.862 | 1.090 | 1.085 | 1.405 | 1.445 | 0.943 | 0.945 |
| | 720 | 3.940 | 4.023 | 1.687 | 1.697 | 3.860 | 3.865 | 1.684 | 1.685 | 3.532 | 3.605 | 0.698 | 0.837 |
| | **Avg. Drop** | - | 6.38% | - | 1.85% | - | -0.06% | - | -0.63% | - | 3.05% | - | 1.35% |\"}", "{\"title\": \"Thanks for your response [Part 1]\", \"comment\": \"We appreciate the reviewer\\u2019s feedback, and we will address their points individually.\\n\\n>[W1-1] I could not find any specific design regarding 'inductive biases' in the paper\\n\\n**[RW1-1]** In fact, **inductive bias is a fundamental concept in machine learning and a standard piece of academic terminology.** The inductive bias (also known as learning bias) of a learning algorithm refers to the set of assumptions that the learner uses to predict the outputs for given inputs it has not encountered before [1]. For instance, the inductive bias of CNNs includes translation invariance, as well as the assumption that local regions in the input data typically contain meaningful information. In our case, we used the term \\\"inductive bias\\\" to more clearly illustrate our architectural design principle in contrast to DRL, and we will make this usage explicit in the revised version of our paper. 
Similar to GNNs, which prioritize neighboring nodes for target node prediction, the two encoders in our strategy assume that **cross-time or cross-variate dependencies are more prominent, which naturally leads to their respective encoding dimensions.** Incorporating these biases enables the encoders to concentrate on distinct dependencies without mutual interference.\\n\\n>[W1-2] there are some other Disentangled methods towards the time dimension and the variate dimension (e.g., TImeDRL [1]), the authors should demonstrate the advantages or differences of their methods compared to these existing methods.\\n\\n**[RW1-2]** In fact, **the paper provided by the reviewer does not focus on the decoupling of temporal and variable dimensions.** Instead, it introduces a concept inspired by the [CLS] token in BERT to design a timestamp-level embedding and an instance-level embedding, thereby achieving the decoupling of local representation and global representation. This is entirely different from the disentangling of temporal and variable dimensions that we proposed. The difference between our work and **the most closely related CARD** has already been discussed in the $\\underline{\\text{Related Work section of original paper}}$.\\n\\n>[W2-1] I would argue that we cannot be confident that experiments on the Exchange dataset can validate the effectiveness of removing nonlinear activation functions. Second, the experimental results on Traffic and ETTm1 datasets show that removing nonlinear activation functions in many variants leads to performance degradation. \\n\\n**[RW2]** First, **I did not find any reference in the materials [2] suggesting that non-deep learning methods perform better than deep learning methods in Exchange dataset.** Secondly, we provided the simplest linear layer approach, which is essentially a linear estimation model.
The experimental results show that its performance is worse than that of the Patch+Transformer and Patch+Mamba, which contradicts the conclusion the reviewer has suggested. Third, the Exchange dataset is already widely used in the LTSF field, indicating that the dataset's value has been recognized in this area. We have provided an explanation for the performance decline in many variants. Furthermore, the continuous performance improvement observed with Mamba after removing the activation functions underscores the importance of this approach.\\n\\n>[W2-2] In addition, the difference between SAMBA and the existing works is that the authors only remove nonlinear activation functions of Mamba, which further raises concerns about the novelty of the proposed method.\\n\\n**[RW2-2]** As the reviewer noted in [W1-2], **we also proposed a model-agnostic disentangled encoding method with theoretical guarantees, which is one of the key contributions of our work.** The reviewer's assessment of the novelty of our approach should not overlook this method. Unlike previous methods for introducing cross-variable dependencies that lacked theoretical explanation, we provide theoretical proof showing that our disentangled encoding approach is more effective. This lays a theoretically grounded path for future research in this area, encouraging further developments along this line.\\n\\n[1] The Need for Biases in Learning Generalizations.\"}", "{\"title\": \"Response to Reviewer qdrT [Part 1]\", \"comment\": \"Dear Reviewer qdrT,\\n\\nWe sincerely appreciate the reviewer’s valuable feedback. **Following your suggestions, we conducted over 500 additional experiments and reported new results for more than 250 of them.
We hope these efforts lead to a reassessment of our manuscript.** Below, we provide detailed, point-by-point responses to your concerns.\\n\\n>[W1] Overlooking the Unique Advantage of Mamba: Modeling Long Sequences.\\n\\n**[RW1]** \\n- First, we want to emphasize that **lookback window 96 is a common setting in the LTSF domain,** widely adopted by most works [1][2], and are also a standard setting for many Mamba for LTSF studies [3][4].\\n- Second, rather than directly leveraging the advantages of Mamba already proven in other domains, **our work reveals an unexplored advantage of Mamba in the LTSF domain,** namely its ability to simultaneously capture order dependency and semantic dependency, which adds greater value to our research. \\n- Moreover, **we do not deny Mamba's strengths in long-sequence modeling,** and we have already demonstrated in $\\\\underline{\\\\text{Appendix I of original paper}}$ that Mamba can effectively leverage longer lookback windows from 48 to 720.\\n\\n>[W2-1] The claim, \\\"However, most approaches still struggle to comprehensively capture reliable and informative dependencies inherent in time series data,\\\" lacks rigor.\\n\\n**[RW2-1]** Based on the three types of dependencies we defined, we empirically find that existing Transformer and Linear models are unable to simultaneously capture both order dependency and semantic dependency (see Section 4 and the additional results on more datasets provided in the $\\\\underline{\\\\text{Appendix F of revised paper}}$). Therefore, models based on Transformers and Linear architectures cannot effectively capture these two dependencies simultaneously.\\n\\n>[W2-2] The assertion that \\\"they struggle with perceiving temporal order due to the permutation-invariant nature of self-attention, even with positional encodings,\\\" is debatable. 
\\n\\n**[RW2-2]** The conclusion that \\\"they struggle with perceiving temporal order due to the permutation-invariant nature of self-attention, even with positional encodings\\\" is a viewpoint we cited from DLinear. In the DLinear abstract, the original statement is: \\\"While employing positional encoding and using tokens to embed sub-series in Transformers facilitate preserving some ordering information, the nature of the permutation-invariant self-attention mechanism inevitably results in temporal information loss.\\\" Therefore, **this perspective is not a novel claim we are proposing but rather a conclusion that has been discovered by others.**\\n\\n>[W2-3] Experiments in Table 1 only demonstrate that Transformers are less sensitive to order permutation than Linear and Mamba, which is expected. It is not clear how sensitivity to sequence order impacts the capability of learning effective temporal representations.\\n\\n**[RW2-3]** We appreciate the reviewers' possible inference regarding this conclusion. However, existing studies [5][6] suggest that **this effect is not unclear but rather explicit:** a good time series predictor should be sensitive to the order of the sequence. **This perspective is clearly stated in Section 5.3 of DLinear [5]: \\\"in time series forecasting, the sequence order often plays a crucial role.\\\"** This view has also been acknowledged in recent work [6], which uses it to demonstrate that LLMs are unable to effectively learn the order dependency in time series.\\n\\n>[W2-4] use of Mamba to model cross-variate dependency seems flawed, as time-series variates lack a ground-truth order. 
Following the author's logic, using an order-sensitive model like Mamba for cross-variate dependency might not be appropriate.\\n\\n**[RW2-4]** As stated in $\\\\underline{\\\\text{Appendix C of original paper, \\\"Why Choose Mamba To Encode Cross-variate Dependency?\\\"}}$, using sequential models to process unordered sequences is feasible, as evidenced by using LSTM or Mamba to encode unordered graph neighbors [7][8]. Additionally, considering the unordered nature of variable sequences, we designed a bidirectional Mamba to encode them effectively.\\n\\n>[W2-5] Statement \\\"existing approaches that utilize cross-variate dependency (Channel Dependent, CD) frequently underperform compared to methods that treat each variate independently (Channel-Independent, CI),\\\" is not universally applicable. The performance of CD and CI approaches highly depends on specific cases. Studies, such as iTransformer, have demonstrated the benefits of cross-variate modeling.\\n\\n**[RW2-5]** We kindly request reviewers not to overlook the context surrounding this statement. In the following sentence, we emphasize that iTransformer largely addresses this issue but still fails to account for the interplay between the temporal and variable dimensions. Therefore, we propose a disentangled encoding approach to model cross-variable dependencies more effectively, which is also theoretically validated.\"}", "{\"title\": \"Response to Reviewer y2RJ [Part 4]\", \"comment\": \">[W3-2-3] In addition, in Table 3, why is there no comparison with the Patch+MLP method? The authors should include comparative experiments with the SOTA transformer-based framework and the Patch+MLP method in Table 1 and Table 3, respectively.\\n\\n**[RW3-2-3]** Thank you for the valuable suggestion regarding the comprehensiveness of our experiments. 
Following your suggestion, we included the latest SOTA model, iTransformer, which uses a Linear Model to encode temporal relationships and is therefore expected to be sensitive to time order. The results are shown in the table below. **The following experimental results demonstrate that both Linear and Mamba effectively capture order dependency.** The additional results for Patch+MLP regarding Table 3 have already been presented in [RW2-2]. More comprehensive results for the above two tables are included in the $\\\\underline{\\\\text{Appendix F of revised paper}}$.\\n| Datasets | Prediction Length | Linear Model | | | | Mamba | | | | Transformer | | | | iTransformer| | | |\\n|--------------|--------------------|--------------|--------|-------|---------|-------|--------|--------|----------|-------------|------|------|---------|-------------|------|-------|-------------|\\n| | | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE | O.MAE | S.MAE | O.MSE | S.MSE| O.MAE| S.MAE | O.MSE | S.MSE| O.MAE | S.MAE |\\n| **ETTm1** | 96 | 0.383 | 0.988 | 0.400 | 0.697 | 0.517 | 0.922 | 0.508 | 0.688 | 0.643 | 0.884| 0.575| 0.643 | 0.345 | 0.892| 0.378 | 0.610 |\\n| | 192 | 0.413 | 0.986 | 0.415 | 0.697 | 0.575 | 0.931 | 0.546 | 0.699 | 0.805 | 1.01 | 0.664| 0.730 | 0.383 | 0.903| 0.395 | 0.617 |\\n| | 336 | 0.441 | 0.987 | 0.435 | 0.698 | 0.730 | 0.957 | 0.634 | 0.703 | 0.882 | 1.12 | 0.737| 0.817 | 0.423 | 0.923| 0.420 | 0.630 |\\n| | 720 | 0.497 | 0.992 | 0.469 | 0.704 | 0.873 | 0.973 | 0.704 | 0.723 | 0.928 | 1.12 | 0.752| 0.800 | 0.489 | 0.932| 0.456 | 0.641 |\\n| | **Avg. 
Drop** | - | 127.97%| - | 62.55% | - | 40.37% | - | 17.60% | - | 22.40%|- | 6.55% | - | 122.56%|- | 0.515% |\\n| **Exchange** | 96 | 0.0832 | 0.210 | 0.201 | 0.332 | 1.260 | 1.401 | 0.915 | 0.943 | 0.730 | 0.738| 0.782| 0.722 | 0.0869 | 0.242| 0.207 | 0.358 |\\n| | 192 | 0.179 | 0.325 | 0.299 | 0.414 | 1.398 | 1.626 | 1.040 | 1.060 | 1.304 | 1.284| 0.913| 0.949 | 0.179 | 0.374| 0.301 | 0.450 |\\n| | 336 | 0.338 | 0.521 | 0.418 | 0.534 | 1.835 | 1.921 | 1.111 | 1.141 | 1.860 | 1.862| 1.090| 1.085 | 0.331 | 0.535| 0.417 | 0.557 |\\n| | 720 | 0.903 | 1.167 | 0.714 | 0.822 | 3.940 | 4.023 | 1.687 | 1.697 | 3.860 | 3.865| 1.684| 1.685 | 0.856 | 1.202| 0.698 | 0.841 |\\n| | **Avg. Drop** | - | 47.89% | - | 28.80% | - | 6.38% | - | 1.85% | - | -0.06%|- |-0.63% | - | 63.33%|- | 35.89% |\\n\\n*Note: O.MSE and O.MAE are evaluated in the original test set. S.MSE and S.MAE are evaluated in the shuffling test set.*\\n\\n>[W3-2-4] The authors should provide a detailed explanation of the notations and abbreviations used in the Tables. \\n\\n**[RW3-2-4]** Thank you for the valuable suggestion. We revise the tables to provide detailed explanations of the symbols and abbreviations used.\"}", "{\"title\": \"Thank you for your response [Part 1]\", \"comment\": \"We appreciate the reviewer\\u2019s feedback. Before addressing the reviewer\\u2019s comments point by point, we would like to emphasize the main claim of this work: the advantage of Mamba lies in its ability to **simultaneously capture order dependency and semantic dependency.** The coexistence of these two capabilities enables Mamba's superiority in time series tasks. Below, we provide detailed responses to the reviewer\\u2019s comments:\\n\\n>[W1] The primary demonstrated strength of Mamba lies in its efficiency when processing long contexts, rather than in improved expressiveness or learning capabilities\\n\\n**[RW1]** The reviewer has provided an interesting finding regarding Mamba in the NLP domain. 
Similarly, we have demonstrated Mamba's advantage in handling long sequences through experiments with increasing lookback windows in the $\\\\underline{\\\\text{Appendix I of original paper}}$. We reiterate that there is currently a lack of research on Mamba's unique advantages in the time series domain, rather than merely transferring the already validated advantages of Mamba in NLP. Our study focuses on Mamba's distinctive strengths in time series, and our experimental results strongly demonstrate that Mamba can effectively capture both order dependency and semantic dependency. This finding highlights the need for the time series field to reassess the potential and advantages of Mamba.\\n\\n>[W2-1] I do not see a clear necessity for preserving order dependency to achieve accurate forecasts. For instance, there have been significant advancements in improving time-series Transformers since then, such as reversible instance normalization (https://openreview.net/forum?id=cGDAkQo1C0p) and PatchTST (https://arxiv.org/abs/2211.14730), which substantially enhance vanilla Transformer performance in time-series forecasting without relying on order sensitivity. \\n\\n**[RW2-1]** We reiterate that **cross-time dependency encompasses both order and semantic dependencies, and performance analysis should not isolate one from the other.** The reviewer mentioned the success of PatchTST, which highlights the importance of semantic dependency. Meanwhile, **the superior performance of many simpler MLP-based models like TimeMixer [1] underscores the significance of order dependency.** Mamba's ability to simultaneously capture both order and semantic dependencies showcases its potential as a backbone for time series tasks. This also explains why Patch+Mamba outperforms Patch+Transformer. 
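The order-sensitivity probe used throughout this thread (O.MSE on the original test set versus S.MSE on a shuffled one) can be illustrated with a toy sketch. This is purely illustrative and not the paper's code; the data and the two stand-in forecasters are hypothetical: a window-mean predictor is permutation-invariant, so shuffling the lookback window leaves it unchanged, while a last-value predictor is order-sensitive and degrades.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: a slow upward trend plus noise; forecast the next point.
t = np.arange(2000, dtype=float)
series = 0.01 * t + rng.normal(scale=0.1, size=t.size)

L = 24  # lookback window length
windows = np.stack([series[i:i + L] for i in range(series.size - L)])
targets = series[L:]

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# Reverse each lookback window: an extreme order perturbation.
shuffled = windows[:, ::-1]

o_mean, s_mean = mse(windows.mean(axis=1), targets), mse(shuffled.mean(axis=1), targets)
o_last, s_last = mse(windows[:, -1], targets), mse(shuffled[:, -1], targets)

print(f"window-mean  O.MSE={o_mean:.4f}  S.MSE={s_mean:.4f}")  # unchanged (permutation-invariant)
print(f"last-value   O.MSE={o_last:.4f}  S.MSE={s_last:.4f}")  # degrades (order-sensitive)
```

Under this protocol, a large O-to-S gap signals that a forecaster actually exploits temporal order, which is the quantity the Avg. Drop rows in the tables above summarize.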
\\n\\n>[W2-2] A simple recurrent neural network (e.g., GRU, LSTM) may show higher sensitivity to order perturbations, but this does not imply superior learning or generation capabilities compared to Transformers.\\n\\n**[RW2-2]** The weakness of RNNs in capturing semantic dependency accounts for their underperformance compared to Transformers. In summary, **the success of MLP-based models demonstrates the importance of order dependency, while the strong performance of Transformer-based models highlights the significance of semantic dependency.** The discovery that Mamba can capture both types of dependencies underscores its potential as a backbone for time series tasks.\"}" ] }
9VMW4iXfKt
R-Sparse: Rank-Aware Activation Sparsity for Efficient LLM Inference
[ "Zhenyu Zhang", "Zechun Liu", "Yuandong Tian", "Harshit Khaitan", "Zhangyang Wang", "Steven Li" ]
Large Language Models (LLMs), while demonstrating remarkable capabilities across various applications, present significant challenges during inference due to their substantial model size, especially when deployed on edge devices. Activation sparsity offers a promising solution to reduce computation and memory movement, enabling more efficient inference, particularly for small-batch on-device applications. However, current approaches face limitations with non-ReLU activation functions, which are foundational to most advanced LLMs, or require heavy continual training. Additionally, the difficulty in predicting active channels and the limited achievable sparsity ratios constrain the effectiveness of activation sparsity-based methods. In this paper, we introduce R-Sparse, a training-free activation sparsity approach capable of achieving high sparsity levels in advanced LLMs. We conducted two preliminary investigations into how different components contribute to the output within a single linear layer and made two key observations: (i) the non-sparse components of the input can be regarded as a few bias terms, and (ii) the full computation can be effectively approximated by an appropriate combination of input channels and weight singular values. Building on this, we replace the linear layers in LLMs with a rank-aware sparse inference method that leverages the sparsity of input channels and singular value components, eliminating the need for the active channel prediction used by output-sparsity-based approaches. Experiments on Llama-2/3 and Mistral models across ten diverse tasks demonstrate that R-Sparse achieves comparable performance at 50\% model-level sparsity, resulting in significant 43\% end-to-end efficiency improvements with customized kernels.
[ "Large Language Model; Efficient Inference; Activation Sparsity" ]
Accept (Poster)
https://openreview.net/pdf?id=9VMW4iXfKt
https://openreview.net/forum?id=9VMW4iXfKt
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xwYOPJAyM0", "x1SNMV6oAm", "wI8W3DaRaH", "nz98To5maM", "nQqZPeHMB4", "id0zt2QSeO", "iYxXs2Qj2q", "iBHoaFFoo1", "gxbgq5S2Rf", "gmn2XGZaKc", "gBVyIquGo7", "fz96M2LtoQ", "dimJkVmAaN", "dhmvKS8PFJ", "b9nwTqdiGP", "b12KVuiUJL", "RrZiKgHY21", "QSvIejOapq", "P0dWaoDEmq", "Oltx3ZOJmc", "KeYvRnbvDi", "GkaIofw0LF", "FPepYJqiPT", "E8zk1iGQzj", "8cPblcMtGM", "58SZNEJC1K", "0gOnAqXXJg", "0QlSD2ANfh" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732294057510, 1730584611439, 1732814086184, 1733101912459, 1732293590120, 1732536044386, 1732611188290, 1732292742051, 1737524159860, 1732552239628, 1733159135770, 1732551897310, 1730722066128, 1730722231098, 1732552331614, 1732906307044, 1732551551863, 1733270371434, 1734860322336, 1732293021413, 1732525819044, 1732673579921, 1732680830563, 1732293869664, 1730700684023, 1733095291585, 1732639517319, 1732639758140 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_sWLw" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_T8xk" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_zFFb" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_zFFb" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_T8xk" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Area_Chair_qRzt" ], [ "ICLR.cc/2025/Conference/Submission12008/Area_Chair_qRzt" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_sWLw" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_T8xk" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_LhWB" ], [ "ICLR.cc/2025/Conference/Submission12008/Reviewer_LhWB" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ], [ "ICLR.cc/2025/Conference/Submission12008/Authors" ] ], "structured_content_str": [ "{\"title\": \"Responses to Reviewer sWLw\", \"comment\": \"Many thanks to Reviewer sWLw for the constructive suggestions, which have helped us further improve the quality of our work. Below, we provide detailed responses to address the concerns.\\n\\n**[Q1: Connection between the motivation and the method]** Thanks for the question. We\\u2019d like to clarify that the motivation for Case I (Non-sparse components as biases) primarily serves as an initial investigation for Case II (Rank-aware activation sparsity). The observation of rank-aware activation sparsity then directly supports the R-Sparse method.\\n\\nAs discussed in Line 200, obtaining the non-sparse bias terms is computationally expensive, while each individual bias term only contributes a rank-1 component. This motivated us to investigate the overall rank across multiple bias terms from different inputs. 
We discovered that the space spanned by these biases across thousands of tokens exhibits a low-rank structure, highlighting the relationship between low-rank decomposition and activation sparsity.\\n\\nFurther, we examined the importance patterns of each input channel and singular value, as illustrated in Figure 3. These rank-aware sparsity patterns validate and underpin the core methodology of our framework, demonstrating its efficiency and effectiveness.\\n\\n\\n**[Q2: Clarifying contribution]** We respectfully disagree that our contribution is incremental. First, as discussed in Lines 51\\u201364, one significant challenge in prior activation sparsity methods is the difficulty of predicting active channels. For instance, CATS utilizes the output of mlp.gate to identify active channels for sparsifying mlp.up and mlp.down, while GRIFFIN determines active channels based on output features from the pre-filling stage. In contrast, our method observes and exploits the inherent low-rank and sparse structures within the input features. This enables us to sparsify all linear layers without relying on prediction mechanisms, overcoming the limitations of prior methods. Specifically, CATS is restricted to sparsifying mlp.gate and mlp.down, and GRIFFIN applies sparsity only to MLP blocks. Our approach provides a broader and more effective solution to activation sparsity, making a substantial advancement over existing methods.\\n\\nAdditionally, the inherent low-rank and sparse structures within the input features contribute to significantly enhancing the achievable sparsity levels for non-ReLU-based LLMs. In comparison, methods like CATS and GRIFFIN are limited to achieving model-level sparsity ratios of only 22% to 33%, whereas our R-Sparse method achieves approximately 50% sparsity. 
Thus, we believe our R-Sparse makes a significant contribution to alleviate the main challenges of previous activation sparsity approaches, that is: (i) feasibility for non-ReLU based LLMs; (ii) difficulty in predicting active channels; and (iii) limited sparsity levels.\\n\\n\\n**[Q3: Experiments with GELU activation]** Great suggestion! We conducted additional experiments to evaluate our method on the Gemma-7B model. The results, presented in Table R1, show that our method significantly outperforms both CATS and GRIFFIN, achieving an average accuracy improvement of 21.76% and 4.62% at 40% and 50% sparsity, respectively. \\n\\nTable R1 Comparison between R-Sparse and other baselines on common-sense reasoning tasks.\\n\\n| Gemma-7B | WG | PIQA | SCIQ | OBQA | HS | BOOLQ | ARC-E | ARC-C | AVG |\\n|----------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| CATS 40% | 58.01 | 66.54 | 51.9 | 19.8 | 37.64 | 62.11 | 42.97 | 27.9 | 45.86 |\\n| R-Sparse 40% | 72.22 | 78.89 | 94.3 | 33.2 | 59.68 | 74.04 | 80.39 | 48.21 | 67.62 |\\n| GRIFFIN 50% | 67.56 | 74.81 | 77.1 | 25.2 | 54.14 | 62.29 | 67.97 | 40.44 | 58.69 |\\n| R-Sparse 50% | 69.46 | 78.35 | 87.6 | 29.6 | 57.3 | 63.03 | 75.81 | 45.31 | 63.31 |\"}", "{\"summary\": \"This paper proposes a rank-aware activation sparsity, including applying input sparsification and weight decomposition. Experiments show that the proposed R-Sparse improves end-to-end efficiency while maintaining comparable sparsity.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed methods are easy to understand.\\n2. The experimental results are good.\", \"weaknesses\": \"1. The connections between motivations and the proposed methods are weak. For example, the first motivation claims that the outputs for the non-sparse inputs can be regarded as biases. However, it is not sure which part of the proposed methods is motivated by this. 
Please explicitly explain how the observation about non-sparse inputs being treated as biases directly informs specific components of their R-Sparse method.\\n\\n2. The contributions are incremental, which only directly apply existing techniques, such as CATS for sparsity, SVD for weight decomposition, and genetic algorithm for hyperparameters searching. Please more clearly articulate the novelty of your approach. Is there any specific design in SVD?\\n\\n3. The paper claims that existing non-ReLU activations such as SiLU and GELU introduce less sparsity. However, the experiments lack generality as Llama 2/3 and Mistral all adopt SiLU as activation functions. For example, adding results of models using GELU activation functions such as Gemma would be helpful.\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer LhWB,\\n\\nWe sincerely appreciate the time you have taken to review our work. We have carefully addressed all your comments, which primarily focused on writing improvements. Could you kindly review the updates and let us know if you have any further questions? Thank you!\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Thanks for the responses\", \"comment\": \"Thank you for the detailed feedback. The sparsity definition we use aligns with the spirit of previous network pruning works [1-3] where the sparse sub-network refers to the remaining sub-network, and the corresponding parameter values are unchanged after the pruning operation. The sparsity ratio is defined as the proportion of parameters that their values are changed after pruning (set to zero). 
Intuitively, the sparse components represent the critical parts of the network, while the non-sparse components correspond to the non-critical parts.\\n\\nIn our definition, the non-sparse components are those with $H_k < T_0$, as these typically have a lesser impact on the original functionality, making them the non-critical part. Conversely, the sparsity ratio measures the proportion of elements that are altered after applying the multi-phase ReLU (i.e., $x<T_0$). Thus, the two concepts are not contradictory.\\n\\nRegarding the question, “x actually takes on the nearest discrete value rather than 0 in order to maintain the original functionality. Is it correct to define this as ‘sparsity’?” — Yes, this definition differs from the strict interpretation of sparsity, where non-critical values are explicitly set to zero. However, it aligns with a broader concept of sparsity. Here, sparsity refers to the effective reduction of non-critical elements, even if these values are adjusted to discrete values rather than being completely zeroed out, to preserve functionality.\\n\\nFor further clarification, we provide a conceptual example in Table R1. The table shows an input feature of size 10, with the values of each element reported before and after applying the multi-phase activation function.\\n\\nTable R1: examples of values before and after the multi-phase activation function, where $T_0=0$, $T_1=-1$, and $T_2=-2$, and the minimum value of the input features is -2.
$s$ represents elements that belong to the sparse components, while (non-$s$) denotes the non-sparse components.\\n\\n| Category | s | s | non-s | non-s | s | non-s | non-s | non-s | s | s |\\n| ------ | ---- | ---- | ----- | ----- | ---- | ----- | ---- | ----- | ---- | ---- |\\n| Values Before | 1.73 | 3.55 | -0.43 | -0.27 | 7.59 | -1.05 | -2 | -1.77 | 9.43 | 6.82 |\\n| Values After | 1.73 | 3.55 | -0.5 | -0.5 | 7.59 | -1.5 | -1.5 | -1.5 | 9.43 | 6.82 |\\n\\n[1] The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.\\n\\n[2] Rethinking the Value of Network Pruning.\\n\\n[3] What is the State of Neural Network Pruning?\"}", "{\"title\": \"Response to Reviewer zFFb\", \"comment\": \"We sincerely thank Reviewer zFFb for the positive feedback and constructive suggestions. To address Reviewer zFFb\\u2019s concerns, we provide point-by-point responses below.\\n\\n**[Q1: Importance observation across different datasets and varying number of samples.]**: Thanks for the question. We conducted additional experiments to assess the contributions of each input channel and singular value components across various datasets and different numbers of samples. The results are detailed in Section B.2 of the updated draft. Our findings consistently show that the primary contributions of importance are concentrated in the lower-right corner of the matrix. This supports the use of R-Sparse for accelerating inference without losing performance.\\n\\n**Details:** To validate these observations across datasets, we examined diverse domains from the RedPajama dataset, including ArXiv, GitHub, StackExchange, and the original C4 dataset, ensuring a wide range of data diversity. Additionally, we evaluated varying sample sizes, ranging from 1 to 1024, and observed consistent patterns across all configurations.\\n\\n\\n**[Q2: Explanation of varying importance observed across linear layers.]**: Thanks for the suggestion. 
The varying importance mainly comes from the intrinsic low-rank properties of different linear layers. When a linear layer is more low-rank, the importance tends to concentrate only on the top singular value components, resulting in sparser patterns in Figure 3 (i.e., the self_attn.k_proj), while for relatively higher ranks, the importance patterns tend to distribute more uniformly across the vertical axis (e.g., the mlp.up_proj). The layerwise low-rank properties align with previous investigations, such as Figure 4 in [1], and relate to the learning dynamics of transformers (e.g., Section 4 in [2]).\\n\\n[1] From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients.\\n\\n[2] JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention.\\n\\n**[Q3: Relationship between sparsity ratio and final inference acceleration.]**: Thanks for the question. Ideally, the acceleration equals $\frac{1}{1-s}$, where $s$ is the sparsity ratio. That is, 50% sparsity theoretically allows for a 2x speedup. However, in practice, due to the detailed memory access pattern, such as memory blocks not aligning perfectly with rows of the weight matrix, there is overhead associated with processing zero values. Additionally, activation sparsity doesn’t change the complexity of the scaled dot product in attention or the KV cache overhead. Consequently, our implementation achieves a 1.4x speedup for 50% sparsity. For a more detailed analysis, we evaluate the latency of a single MLP block across different sparsity ratios. As shown in Table R1, the acceleration gradually improves as the sparsity ratio increases.
We evaluate a single MLP block with an input dimension of 4096 and a hidden dimension of 11008; the latency is averaged over 80 runs after warm-up.\", \"table_r1\": \"Comparison of the latency across different sparsity ratios.\\n\\n| Sparsity Ratio | 0 | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |\\n| ------------------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n| Latency (ms) | 0.8190 | 0.7386 | 0.5792 | 0.4172 | 0.2567 | 0.1233 |\\n| Reduction (%) | 0 | 0.0982 | 0.2928 | 0.4906 | 0.6866 | 0.8495 |\\n\\n**[Q4: Explanation of generation speed in Figure 6.]**: Thanks for the question. For dense inference, the generation latency is primarily composed of three components: (i) the prefilling latency, denoted as $t_{prefill}$; (ii) (decoding phase) the latency for computation and memory access of the model parameters, $t_{weight}$; and (iii) (decoding phase) the latency for memory access of the KV cache, $t_{kv}(n_{prompt} + n_{generation})$, which grows linearly with the sequence length ($n_{prompt} + n_{generation}$). Therefore, the reported generation speed can be expressed as: $\\frac{t_{prefill} + n_{generation}t_{weight}+n_{generation}t_{kv}(n_{prompt} + n_{generation})}{n_{generation}}$. In the early stages of decoding, the memory overhead of the KV cache is relatively small compared to that of the model parameters. As a result, the $\\frac{t_{prefill}}{n_{generation}}$ term has a significant impact on the end-to-end latency, but its effect diminishes as $n_{generation}$ increases (for generation lengths from 128 to 512), while for longer sequences, the memory access cost of the KV cache becomes more significant, as its size grows with the sequence length, leading to a reduction in the end-to-end generation speed.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"I would like to thank the authors for the replies, which solve most of my concerns.
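To make the speedup and latency arguments in [Q3] and [Q4] above concrete, here is a small illustrative model. All constants are hypothetical placeholders, not measurements from the paper; the sketch only reproduces the qualitative shape: the prefill term amortizes away as the generation length grows, after which KV-cache traffic dominates, and the ideal weight-side speedup from a sparsity ratio $s$ is $1/(1-s)$.

```python
# Hypothetical constants (in ms); chosen for illustration, not measured values.
def tokens_per_second(n_prompt, n_generation,
                      t_prefill=50.0, t_weight=0.9, t_kv_per_token=0.0005):
    """Per-token latency model:
    (t_prefill + n_gen * t_weight + n_gen * t_kv(seq_len)) / n_gen,
    where the KV-cache read cost grows linearly with the sequence length."""
    # Average sequence length seen during decoding (the cache grows each step).
    avg_seq = n_prompt + n_generation / 2
    per_token_ms = t_prefill / n_generation + t_weight + t_kv_per_token * avg_seq
    return 1000.0 / per_token_ms

def ideal_speedup(sparsity):
    """Ideal speedup from skipping a fraction `sparsity` of the weight work."""
    return 1.0 / (1.0 - sparsity)

for n_gen in (128, 256, 512, 2048, 8192):
    print(f"n_gen={n_gen:5d}  {tokens_per_second(512, n_gen):7.1f} tok/s")
print(f"ideal speedup at 50% sparsity: {ideal_speedup(0.5):.1f}x")
```

With these placeholder constants, throughput rises between 128 and 512 generated tokens as prefill amortizes, then falls for much longer generations as the KV cache dominates, matching the trend described for Figure 6.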
One remaining concern is that by referring to the sensitivity analysis of hyperparameters, I mean the hyperparameters used in the evolutionary search algorithm (i.e., use a population size of 32, a generation count of 5, and 16 samples for perplexity evaluation). Is the choice heuristic or optimized within the search space?\"}", "{\"comment\": \"Thank you for the detailed responses. My concerns have been addressed and I maintain my positive score.\"}", "{\"title\": \"Responses to Reviewer T8xk (Q1-Q4)\", \"comment\": \"We thank Reviewer T8xk for the valuable questions and suggestions. We elaborate on the details of our method and provide point-wise responses in the following:\\n\\n**[Q1: Explanation of Table 1]** Thanks for the question. R-Sparse 40% is not compared with CATS 22% and GRIFFIN 33%. Instead, we compare R-Sparse with other baselines under the same sparsity ratios (as discussed in Line 392), that is, R-Sparse 40% vs. CATS 40% and R-Sparse 50% vs. GRIFFIN 50\\\\%, where R-Sparse consistently outperforms both CATS and GRIFFIN. Additionally, we choose sparsity ratios of 22% for CATS and 33% for GRIFFIN because these are the sparsity ratios reported by the original publications, where 50% mlp-level sparsity is applied and CATS only sparsifies mlp.gate and mlp.down while GRIFFIN sparsifies the whole mlp modules. We\\u2019ve included these sparsity selection details in Line 370. We\\u2019ve also adjusted the rows of Table 1 in the updated draft to avoid confusion: R-Sparse 40% and CATS 40% are now placed in adjacent lines, as are R-Sparse 50% and GRIFFIN 50%.\\n\\nAdditionally, we report the performance of various methods across different sparsity ratios in Figure 5, which provides a more thorough comparison and demonstrates that our method consistently outperforms other baselines by a clear margin.\\n\\nIn certain cases, higher sparsity leads to improved performance. 
That\\u2019s because appropriate sparsity may help mitigate overfitting and enhance model generalization. This phenomenon has been demonstrated in many prior studies, such as Figure 1 in SparseGPT [1], Figure 2 in Essential Sparsity [2], and Figure 4 in GRIFFIN [3].\\n\\n[1] SparseGPT: Massive Language Models Can be Accurately Pruned in One-Shot.\\n\\n[2] The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter.\\n\\n[3] Prompt-prompted Adaptive Structured Pruning for Efficient LLM Generation.\\n\\n\\n**[Q2: Sensitivity analysis of hyperparameters]**: Thank you for pointing this out. We conducted a sensitivity analysis of the hyperparameters in Section 4.4 (A2 and A3), focusing specifically on the sparse-rank ratio $\\\\rho$ for each linear layer. As discussed in Section 4.4 (A2), we compared R-Sparse with its sparse ($\\\\rho=1$) and low-rank ($\\\\rho=0$) counterparts, observing significant improvements, as presented in Table 3. In Section 4.4 (A3), we further compared the layer-wise $\\\\rho$ values obtained through Algorithm 1 and the corresponding uniform $\\\\rho$ assignment. As shown in Table 4, the adaptive $\\\\rho$ consistently outperformed the uniform assignment across various sparsity ratios and tasks. These ablation studies provide a comprehensive evaluation of the effectiveness of R-Sparse.\\n\\n\\n**[Q3: Comparison with Deja Vu]**: Thanks for the question. Deja Vu requires training an additional predictor to identify active channels, whereas our comparison focuses exclusively on training-free approaches. To further address Reviewer T8xk\\u2019s concern, we conducted a comparison with Deja Vu using an oracle predictor\\u2014i.e., a predictor that perfectly identifies active channels with 100% accuracy. The results, presented in Table R2, demonstrate that our method outperforms Deja Vu (Oracle), achieving an average accuracy improvement of 0.51. 
Note that in realistic settings, the inaccuracy of Deja Vu\\u2019s predictor would further degrade its performance.\", \"table_r2\": \"Comparison of R-Sparse with Deja Vu (Oracle) under 50% sparsity on Llama-2-7B.\\n\\n| Method | WG | PIQA | SciQ | OBQA | HS | BoolQ | Arc-E | Arc-C | Average |\\n| ---------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ------- |\\n| Llama-2-7B | 69.14 | 78.07 | 93.80 | 31.40 | 57.13 | 77.71 | 76.35 | 43.43 | 65.88 |\\n| R-Sparse | 67.40 | 77.31 | 93.90 | 31.40 | 54.26 | 72.84 | 74.58 | 40.78 | 64.06 |\\n| Deja Vu (Oracle) | 67.09 | 76.66 | 92.80 | 29.80 | 55.14 | 72.87 | 73.40 | 40.61 | 63.55 |\\n\\n\\n**[Q4: Complexity analysis and running time comparison of R-Sparse]**: Good suggestions. R-Sparse is a training-free approach, with its primary computational overhead stemming from the evolutionary search algorithm. The time cost of this search is linearly proportional to the population size ($P$), the total number of generations ($G$), and the number of samples used for perplexity evaluation ($N$), resulting in a complexity of $O(PGN)$. As discussed in Line 322, our experiments use a population size of 32, a generation count of 5, and 16 samples for perplexity evaluation. In a specific case, the search process requires approximately one hour on a single A6000 GPU for the Llama-2-7B model, which is negligible compared to the training-level computational cost.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"We are keen to discuss further with you\", \"comment\": \"Dear Reviewer LhWB,\\n\\nThank you for taking the time to review our work. Your suggestions have been incredibly helpful in improving the writing quality and ensuring our work is more easily understood. We have carefully addressed each of the writing issues to avoid misunderstandings. 
The revisions have been updated in the PDF, with the modified content marked in red.\\n\\nCould you please review the responses and let us know if there are any additional questions or concerns? Thanks again for your valuable feedback.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear Reviewer LhWB,\\n\\nThank you again for your careful review. With the discussion period ending in less than 24 hours, could you kindly review our responses and let us know if you have any further questions? Thank you!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"We are keen to discuss further with you\", \"comment\": \"Dear Reviewer zFFb,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our work. Your insightful suggestions have been invaluable in improving the quality of our work. We have carefully addressed each of your concerns and made the necessary revisions in the updated draft.\\n\\nAs the discussion period deadline approaches, we would be grateful if you could review our responses and let us know if there are any additional questions. Thank!\\n\\nBest,\\n\\nAuthors\"}", "{\"summary\": \"This paper uses the inherent sparsity and low-rank properties of input activations in LLMs to accelerate the inference of LLMs. It sparsifies the input activations to remove unnecessary activations, and interprets unnecessary activations as a bias term, using SVD to compensate for this bias. The proposed method achieves higher sparsity than the baseline while maintaining overall performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper proposes a training-free activation sparsification method to speed up LLM inference. 
The proposed method can maintain performance at the same level as full parameters at sparsity of about 50%.\", \"This paper proposes to design different sparsity ratios for different layers to improve overall performance.\", \"This paper is very clearly written and the proposed method is easy to follow.\"], \"weaknesses\": [\"The analysis of \\\"the contribution of each input channel and singular value component\\\" in this paper is mainly focused on the C4 dataset (Figures 1 and 3). What are the similarities and differences between the analysis on other datasets and the C4 dataset? Especially on datasets that are very different from the C4 dataset. In addition, these analyses are mainly based on 16 randomly sampled training samples. When the number of samples increases or decreases, what changes will occur in the analysis results?\", \"As shown in Figure 3, there is a clear difference in the importance of different linear layers (such as self_attn.k_proj vs. self_attn.up_proj), what this mainly stems from, the authors can give more comments on this.\", \"What is the relationship between the sparsity ratio in the proposed R-Sparse and the final inference acceleration? For example, what is the corresponding acceleration for a certain sparsity ratio?\", \"In Figure 6, why under Dense, when the Generation Length becomes longer (1024->2048), the generation speed slows down, while when it is 128->256->512, the generation speed is accelerated.\"], \"questions\": \"See the Weaknesses section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents R-Sparse, a training-free activation sparsity approach for large language models (LLMs). Current activation sparsity methods face limitations with non-ReLU activation functions and have difficulties in predicting active channels and achieving high sparsity ratios. 
R-Sparse overcomes these challenges by leveraging the sparsity of input channels and singular value components. The authors conduct investigations and find that non-sparse components can be regarded as bias terms and full computation can be approximated by a combination of input channels and weight singular values. R-Sparse is applied to both attention and MLP modules of LLMs and an evolutionary search algorithm is used to find optimal sparse component ratios. Experiments on Llama-2/3 and Mistral models across ten tasks show that R-Sparse achieves 50% model-level sparsity with comparable performance and up to 43% end-to-end speed improvement with a customized kernel.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The training-free method does not require extensive pre-training, making it more efficient and easier to implement compared to methods that need continual training.\", \"R-Sparse achieves high sparsity levels (50% model-level sparsity) without sacrificing performance, leading to significant improvements in efficiency.\", \"R-Sparse is compatible with weight quantization for further efficiency gains and can be applied to different LLM families and a variety of tasks.\"], \"weaknesses\": [\"Table 1 is difficult to interpret. From the description of the authors, R-Sparse40% is compared with CATS22% and GRIFFIN33%, if my understanding is correct. Therefore, R-Sparse does not consistently outperform CATS across all tasks (e.g., PIQA 78.24 vs 79.00). The authors are suggested to refine the claim to avoid the misleading. Meanwhile, for certain cases, the performance increases with an even higher sparsity ratio (e.g., 79.49 vs 79.92 for R-Sparse40% and R-Sparse50%) on PIQA. 
Could the authors provide some insights into this phenomenon?\", \"The sensitivity analysis of hyperparameters should be added for a more thorough investigation of the effectiveness of R-Sparse.\", \"As [1] is discussed by the authors in the related works, why do the authors choose not to compare with [1] in the experiments?\", \"The authors are suggested to include the complexity analysis and running time comparison of R-Sparse, especially regarding the evolutionary search algorithm.\", \"The writing can be further improved. For example, the optimal sparse-rank $\\\\alpha$ is not formally defined. From Algorithm 1, $\\\\alpha$ seems to be fixed, how is it optimized?\", \"[1] Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time, ICML 2023\"], \"questions\": \"Please kindly refer to the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"We sincerely appreciate all your constructive feedback and positive evaluations. Thanks for your time and support!\"}", "{\"title\": \"We are keen to discuss further with you\", \"comment\": \"Dear Reviewer LhWB,\\n\\nThank you again for your time and efforts in reviewing our work. As the discussion period deadline is approaching, could you kindly review our responses and let us know if you have any further questions?\\n\\nThank you!\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Thank you for the follow-up question\", \"comment\": \"Thanks for the follow-up question. These hyperparameters are heuristically selected based on some preliminary experiments. Specifically: (i) The population size primarily controls the diversity of individual solutions. A larger population reduces the risk of premature convergence to local optima but slows down the convergence speed. 
In our preliminary experiments, we tested population sizes of $\\\\{8, 16, 32, 128\\\\}$ and observed final perplexity values of $\\\\{9.85, 7.15, 6.32, 6.33\\\\}$, respectively. Based on these results, we selected a population size of 32, as increasing it further provided no significant improvement. (ii) Then, we evaluated a generation count of 10 and found that convergence is nearly stable by generation 5, as shown in Table R2. Thus, we chose a generation count of 5. (iii) Note that the time cost for evolutionary search grows linearly with the number of samples used. To balance the trade-off between search quality and practical time constraints, we selected 16 samples to evaluate each individual solution. This configuration (population size 32, generation count 5, and 16 samples) takes approximately one hour of overhead on a single A6000 GPU, which is far less than training-level time cost.\", \"table_r2\": \"Best perplexity achieved at different generation steps.\\n\\n| Generation Step | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\\n| ---------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |\\n| Llama-2-7B | 13.58 | 12.42 | 11.44 | 7.85 | 6.32 | 6.32 | 6.32 | 6.20 | 6.20 | 6.19 |\"}", "{\"title\": \"End of reviewer-author discussion phase\", \"comment\": \"Dear reviewers,\\n\\nAs we near the conclusion of the reviewer-author discussion phase, I wanted to kindly follow up to see if you\\u2019ve had a chance to review the author responses on your comments. Could you confirm that you\\u2019ve read it and, if needed, update your review and scores accordingly?\\n\\nThank you for your time and effort!\\n\\nYour AC\"}", "{\"metareview\": \"a) Scientific Claims and Findings:\\nThe paper introduces R-Sparse, a training-free activation sparsity method for efficient LLM inference. Key findings show that non-sparse input components can be treated as bias terms, and computation can be approximated using input channels and weight singular values. 
The method achieves 50% model-level sparsity while maintaining performance, with claimed 43% efficiency improvements using customized kernels.\\n\\n(b) Strengths:\\nThe paper presents a novel training-free approach that achieves high sparsity levels while maintaining performance. It's broadly applicable to both attention and MLP modules, works across different LLM architectures, and is compatible with weight quantization. The experimental validation is comprehensive across diverse tasks.\\n\\n(c) Weaknesses:\\nThe paper has some presentation issues around key definitions and concepts. There are questions about hyperparameter sensitivity and incomplete comparisons with some baselines. The relationship between sparsity ratio and actual speedup needs better clarification. I recommend the authors to work on these for the camera ready version.\\n\\n(d) Reasons for Acceptance:\", \"the_paper_warrants_acceptance_based_on_three_key_factors\": \"It presents a novel technical solution to an important problem in LLM inference optimization, achieving significant sparsity while maintaining performance.\\nThe method has broad practical applicability across different architectures and modules.\\nThe experimental validation is thorough, with clear improvements demonstrated across multiple tasks.\\n\\nWhile there are some presentation issues, these can be addressed in the camera-ready version. The core technical contribution, novelty of of viewpoint to sparsity, and practical utility make this paper a valuable addition to the field.\", \"additional_comments_on_reviewer_discussion\": \"The review process featured extensive and constructive discussion between authors and reviewers. Throughout the discussion period, the authors demonstrated strong engagement and responsiveness to reviewer concerns. Reviewer T8xk initially raised questions about result interpretation and methodology, to which the authors provided additional experiments and clarifications that addressed most concerns. 
Reviewer zFFb's questions about dataset analysis and architectural variations were thoroughly addressed with additional experiments, leading to a positive recommendation. While Reviewer LhWB raised important points about clarity and definitions, the authors made earnest efforts to clarify these issues, and the remaining concerns are primarily about presentation rather than technical substance. Reviewer sWLw's initial concerns about novelty were effectively addressed through detailed responses and additional experimental validation.\\nThe authors' thorough responses and willingness to conduct additional experiments demonstrate their commitment to scientific rigor. While some presentation issues remain, these can be addressed in the final version. The positive recommendations from multiple reviewers, combined with novel viewpoint to sparsity and efficiency and broad practical applicability, support accepting this paper for publication at ICLR 2025.\"}", "{\"title\": \"Responses to Reviewer T8xk (Q5)\", \"comment\": \"**[Q5: Writing improvement]**: Good suggestions. The optimal sparse-rank ratio is defined as $\\\\rho^* = \\\\mathrm{argmin}{\\\\rho} \\\\mathcal{L}(f, \\\\rho)$ where the loss function $\\\\mathcal{L}$, represents the average perplexity computed over 16 randomly selected samples from the C4 training set, and $f$ denotes the original LLMs. To solve this optimization problem, we employ an evolutionary search algorithm, as outlined in Algorithm 1 of the PDF. In Algorithm 1, the variable $\\\\alpha$ is a random variable used to control the crossover of individual solutions. It does not require optimization; instead, the sparse rank ratio $\\\\rho$ will be optimized during each generation as shown in Line 10.\"}", "{\"comment\": \"Thanks for your responses, which address my concerns. I will raise my score.\"}", "{\"comment\": \"Thank you for the additional results. 
I have no further questions and will increase my rating accordingly.\"}", "{\"title\": \"Thanks for responses\", \"comment\": \"Thank you for your responses. We\\u2019re glad that our responses addressed your concerns. However, the adjusted rating suggests our work is not yet suitable for acceptance. Could you kindly provide further suggestions or questions for improvement so that we can enhance our work to meet the required standards for acceptance? Thank you!\"}", "{\"title\": \"Responses to Reviewer LhWB\", \"comment\": \"We sincerely thank Reviewer LhWB for the detailed and insightful suggestions. To enhance the writing quality and address the concerns raised, we provide point-by-point responses below:\\n\\n**[Q1: Explanation about Figure 1.]**: Thanks for the suggestion. We have revised the caption of Figure 1 in the updated draft to improve clarity and ensure it is easier to understand. The new caption is: \\u201cContributions of each input channel and singular value components. The measurement metric is detailed in Section 3.3. Results are obtained from Llama-2-7B with 16 training samples from C4. Both the input channel and SVD components are sorted from small to large for better visualization.\\n\\n\\n**[Q2: Explanation about the \\u201cBias\\u201d term in Section 3.2.]**: The bias term includes all the non-sparse components where $H_k < T_0$. Its definition depends on the value of $T_0$ rather than being strictly tied to $O$. If $T_0$ is greater than $0$, the non-sparse components include values that are both greater than and less than $O$.\\n\\nWe refer to this term as \\u201cbias\\u201d because the output $Y=HW_{down}^T$ can be interpreted as a weighted linear combination of the columns in $W_{down}^T$, where the coefficient of column $k$ equals $H_k$. 
For $H_k \\\\geq T_0$, the calculation cannot be simplified, as each column is weighted by a different coefficient, and these are therefore defined as sparse components.\\n\\nIn contrast, for $H_k < T_0$, the columns satisfying $T_{i+1} \\\\leq H_k < T_i$ share the same coefficient after the multi-phase ReLU activation function. These components can be viewed as a bias term, where the weighting is given by $\\\\frac{T_j + T_{j+1}}{2}$. \\n\\n**[Q3: Appropriateness of paper title.]**: Thanks for pointing this out. We\\u2019d like to clarify that the reason for using rank-aware activation sparsity is because for different linear layers, the activation sparsity is affected by its low-rank properties, thus we need to co-design the sparsity and low-rank decomposition to better preserve original functionality. We achieve this through an evolutionary search approach, which effectively balances these factors for optimal performance. We\\u2019re willing to adjust the title if Reviewer LhWB has further suggestions.\\n\\n**[Q4: Modification of Figure 3.]**: Thanks for the careful reading. The modified Figure 3 is updated in the draft.\\n\\n\\n**[Q5: More reference.]**: Thanks for the suggestion. This work provides an interesting and solid hardware solution for sparsity-aware memory loading, that is orthogonal and provides a useful tool to further implement our methods on edge-devices. We\\u2019ve included the reference in the updated PDF.\\n\\n\\n**[Q6: Definition of multi-phase ReLU.]**: Great Catch! In our investigation, we set $T_{l-1}$ as the minimum value of input, to avoid scenarios when $x < T_{l-1}$. And the sparsity is defined as the ratios of $x < T_0$. In this way, for all $x < T_0$, the output will be the nearest discrete value of $\\\\frac{T_i + T_{i+1}}{2}$, to better preserve the original functionality. 
We have clarified this explanation in the updated draft to avoid confusion.\"}", "{\"summary\": \"The authors propose a novel training-free activation sparsity method called R-sparse. This method is applicable to non-ReLU-based large language models (LLMs) and eliminates the need for prediction by utilizing input activation sparsity. Furthermore, it is a method that can be applied not only to MLP modules but also to attention modules.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The authors successfully identified existing challenges and, in connection to these, suggested a training-free, prediction-free activation sparsity method. Furthermore, their method is applicable to attention modules.\", \"Through various experiments, they demonstrated performance improvements.\"], \"weaknesses\": [\"difficult to read (clarity)\", \"In the introduction, how should Figure 1 be interpreted? It is difficult to understand the meaning of the figure until reviewing the explanation in Section 3.3.\", \"In Section 3.2, what does the term \\\"bias\\\" mean? Is it interpreted as \\\"bias\\\" in the sense that performance is maintained even after replacing non-sparse values smaller than 0 with a constant? Does the term also include non-sparse components greater than 0? In Lines 183-185, why is it defined as \\\"sparse\\\", when $H_k \\\\geq T_0$?\", \"the appropriateness of the title.\", \"The proposed method appears to be \\\"activation sparsity, then low-rank decomposition for non-important activations\\\" rather than using both approaches simultaneously (rank-aware activation sparsity).\", \"==== After Rebuttal ====\", \"I understand. 
However, the terms in Section 3.2 should be clarified more explicitly.\", \"It might be beneficial to add an explanation using examples, when $T_0 = 0$.\", \"positive = sparse components = not pruned values\", \"negative = non-sparse components = to be pruned values\", \"In fact, the most confusing term is \\\"sparse components.\\\" It would be beneficial to clearly indicate, as suggested in the response, that it originates from prior research.\"], \"questions\": [\"(Writing) Please switch the positions between mlp.up_proj and mlp.gate_proj of layer 0 in Figure 3.\", \"Recommended reference\", \"Alizadeh, Keivan, et al. \\\"Llm in a flash: Efficient large language model inference with limited memory.\\\"\", \"Definition of multi-phase ReLU.\", \"Is the multi-phase ReLU expressed on line 170 correctly defined? Where is sparsity defined? Shouldn't there be a definition for when x $< T_{L}$? Moreover, the output should get closer to zero as it becomes more negative, but it is defined in the opposite direction.\", \"For instance, $T_0 = 0, T_1=-1, T_2=-2$, please explain it.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thank you for your kind response. There is one unclear part, and I would like to ask a question about it.\", \"**[Q2 and Q6]**\", \"The authors said that \\\"the sparsity is defined as the ratios of $x < T_{0}$\\\", in Q6.\", \"However, within this range, $x$ actually takes on the nearest discrete value rather than 0 in order to maintain the original functionality. Is it correct to define this as \\\"sparsity\\\"?\", \"As requested in the original question, please provide an example to explain the situation. 
I understand the case where the minimum of $x$'s input is -2.\", \"Futhermore, the authors said that \\\"all the **non-sparse** components where $H_{k} < T_{0}$\\\", in Q2.\", \"Is not this expression contradictory to the definition provided above?\", \"Please clarify it. Thanks.\"]}", "{\"title\": \"We are keen to discuss further with you\", \"comment\": \"We sincerely thank Reviewer T8xk for their efforts throughout the review and discussion period. We have provided a detailed explanation of the hyperparameter choices in evolutionary search, supported by several preliminary ablation studies. We would greatly appreciate it if reviewer 78xk could review our responses and let us know if there are additional questions. Thanks\"}", "{\"title\": \"We are keen to discuss further with you\", \"comment\": \"Dear Reviewer LhWB,\\n\\nThank you again for your careful reading and thoughtful feedback on our work. As your comments primarily focused on improving the writing, we have made every effort to address them thoroughly.\\n\\nAt your convenience, could you please review our responses and let us know if you have any additional questions or concerns? Thank you!\\n\\nSincerely,\\n\\nThe Authors\"}" ] }
9VGTk2NYjF
The Complexity of Two-Team Polymatrix Games with Independent Adversaries
[ "Alexandros Hollender", "Gilbert Maystre", "Sai Ganesh Nagarajan" ]
Adversarial multiplayer games are an important object of study in multiagent learning. In particular, polymatrix zero-sum games are a multiplayer setting where Nash equilibria are known to be efficiently computable. Towards understanding the limits of tractability in polymatrix games, we study the computation of Nash equilibria in such games where each pair of players plays either a zero-sum or a coordination game. We are particularly interested in the setting where players can be grouped into a small number of teams of identical interest. While the three-team version of the problem is known to be PPAD-complete, the complexity for two teams has remained open. Our main contribution is to prove that the two-team version remains hard, namely it is CLS-hard. Furthermore, we show that this lower bound is tight for the setting where one of the teams consists of multiple independent adversaries. On the way to obtaining our main result, we prove hardness of finding any stationary point in the simplest type of non-convex-concave min-max constrained optimization problem, namely for a class of bilinear polynomial objective functions.
[ "algorithmic game theory", "Nash equilibrium", "minmax optimization" ]
Accept (Oral)
https://openreview.net/pdf?id=9VGTk2NYjF
https://openreview.net/forum?id=9VGTk2NYjF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "etIPeZlxbR", "c1AB3zRmUS", "XBJnsy2ciW", "R0kYSuzOlM", "Qvq0HHj90X", "P9JTVKBHnU", "MSfWu4Hlrp", "AXSf2eOPmA", "5BXxR7BsCU", "56JQWo4vj9", "0moYbsvlCw", "0K9YGI2FRh" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1730697020145, 1732786652064, 1730348429485, 1732311238339, 1732311667628, 1730843928062, 1732311545968, 1734686618648, 1732311574500, 1732632494076, 1732570905315, 1737523830134 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7295/Reviewer_xMtH" ], [ "ICLR.cc/2025/Conference/Submission7295/Authors" ], [ "ICLR.cc/2025/Conference/Submission7295/Reviewer_XrcJ" ], [ "ICLR.cc/2025/Conference/Submission7295/Authors" ], [ "ICLR.cc/2025/Conference/Submission7295/Authors" ], [ "ICLR.cc/2025/Conference/Submission7295/Reviewer_m3Yt" ], [ "ICLR.cc/2025/Conference/Submission7295/Authors" ], [ "ICLR.cc/2025/Conference/Submission7295/Area_Chair_jr6Y" ], [ "ICLR.cc/2025/Conference/Submission7295/Authors" ], [ "ICLR.cc/2025/Conference/Submission7295/Reviewer_XrcJ" ], [ "ICLR.cc/2025/Conference/Submission7295/Reviewer_xMtH" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies the problem of finding Nash Equilibria in two-team polymatrix games. Polymatrix games are a special class of n-player games with a succinct representation of the payoff functions. Each player's payoff is a sum of payoffs resulting from two-player games played with all the other players. This problem is known to be tractable when all interactions are zero-sum, and to be PPAD-hard in general. A special subclass of these games are team games, where each pair of interactions are either zero-sum (different teams) or coordination games (same team). Three team games are known to be PPAD complete. 
The main result of the paper is in showing that two team games are CLS hard (CLS is a structured subclass of PPAD). This result holds even when one of the team consists of independent adversaries (their games consist of the zero matrix for all the payoffs). They also show that computing the minimax/ KKT point of a bilinear polynomial is also CLS hard.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper solves a well formulated problem about the complexity of finding Nash equilibria. This problem is a natural continuation of prior results about team polymatrix games. The technical proofs and reductions are interesting and well written.\", \"weaknesses\": \"The main weakness is in the lack of appeal to a broader ICLR audience. The paper has solid results in complexity theory and game theory but requires some connection to the machine learning audience. That such a connection exists is not in itself in question, there are a plethora of papers about learning equilibria in team games, but the paper offers no discussion about the broader significance of studying team games. The open problems section also mentions gradient based methods that converge to equilibria in time poly(1/epsilon), but there is not further discussion.\", \"questions\": \"Could you add some discussion about the broader landscape of team games, why we might care about them (if not necessarily the two-player polymatrix team games), and about the best-known algorithmic results in this space, particularly in the context of learning dynamics?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Revised Draft\", \"comment\": \"We once again thank the reviewers for their positive comments. 
We have uploaded a revised draft including these clarifications and fixed typos (the changes are indicated in red).\"}", "{\"summary\": \"This paper studies the complexity of finding a Nash equilibrium in two-team polymatrix zero-sum games. They show that this problem is CLS-hard, and is in CLS if the adversaries are independent (thus establishing CLS-completeness in the latter case).\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"I think this is a good paper and vote to accept. The paper is clearly written and presents an interesting result. The hardness result about minimax KKT points is also rather clean, and may be of independent interest as a CLS-complete problem that may be relatively easy to make reductions from. The concerns below are very minor.\", \"weaknesses\": \"The section about ex-ante coordination contains some strange choices of phrasing. For example, all of the papers in that paragraph study extensive-form games (not just the last one), and the paper that shows \\\"efficient algorithms exist under some assumptions about the players\\u2019 information set\\\" is Zhang and Sandholm (2022), not Zhang et al. (2021).\\n\\nTo get parenthetical citations like (Lastname et al. 2023) instead of Lastname et al. (2023), use \\\\citep.\", \"questions\": \"Perhaps the most obvious gap in this paper is the CLS-membership without the independent adversaries assumption. Do you think there is any hope to extend your techniques to that case?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their positive comments.\", \"regarding_the_concern_about_the_simplicity_of_the_proof_and_the_lack_of_novel_ideas\": \"Although the proof turns out to be quite simple, coming up with the construction in Lemma 3.1 is not trivial. 
This part required a novel construction that takes advantage of the max-variables to impose an additional constraint on the min-variables, which is crucial to get a bilinear objective and make the reduction work. This is the main novel technical contribution of our paper.\\n\\nThank you for telling us about the typo on line 386. We will fix it in the updated version.\"}", "{\"comment\": \"We thank the reviewer for their positive comments.\", \"regarding_the_question_about_the_possibility_of_extending_the_cls_membership_to_the_setting_without_the_independent_adversaries_assumption\": \"This is a very challenging question. It seems very unlikely that our technique for CLS-membership would extend to the setting without the independent adversaries assumption. On the other hand, the question of whether the min-max problem with uncoupled constraints is PPAD-hard is a major open question. Assuming that one could show PPAD-hardness for this general version of min-max, then we believe that our techniques could be used to extend this PPAD-hardness to two-team polymatrix games (without the independent adversaries assumption).\\n\\nWe will fix the citation displays and now use \\\\citep in appropriate locations.\"}", "{\"summary\": \"In this work, the authors investigate the computational complexity of computing a Nash equilibrium in two-team zero-sum polymatrix games where one team consists of independent players (i.e., players who do not interact with one another). Specifically, they prove that this problem is complete for the complexity class CLS. To demonstrate hardness, they first reduce from MinQuadraticKKT\\u2014the problem of computing a KKT point of a quadratic function with box constraints\\u2014to MinmaxIndKKT, a min-max problem with an independence property they define. In a second step, they reduce this problem to a two-team zero-sum polymatrix game. 
Membership in CLS follows fairly straightforwardly from the recent result that QuadraticKKT is complete for the class CLS, as shown in [1], and using LP duality for transforming a min-max problem into a minimization.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper is generally well-written, though there are areas where phrasing could be improved. The problem under consideration is quite interesting and represents a step forward in establishing complexity results for two-team (zero-sum) games, i.e., min-max optimization problems beyond the case of having coupled constraints as in [2]. Of course, the more general case of having dependent adversaries (or that of a single adversary) remains open, as the authors highlight in Section 5.\", \"weaknesses\": \"I cannot identify any obvious weaknesses. Although the techniques and ideas are not particularly complex\\u2014as is often the case in results of this kind\\u2014this should not in itself be considered a weakness. However, the simplicity of the proof and the lack of novel ideas makes me more skeptical about my final score.\", \"questions\": \"- **Line 386**: In the reduction from MinmaxIndKKT, the authors define the candidate KKT point $(x_i, y_i)$ for the case where neither $x_i$ nor $y_i$ is in ${0, 1\\\\}$ as $x_i = a_i$ and $y_i = d_i$. I assume that $a_i$ is simply a typo, as $a_i$ is already used to denote player $i$ on the first team. I think the authors likely intended to use $p_i$ and $q_i$ for $x_i$ and $y_i$, which would also align with the statement in line 415 indicating that these variables are close to their respective counterparts, $p_i$ and $q_i$.\\n\\nReferences \\n[1] The complexity of computing KKT solutions of quadratic programs.\\n\\n[2] Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. 
The complexity of constrained min-max optimization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response Part I\", \"comment\": \"We thank the reviewer for their positive comments.\\n\\nRegarding the relevance to a broader ICLR audience, our work has a multi-fold relevance to the ML audience who are broadly interested in minmax optimization and algorithmic game theory.\\n\\n### Relevance to minmax optimization\\nAlthough we study this problem motivated by two-team polymatrix games with adversaries, the hardness results that we show also apply to the _simplest_ non-convex concave minmax problem, i.e., bilinear non-convex concave. Our results strictly rule out the possibility of obtaining linear convergence. In contrast, bilinear zero-sum games and convex-concave games admit algorithms that converge linearly to the NE, for example see [Wei et al., 2020, Sokota et al., 2022, Mingyang et al., 2022] (papers that appeared at ICLR).\\n\\nSince the advent of GANs [Goodfellow et al., 2020], there has been a natural interest to study the best algorithms for it and as a result clearly understand the complexity of non-convex non-concave minmax [Daskalakis et al., 2021, Daskalakis 2021]. The current known PPAD-hardness results for this setting require _coupled_ constraints, which is unnatural for games or GANs. Our hardness results, on the other hand, are applicable to natural settings where the constraints are uncoupled, making it a ``natural'' nonconvex-concave minmax problem with known hardness. \\nMoreover, our completeness results for the polymatrix setting poses an interesting question about the algorithms that achieve the best dependence on $(1/\\\\varepsilon)$. We show that our minmax problem can be reduced to a minimization problem and note that to compute a NE, we require convergence only to a _first-order stationary point_. 
\nThis minimization problem has connections to numerous practical problems such as Quadratic Assignment problems, Power flow optimization, Portfolio Optimization, etc.\"}
More generally, polymatrix games are used to model problems such as coordination games on graphs [Apt et al., 2017, Apt et al., 2022] and this has applications in semi-supervised learning methods such as graph transduction [Aykut and Pelillo 2012, Vascon et al., 2020].\\n\\nFinally, as we state, our lower bounds automatically apply to the multiagent reinforcement learning setting, since our games are stateless.\\n\\n### On Gradient Based Algorithms\\n\\nOur setting of two-team polymatrix games can be viewed as a special case of the two-team adversarial games studied by [Anagnostides et al., 2023]. Although, they describe their GradientDescentMAX algorithm for a single adversary, it can be applied to the case of many independent adversaries. Using their algorithm on the minmax objective gives us the $O(poly(size).1/\\\\varepsilon^4)$ convergence rate to an $\\\\varepsilon$-approximate NE.\", \"references\": \"Goodfellow, Ian, et al. \\\"Generative adversarial networks.\\\" Communications of the ACM 63.11 (2020): 139-144.\\n\\nDaskalakis, Constantinos. \\\"Non-concave games: A challenge for game theory\\u2019s next 100 years.\\\" Nobel symposium\\u201d One Hundred Years of Game Theory: Future Applications and Challenges. 2021.\\n\\nDaskalakis, Constantinos, Stratis Skoulakis, and Manolis Zampetakis. \\\"The complexity of constrained min-max optimization.\\\" Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing. 2021.\\n\\nCai, Yang, et al. \\\"Zero-sum polymatrix games: A generalization of minmax.\\\" Mathematics of Operations Research 41.2 (2016): 648-655.\\n\\nDaskalakis, Constantinos, and Christos H. Papadimitriou. \\\"On a network generalization of the minmax theorem.\\\" International Colloquium on Automata, Languages, and Programming. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.\\n\\nErdem, Aykut, and Marcello Pelillo. 
\\\"Graph transduction as a noncooperative game.\\\" Neural Computation 24.3 (2012): 700-723.\\n\\nSokota, Samuel, et al. \\\"A unified approach to reinforcement learning, quantal response equilibria, and two-player zero-sum games.\\\" arXiv preprint arXiv:2206.05825 (2022).\\n\\nWei, Chen-Yu, et al. \\\"Linear last-iterate convergence in constrained saddle-point optimization.\\\" arXiv preprint arXiv:2006.09517 (2020).\\n\\nLiu, Mingyang, et al. \\\"The power of regularization in solving extensive-form games.\\\" arXiv preprint arXiv:2206.09495 (2022).\\n\\nApt, Krzysztof R., Sunil Simon, and Dominik Wojtczak. \\\"Coordination games on weighted directed graphs.\\\" Mathematics of Operations Research 47.2 (2022): 995-1025.\\n\\nApt, Krzysztof R., et al. \\\"Coordination games on graphs.\\\" International Journal of Game Theory 46 (2017): 851-877.\\n\\nVinyals, Oriol, et al. \\\"Grandmaster level in StarCraft II using multi-agent reinforcement learning.\\\" nature 575.7782 (2019): 350-354.\\n\\nBerner, Christopher, et al. \\\"Dota 2 with large scale deep reinforcement learning.\\\" arXiv preprint arXiv:1912.06680 (2019).\\n\\nAnagnostides, Ioannis, et al. \\\"Algorithms and complexity for computing nash equilibria in adversarial team games.\\\" arXiv preprint arXiv:2301.02129 (2023).\"}", "{\"comment\": \"Thank you. My opinion of the paper has not changed, and I will keep my score.\"}", "{\"comment\": \"Thanks for the considered response. It has helped me situate the significance of the hardness result, specifically about this being the easiest non-convex concave minimax problem that has a hardness result blocking linear convergence.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}" ] }
9UxC2J7Pup
Understanding Nonlinear Implicit Bias via Region Counts in Input Space
[ "Jingwei Li", "Jing Xu", "Zifan Wang", "Huishuai Zhang", "Jingzhao Zhang" ]
One explanation for the strong generalization ability of neural networks is implicit bias. Yet, the definition and mechanism of implicit bias in non-linear contexts remain little understood. In this work, we propose to characterize implicit bias by the count of connected regions in the input space with the same predicted label. Compared with parameter-dependent metrics (e.g., norm or normalized margin), region count can be better adapted to nonlinear, overparameterized models, because it is determined by the function mapping and is invariant to reparametrization. Empirically, we find that small region counts align with geometrically simple decision boundaries and correlate well with good generalization performance. We also observe that good hyper-parameter choices such as larger learning rates and smaller batch sizes can induce small region counts. We further establish theoretical connections and explain how a larger learning rate can induce small region counts in neural networks.
[ "implicit bias", "region counts", "non-linear neural network", "generalization gap" ]
Reject
https://openreview.net/pdf?id=9UxC2J7Pup
https://openreview.net/forum?id=9UxC2J7Pup
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wslpV38wqW", "r9B85GFMb9", "qqW3HnJsFm", "qMA0sk7xnT", "oRDq5q2Cut", "jDH7GDdoT9", "igTZZX2bPf", "gBh93sDNka", "dlsFp7MjPA", "c8RgBrqqvF", "VQqKFGAzYh", "V9BXGyyeuQ", "UtR6DRCqzo", "PBjJcYuJZe", "KC1ipnPQ73", "Im3Wt5mKo2", "IZuGLyyRuv", "HzsvBjTP62", "GVSrC8KpWE", "E9r0ZEbeaM", "ALbHSp2VVa", "5IeBfWhUGO", "4equugHxyt" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730763611793, 1730664840965, 1733097261123, 1731834365364, 1732364660632, 1731835145159, 1733158444929, 1732188748179, 1731834147966, 1730646693317, 1731835228581, 1737523702152, 1732149676092, 1732117677594, 1731834202802, 1732825457068, 1734882849554, 1733097240530, 1732842037481, 1730353181435, 1732842053567, 1732842230477, 1732186317212 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_o6TY" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_SnZS" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_SnZS" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_LVqU" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_o6TY" ], [ 
"ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_LVqU" ], [ "ICLR.cc/2025/Conference/Submission5366/Area_Chair_FjWa" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_V8HX" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_V8HX" ], [ "ICLR.cc/2025/Conference/Submission5366/Authors" ], [ "ICLR.cc/2025/Conference/Submission5366/Reviewer_V8HX" ] ], "structured_content_str": [ "{\"summary\": \"This paper motivates and studies a novel generalization measure for neural networks: the number of connected regions in the input space. The paper first describes current challenges in connecting generalization to geometric properties of neural networks, and then introduces the proposed generalization measure. Extensive experiments assess the correlation between a small number of connected regions and generalization. Finally, the paper concludes by providing a theoretical link between the region count (in the training data) and the learning rate of (S)GD used during training.\", \"update\": \"changed my score to 6.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This is an interesting paper to read. I would like to highlight the following strengths:\", \"Novel (empirical) measure for generalization: To the best of my knowledge, the proposed measure (input space connected region count) has not been studied theoretically or experimentally before, and the experimental findings are interesting. 
Namely, the paper finds that, for various configurations, the proposed measure (or, better, an estimate of it) seems to correlate well with the generalization gap.\", \"Extensive experiments: The experimental analysis is very thorough, including many datasets, models and hyperparameter choices.\", \"Theoretical result on the relation between region count and learning rate: Theorem 1 (partially) explains the empirical finding that a larger learning rate is associated with a smaller region count (and, as a result, smaller generalization gap). I found the result simple yet creative.\"], \"weaknesses\": [\"I believe that the community will find the findings and observations of this paper interesting. However, I identified a few areas where the paper could improve substantially:\", \"Omission of important details from introduction: The paper almost exclusively studies a specific approximation of the input space region count, which measures the number of prediction changes along *a line that connects two points from the training data*. However, this detail is not mentioned in the introduction, but only much later. Furthermore, many readers, while going through the paper for the first time, might confuse the introduced measure with the number of linear regions of the neural network. While this is clarified later in the paper (pg. 3), a short explanation in the introduction would be helpful.\", \"Weak/misleading section on \\\"Motivation\\\" (Section 3): I found Section 3 to be entirely misleading. The norm and margin quantities considered have **no reason** to be correlated with generalization for a ResNet18. First, they are not properly normalised, which is acknowledged later, so I do not see the reason for considering them in the first place. Second, there is no clear understanding that SGD on a ResNet will necessarily increase a specific notion of margin or minimise a specific norm. 
If someone wants to assess such connections, they should probably focus on simpler models (such as homogeneous neural networks) and control for many confounders. I understand and agree with the general point of this section (i.e., that accurate generalization measures based on the parameters of a \\\"practical\\\" network may be challenging to find in practice), but I found the motivating experiments problematic. I insist on this, since this section is the starting point of the paper and might mislead many readers. I would suggest being more precise in this section. Further concrete comments on this section: 1) Equation in line 172 is missing the distribution with respect to which the expectation is taken. This is crucial, as it is not clear whether it applies to train or test data (it should be train). 2) It would be good to define the input-space margin which you mention in line 188 for the first time.\", \"Insufficient theoretical link to generalization: While I understand this is mainly an experimental paper, I was disappointed that there is no discussion on how the proposed measure can perhaps be related to improved generalization. For example, there is no mention of the self-evident property that a very small region count is undesirable (for region count equal to 1, we obtain a trivial predictor). An example of a satisfying result would be that (S)GD biases the model to implicitly minimise $R(\\\\theta)$ under the constraint that all the train points are classified correctly together with perhaps more constraints from the hyperparameters (akin to results that exist for gradient descent on homogeneous neural networks and margin maximization). Should we hope to prove such a result? Do you believe that such a result could be true? While this point alone is not brought up to dissuade acceptance of the paper, I would appreciate any thoughts the authors have on this.\"], \"questions\": [\"line 362: why is there a norm in the definition of the $\\\\ell_2$ loss? 
The neural network is defined to have real output.\", \"line 366: remove the word \\\"the\\\" before \\\"assumption\\\".\", \"line 381: the term \\\"Hessian $\\\\ell_2$ norm\\\" sounds strange.\", \"Table 2: what are the reported results? correlation? This is not mentioned.\", \"There seems to be a typo in line 797 (\\\"Table 4\\\" appears twice).\", \"proof of Lemma 2, line 988: shouldn't the condition for the inner products be for $i$ and $i+1$, instead of $i$ and $i+2$?\", \"line 042, \\\"results from linear regime can be extended to linear neural networks\\\": this sentence is confusing. Similarly in line 037. Linear neural networks are in the linear regime. You can just mention that results can be extended for the deep case.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces *region count* as a new metric for understanding generalization in neural networks by examining the consistency of predictions across connected regions in the input space. The authors suggest that region count, which reflects the complexity of a model\\u2019s decision boundaries, is a more effective generalization indicator than traditional parameter-based metrics. Empirical results across various architectures and hyperparameter settings show that models with lower region counts tend to generalize better. Additionally, the authors' theoretical analysis links large learning rates to simpler decision boundaries, which may enhance generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **Innovative Metric**: The introduction of region count to measure implicit bias in non-linear neural networks addresses limitations in parameter-dependent metrics, offering a fresh perspective on generalization.\\n3. 
**Hyperparameter Insights**: Findings around learning rate and batch size impacting region count offer practical value, guiding hyperparameter choices for improved generalization.\\n4. **Theoretical Basis**: Theoretical analysis, albeit in a simplified setup, provides a foundation for understanding the implicit bias induced by large learning rates, which aligns with the empirical findings.\", \"weaknesses\": \"### Weaknesses\\n\\n1. **Limited Scope of Theoretical Analysis**: The theoretical section mainly focuses on a two-layer ReLU model, which limits its generalizability. Extending this analysis to more complex architectures would strengthen the theoretical contributions.\\n\\n2. **Scalability Challenges in High Dimensions**: Estimating region count in high-dimensional spaces can be computationally intensive. While the proposed low-dimensional approximation is valuable, further analysis of its scalability is needed.\\n\\n3. **Narrow Experimental Domain**: The experiments are primarily conducted in the image domain, which restricts the broader applicability of the region count metric. Extending validation to tasks like NLP would enhance the metric\\u2019s relevance across varied contexts.\\n\\n4. **Limited Comparative Analysis with Complexity Metrics**: Although norm-based metrics are discussed, deeper comparisons with other complexity measures\\u2014such as sharpness- and margin-based metrics\\u2014would offer a more comprehensive view of the advantages and limitations of region count.\\n\\nThe work of Andriushchenko et al. (2023) highlights how sharpness correlates positively with ResNet architectures but is less effective on Transformers and ViTs. For example, CIFAR and ResNet benchmarks alone may not sufficiently clarify the relationship between sharpness and generalization. Similarly, flatness measures perform well on CNNs, a setting similar to the one used here. 
Extending the method to NLP datasets and Transformer architectures, however, could better demonstrate its broader utility.\\n\\nThe empirical study\\u2019s impact is limited because the evaluated networks belong to the same family, raising questions about scalability to modern architectures like Transformers and ViTs. Andriushchenko\\u2019s findings also indicate that other flatness metrics struggle to adapt to these modern settings, suggesting that additional validation on ResNet and Transformer models could yield stronger insights. Though the method is claimed to be effective, the evidence provided lacks sufficient experimental depth. A more robust evaluation could include comparisons with sharpness metrics where they perform well or tests on edge cases, such as out-of-distribution generalization, to better establish the method\\u2019s efficacy.\", \"questions\": \"1. **Link to Certified Radius and Generalization Measures**: The connection between certified radius, margin, and norm-based measures may not be sufficient. Could metrics such as the ratio of a neural network\\u2019s margin to its Lipschitz constant provide additional insight? This ratio is directly related to the certified radius, representing the region where a classifier\\u2019s predictions remain unchanged (see Tsuzuku et al., 2018).\\n\\n2. **Scalability**: How does the computational cost of estimating region count scale with model size and dimensionality, especially in NLP domains?\\n\\n3. **Extensions to Complex Architectures**: Can the theoretical framework be feasibly extended to deeper or more sophisticated architectures, and would insights about learning rate and region count remain applicable?\\n\\n4. **Generalization Beyond Classification**: Could region count metrics be adapted to other types of tasks, such as regression or structured prediction?\\n\\n5. 
**Impact of Additional Hyperparameters**: Beyond learning rate and batch size, how do other hyperparameters\\u2014like weight decay, optimizer choice, and normalization layers such as BatchNorm or LayerNorm\\u2014affect region count and generalization?\\n\\n6. **Comparison with Other Generalization Metrics**: How does the generalization potential of region counts compare with other metrics such as Lipschitz continuity, flatness, etc.? (See Jiang et al., 2020).\\n\\n7. **Role of Implicit Regularization**: High learning rates (linked to weight decay effects with normalization) and small batch sizes (as noted by Keskar et al., 2017) are already associated with implicit regularization. The paper suggests that this is explained through sharp minima, but a more precise explanation could clarify this link.\\n\\n8. **Connection to Certified Radius in Adversarial Robustness**: Discuss the relationship between region count and certified radius, particularly regarding Tsuzuku\\u2019s work, which uses the ratio of margin and Lipschitz constant as a robustness measure.\\n\\n9. **Comparison to Sharpness**: Sharpness measures are data-dependent, yet the proposed method is also data-dependent. Would a comparison with sharpness metrics provide valuable insights?\\n\\n10. **Scalability to Larger Models**: Does the slicing method scale effectively to larger models, and how does it behave as the number of parameters increases? Testing it on MLP layers with an increasing number of features could offer insights.\\n\\n11. **Effectiveness of Scaling Techniques**: The scaling technique to invalidate the measure does not seem effective for the margin-to-Lipschitz-based ratio. Are there alternative approaches to validate or refine this measure?\\n\\n12. **Region Count Interpretation**: Could you clarify what is meant by region count when it is not an integer?\\n\\n### References\\n\\n- Tsuzuku, Y., Sato, I., & Sugiyama, M. (2018). 
*Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks*. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS).\\n\\n- Jiang, Y., Neyshabur, B., Mobahi, H., Krishnan, D., & Bengio, S. (2020). *Fantastic Generalization Measures and Where to Find Them*. In Proceedings of the 8th International Conference on Learning Representations (ICLR).\\n\\n- Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., & Tang, P. T. P. (2017). *On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima*. In Proceedings of the 34th International Conference on Machine Learning (ICML).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks again for your valuable feedback! Could you please let us know whether your concerns have been addressed? We are happy to make further updates if you have any other questions or suggestions.\"}", "{\"title\": \"Comments by Authors\", \"comment\": \"We extend our gratitude to the reviewer for the comments and suggestions. Below, we address the primary concerns that have been raised.\\n\\n>Q1: Including Table 5 in Appendix B to the main text.\\n\\n**A1:** Thank you for the suggestion. In the next version, we will move Table 5 from Appendix B to the main text to highlight the robustness of our conclusions to the number of sampling trials, further validating the method\\u2019s reliability.\\n\\n>Q2: It is necessary to show that a large learning rate or small batch size leads to better generalization.\\n\\n**A2:** Extensive prior research [1][2][3] has demonstrated that large learning rates and small batch sizes typically lead to better generalization, both in terms of generalization gap and top-1 accuracy. However, this is not the primary focus of our paper. 
Our study aims to explore the relationship between model generalization and region count, providing a novel perspective that complements existing findings.\\n\\n>Q3: The range of generalization gap appears large compared to typical CIFAR-10 performance. \\n\\n**A3:** We appreciate the reviewer\\u2019s observation. To clarify, **data augmentation is not used during the training of the neural networks in our study**, whereas [4][5] applied data augmentation when training ResNet models, likely contributing to their higher test accuracy. In Section 7, we **also conduct experiments with data augmentation**. As shown in Figure 7, with optimal hyperparameters and data augmentation, the test accuracy reaches approximately 93%. Importantly, our correlation between region count and generalization gap **remains consistent** regardless of whether data augmentation is applied.\\n\\n>Q4: Does region count maintain a consistent correlation after PCA?\\n\\n**A4:** We thank the reviewer for suggesting this direction. In our paper, the subspace is generated using convex combinations of training data, allowing each point on the subspace to **be expressed as convex combinations of the training datapoint coordinates**. These coordinates can then be fed into the neural network for label prediction, enabling us to compute the region count based on the classification of labels across the subspace.\\n\\nHowever, when using PCA to generate a subspace, we can determine the coordinates of points within the reduced-dimensional space but **cannot map these points back to their original coordinates in the input space**. Without the original coordinates, it is **not possible to input these points into the neural network for label prediction**. 
Therefore, we have not yet identified a direct method for calculating the region count on a PCA subspace.\n\n>Q5: Does the variation in the dimensionality $R^d$ affect the results of the sampled subspace in region count?\n\n**A5:** The region count increases with subspace dimensionality. However, as shown in Table 2, the correlation between region count and generalization remains consistently high, unaffected by dimensionality changes.\n\n>Q6: Is this paper the first to understand the correlation between generalization gap and hyper-parameters?\n\n**A6:** This paper does not directly analyze the correlation between generalization gap and hyperparameters. Instead, it studies the relationship between the generalization gap and region count, as well as region count and hyperparameters.\n\n>Q7: How is 'Connectedness' validated in practice? \n\n**A7:** Algorithm 1 in Appendix B details the computation of region count. The input space is divided into smaller grids, and **Breadth-First Search (BFS) is used to traverse adjacent grids with the same label**, identifying and counting connected components.\n\n>Q8: It is unclear how to interpret the results of the random flip. \n\n**A8:** Figures 6 and 7 present observations rather than interpretations, showing that mixup and data augmentation affect region count in different ways. These observations suggest that the mechanisms through which mixup and data augmentation improve model generalization may differ. Further exploration of this phenomenon is a promising direction for future research.\n\n>Q9: In Appendix B, the third paragraph mentions Table 4 twice.\n\n**A9:** Thank you for catching this typo. It has been corrected in the revised version.\n\nWe thank the reviewer once again for the valuable and helpful suggestions. \n\n**References**\n\n[1] Keskar, Nitish Shirish, et al. 
\\\"On large-batch training for deep learning: Generalization gap and sharp minima.\\\" arXiv preprint arXiv:1609.04836 (2016).\\n\\n[2] Jastrz\\u0119bski, Stanis\\u0142aw, et al. \\\"Three factors influencing minima in sgd.\\\" arXiv preprint arXiv:1711.04623 (2017).\\n\\n[3] Hoffer, Elad, Itay Hubara, and Daniel Soudry. \\\"Train longer, generalize better: closing the generalization gap in large batch training of neural networks.\\\" Advances in neural information processing systems 30 (2017).\\n\\n[4] He, Kaiming, et al. \\\"Deep residual learning for image recognition.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\\n\\n[5] https://github.com/kuangliu/pytorch-cifar\"}", "{\"comment\": \"We would like to express our sincere gratitude for the reviewer's constructive suggestions and comments. Since the deadline is approaching, we sincerely hope the reviewers can read our response. Please let us know if the reviewers have any comments about our response or any other additional concerns. We are eager to provide any further clarifications and discussions to help the evaluation.\"}", "{\"title\": \"Comments by Authors\", \"comment\": \"We thank the reviewer for the time spent on reviewing our work and for the very detailed comments. We **add experiments comparing against more metrics such as sharpness-based and Pac-Bayesian based**. Although our work aims to identify an interpretable implicit bias for nonlinear models rather than predicting the generalization gap, we are happy to add more empirical results if that could help strengthen our findings. Please find the details below.\\n\\n>Q1: Link to Certified Radius [5] and Generalization Measures.\\n\\n**A1:** We have carefully reviewed the paper on Certified Radius [5]. The margin-to-Lipschitz constant proposed in these works is primarily designed to **measure a neural network's robustness against adversarial perturbations, rather than its generalization ability**. 
Adversarial robustness aims to reduce vulnerability to intentionally crafted small input perturbations, while generalization focuses on the network's performance on unseen data. Since the input distributions in these two scenarios differ significantly, there is no direct connection between the two concepts.\\n\\n>Q2: How does the computational cost of estimating region count scale with model size and dimensionality, especially in NLP domains?\\n\\n**A2:** In Appendix B, we outline the method for calculating region count, which has a time complexity that does not scale with model size but instead grows exponentially with the dimension of the selected subspace. Our paper focuses on classification tasks, where regions are defined based on label consistency. Extending the definition of regions to NLP tasks remains an open challenge.\\n\\nTo explore the applicability of our approach to transformer-based models, we conduct an experiment using Vision Transformers (ViT) on CIFAR-10. The results show **a strong correlation of 0.84** between region count and the generalization gap, reinforcing the validity of our measure in this setting. Details of the hyperparameters used are summarized below:\\n\\n|Hyperparameters|Value|\\n|:---:|:----:|\\n|Learning rate|1e-4,5e-5,1e-5|\\n|Batch size|256,512,1024|\\n|Weight decay|1e-5,1e-6,1e-7|\\n\\n>Q3: Can the theoretical framework be feasibly extended to deeper or more sophisticated architectures?\\n\\n**A3:** We appreciate the reviewer\\u2019s insightful question. Computing region count for more complex architectures is indeed a challenging task, as it cannot be directly bounded using activation numbers. At present, we have not identified a feasible method for theoretical analysis in these cases. We consider this an important avenue for future research.\\n\\n>Q4: Could region count metrics be adapted to other types of tasks, such as regression or structured prediction?\\n\\n**A4:** We thank the reviewer for this thoughtful suggestion. 
To adapt region count metrics to regression tasks, a key step would be to define connectedness. One potential approach is to partition the regression error into small intervals and analyze the connectivity of input space points falling within each interval. We believe this is a promising direction for future exploration.\\n\\n>Q5: Beyond learning rate and batch size, how do other hyperparameters\\u2014like weight decay, optimizer choice, and normalization layers affect region count and generalization?\\n\\n**A5:** We thank the reviewer for this insightful question. Our preliminary experiments indicate that the relationships between these hyperparameters and region count are highly complex, making it challenging to capture them systematically within the scope of the current work. For this reason, we did not include detailed analyses in the main text. We consider this an important area for future research.\\n\\n>Q6: How does the generalization potential of region counts compare with other metrics such as Lipschitz continuity and flatness?\\n\\n**A6:** We thank the reviewer for this question. To compare the effectiveness of region count with other metrics, we conduct additional experiments on CIFAR-10 using ResNet-18. We measured the correlation between the generalization gap and spectral norm [1], PAC-Bayesian flatness [2], and region count:\\n\\n|Measure|Spectral Norm|PAC-Bayesian Flatness|Region Count|\\n|:---:|:----:|:----:|:----:|\\n|Correlation|0.77|-0.31|**0.98**|\\n\\nThe results demonstrate that region count exhibits a significantly stronger correlation with the generalization gap.\\n\\n>Q7: High learning rates (linked to weight decay effects with normalization) and small batch sizes are already associated with implicit regularization. 
The paper suggests that this is explained through sharp minima, but a more precise explanation could clarify this link.\\n\\n**A7:** Numerous prior studies [6][7][8] have investigated the effects of learning rate and batch size on generalization, showing that large learning rates and small batch sizes often lead to better generalization. We acknowledge that our explanation could be more precise, and we will include additional clarification and discussion on this topic in the next version of the paper.\"}", "{\"comment\": \"I thank the authors for their responses, but I believe they did not fully address my concern regarding the certified radius. Specifically, they considered the ratio as Lipschitz over margin, while the certified radius is typically defined as margin over Lipschitz. Additionally, it would have been beneficial to provide more theoretical insights into the proposed method.\\n\\nThat said, the authors have addressed some of my other concerns. However, I still believe the paper should not be accepted. Nevertheless, based on their clarifications, I am upgrading my score to 5.\"}", "{\"title\": \"The caption for Figure 7\", \"comment\": \"We sincerely thank the reviewer for the feedback. In our experiments, we applied both random crop and random flip techniques for data augmentation and calculated the region count and generalization error. Figure 7 reflects the effect of applying both techniques. However, due to limited space in the figure title, we mentioned only \\\"random crop\\\" and omitted \\\"flip.\\\" We apologize for the misunderstanding and have clarified this issue in the updated version of our paper. Please refer to the latest PDF for details.\"}", "{\"title\": \"Comments by Authors\", \"comment\": \"We greatly appreciate the reviewer's comments and valuable suggestions. 
We address the reviewer's questions in more detail as follows:\n\n>Q1: Omission of important details from introduction.\n\n**A1:** We appreciate the reviewer's feedback on the missing details. In the updated version of our paper, we have revised the introduction to include the definition of region count and its distinction from the concept of linear regions. The changes are highlighted in blue.\n\n>Q2: The section on 'Motivation' in Section 3 is misleading. There are also some typos in Section 3.\n\n**A2:** The motivation section emphasizes that commonly used measures for characterizing implicit bias, such as norm and margin, are valid in specific cases (e.g., linear or homogeneous networks) [1][2][3]. However, these measures **fail to perform well for more general nonlinear networks like ResNet-18**. Previous research [4] also shows that these measures do not correlate with the generalization gap. Our experiment further demonstrates their limitations, highlighting the need for a new approach to characterizing implicit bias in general neural networks. This led us to **introduce region count, which effectively captures implicit bias**. We have addressed the typos and revised Section 3 to improve clarity, with changes highlighted in blue.\n\n>Q3: Insufficient theoretical link to generalization.\n\n**A3:** Thank you for raising this question. The question and suggested theorem are more closely related to optimization than to generalization. A classifier with a very small region count could indeed perform poorly. However, this **does not necessarily contradict the possibility of having a small generalization gap**. For instance, a naive classifier with region count 1 will perform poorly on both the training and test datasets, resulting in a small generalization gap.\n\nThe suggested theorem could potentially be achieved by proving that large learning rates still allow convergence to a minimizer under the setup considered in Section 6.2. 
We agree with the reviewer that this paper primarily focuses on the empirical and generalization aspects. A more rigorous theoretical analysis of the tradeoff between generalization and optimization in relation to region counts is a valuable direction for future work, and we plan to explore this in subsequent research.\\n\\n>Q4: Minor typos.\\n\\n**A4:** We thank the reviewer for pointing out the typos. These have been corrected in the updated version, with changes highlighted in blue. Specifically, for line 988 in the proof of Lemma 2, the condition for the inner products involving $i$ and $i+2$ is not a typo. We have added a more detailed explanation in the appendix to clarify this point.\\n\\nFinally, we thank the reviewer once again for the effort in providing us with valuable and helpful suggestions. We will continue to provide clarifications if the reviewer has any further questions.\\n\\n**Reference**\\n\\n[1] Soudry, Daniel, et al. \\\"The implicit bias of gradient descent on separable data.\\\" Journal of Machine Learning Research 19.70 (2018): 1-57.\\n\\n[2] Ji, Ziwei, and Matus Telgarsky. \\\"The implicit bias of gradient descent on nonseparable data.\\\" Conference on learning theory. PMLR, 2019.\\n\\n[3] Lyu, Kaifeng, and Jian Li. \\\"Gradient descent maximizes the margin of homogeneous neural networks.\\\" arXiv preprint arXiv:1906.05890 (2019).\\n\\n[4] Jiang, Yiding, et al. \\\"Fantastic generalization measures and where to find them.\\\" arXiv preprint arXiv:1912.02178 (2019).\"}", "{\"summary\": \"The paper proposes to quantify implicit bias through the lens of linear regions in deep networks. The authors find some empirical relations between the number of linear regions and the ability of the architecture to generalize. 
Trends with respect to the number of regions and hyperparameters are also highlighted.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper proposes to study an important problem which is the quantification of implicit bias with deep networks. While numerous measures have emerged, it does seem that using linear regions offers a promising direction. The paper also explores a few different exploratory sets of experiments showing trends with hyperparameters, which can be useful to help decide about optimization or regularization settings to employ.\", \"weaknesses\": \"The major weakness of the paper is in failing to cite an entire corpus of work that already explored that direction before.\\n\\nStarting with the seminal work of Montufar (On the Number of Linear Regions of Deep Neural Networks) which also studies the impact of the number of linear regions on performance and its tie with the architecture. Many follow-up works by the same authors delve deeper into that exact research problem as well: Sharp bounds for the number of regions of maxout networks and vertices of Minkowski sums. A convolutional network specific study also comes from (On the Number of Linear Regions of Convolutional Neural Networks).\", \"In parallel, a whole set of studies from Baraniuk also looks at that exact problem\": [\"A Spline Theory of Deep Networks\", \"SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries\", \"Deep Networks Always Grok and Here is Why\", \"all involving counting regions and relating that to test performance and generalization, as well as depicting comparisons with optimizers and architectures, as provided by the current submission.\"], \"questions\": \"Without citing prior work that looks at the exact same problem studied here, and thus without any discussion or comparisons, it is hard to assess the novelty of the current submission. 
A priori, it seems that the proposed findings and methods have already been studied before, hence making the current submission fall below the acceptance level. However, I encourage the authors to precisely cite and compare those references to their submission and specifically demonstrate how/where they provide novelty.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Comments by Authors\", \"comment\": \">Q8: Discuss the relationship between region count and certified radius [5], which uses the ratio of margin and Lipschitz constant as a robustness measure.\\n\\n**A8:** We thank the reviewer for this insightful question. Region count and certified radius are indeed related, as both quantify changes in a neural network's predictions within the input space. The key difference lies in their scope: region count is **a global measure**, capturing the network's overall prediction behavior across the entire input space, while certified radius is **a local measure**, focusing on the extent of perturbation required to alter a prediction. A promising direction for future work could involve exploring whether certified radius provides an upper bound for region count.\\n\\n>Q9: Would a comparison with sharpness metrics provide valuable insights?\\n\\n**A9:** We thank the reviewer for the question. To address this, we conduct additional experiments on CIFAR-10 using ResNet-18, comparing sharpness metrics from PAC-Bayesian bounds, including those using the origin and initialization as reference tensors (PB-I and PB-O), as well as PAC-Bayesian Magnitude-aware Perturbation Bounds (PB-M-I and PB-M-O) [2][3][4]. 
The results are summarized below:\\n\\n|Measure|PB-I|PB-O|PB-M-I|PB-M-O|Region Count|\\n|:---:|:----:|:----:|:----:|:----:|:----:|\\n|Correlation|-0.35|-0.31|0.79|0.78|**0.98**|\\n\\nThe results demonstrate that region count shows a significantly stronger correlation with the generalization gap compared to these sharpness metrics.\\n\\n>Q10: Does the slicing method scale effectively to larger models? \\n\\n**A10:** In our paper, we already explored the correlation across neural networks with different parameter sizes, finding it to be largely consistent. To further address this question, we conduct additional experiments using **ResNet models with varying depths** on CIFAR-10. The results are summarized below:\\n\\n|Network|Resnet18|Resnet34|Resnet50|Resnet101|Resnet152|\\n|:---:|:----:|:----:|:----:|:----:|:----:|\\n|Correlation|0.98|0.98|0.97|0.98|0.96|\\n\\nThese results indicate that the correlation remains consistently high as the number of parameters increases.\\n\\n>Q11: The scaling technique to invalidate the measure does not seem effective for the margin-to-Lipschitz-based ratio. Are there alternative approaches to validate or refine this measure?\\n\\n**A11:** We thank the reviewer for raising this question. The Lipschitz constant of a neural network is upper-bounded by the product of the spectral norms of its weight matrices across all layers. To evaluate the effectiveness of the margin-to-Lipschitz-based ratio, we conduct experiments to verify the correlation between the sum of spectral norms over margin [4] and the generalization gap. 
The results are summarized below:\\n\\n|Measure|SPECTRAL/MARGIN|Region Count|\\n|:---:|:----:|:----:|\\n|Correlation|0.32|**0.98**|\\n\\nThe results indicate that the margin-to-Lipschitz-based ratio shows a weak correlation, suggesting it may not be a reliable predictor of generalization compared to region count.\\n\\n>Q12: Could you clarify what is meant by region count when it is not an integer?\\n\\n**A12:** Region count is defined **as an expectation**. As detailed in Appendix B, we calculate it by randomly sampling 100 subspaces and averaging the region counts across these subspaces. While the region count for each individual subspace is an integer, **the averaged value is not necessarily an integer**.\\n\\nWe thank the reviewer once again for the valuable and helpful suggestions. We would be happy to provide further clarifications if the reviewer has any additional questions.\\n\\n[1] Bartlett, Peter L., Dylan J. Foster, and Matus J. Telgarsky. \\\"Spectrally-normalized margin bounds for neural networks.\\\" Advances in neural information processing systems 30 (2017).\\n\\n[2] Keskar, Nitish Shirish, et al. \\\"On large-batch training for deep learning: Generalization gap and sharp minima.\\\" arXiv preprint arXiv:1609.04836 (2016).\\n\\n[3] Neyshabur, Behnam, et al. \\\"Exploring generalization in deep learning.\\\" Advances in neural information processing systems 30 (2017).\\n\\n[4] Jiang, Yiding, et al. \\\"Fantastic generalization measures and where to find them.\\\" arXiv preprint arXiv:1912.02178 (2019).\\n\\n[5] Tsuzuku, Yusuke, Issei Sato, and Masashi Sugiyama. \\\"Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks.\\\" Advances in neural information processing systems 31 (2018).\\n\\n[6] Keskar, Nitish Shirish, et al. \\\"On large-batch training for deep learning: Generalization gap and sharp minima.\\\" arXiv preprint arXiv:1609.04836 (2016).\\n\\n[7] Jastrz\\u0119bski, Stanis\\u0142aw, et al. 
\\\"Three factors influencing minima in sgd.\\\" arXiv preprint arXiv:1711.04623 (2017).\\n\\n[8] Hoffer, Elad, Itay Hubara, and Daniel Soudry. \\\"Train longer, generalize better: closing the generalization gap in large batch training of neural networks.\\\" Advances in neural information processing systems 30 (2017).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your feedback and recognition of our efforts! We will further refine the discussion on motivation in Section 3 in the next version of the paper.\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you very much for your reply and for taking the time to answer my questions.\\n\\nMost of the changes in the manuscript look very good. Specifically,\\n\\n> Specifically, for line 988 in the proof of Lemma 2, the condition for the inner products involving and is not a typo. We have added a more detailed explanation in the appendix to clarify this point.\\n\\nI see. Thank you for explaining this further. There is a minor typo in the included equation: In lines 992 and 995, it should be $\\\\tilde{x}$, instead of $x$. \\n\\nTo be honest, my concerns with the very problematic Section 3 have not been addressed, since I find the train of thought very misleading. As I stated in my review, \\\"the norm and margin quantities considered have no reason to be correlated with generalization for a ResNet18\\\". They are generalization measures for a different learning problem. It is akin to using generalization guarantees for decision trees and claiming that the theory is wrong if they do not correlate with generalization in transformers with skip connections and relative positional encoding. However, since I agree with the high-level idea of this section and the authors do cite [Jiang et al., 2019], where a more thorough discussion is included, I will not insist on this further. I leave it to the readers to think critically. 
I updated my score to reflect this.\"}", "{\"title\": \"Comments by Authors\", \"comment\": \"We thank the reviewer for the comments and constructive suggestions. In the following, we address the main concern raised. Please find the details below.\\n\\n>Q1: Starting with the seminal work of Montufar which also studies the impact of number of linear regions on performances and its tie with the architecture. Discuss the difference.\\n\\n**A1:** In the \\\"Related Work\\\" section, specifically under \\\"Region Counts of Neural Networks,\\\" we discuss prior works on linear regions and have already clarified the distinction between linear regions and our definition of region count.\\n\\nLinear regions, as defined in prior works [1][2][3][4][5], refer to the set of inputs **corresponding to the same activation pattern** in the network. In contrast, our definition of decision regions refers to **connected areas in the input space that correspond to the same label**. Unlike linear regions, which are independent of the labels output by the neural network and primarily focus on the network's representation capability, our approach is more closely tied to the network's generalization ability. Therefore, the analysis of linear regions fundamentally differs from our work.\\n\\n\\n>Q2: Discuss the relationship with the work of [6][7][8].\\n\\n\\n**A2:** We thank the reviewer for suggesting these papers [6][7][8], and we have carefully read them.\\n\\nPaper [6] introduces a theoretical framework representing deep networks as max-affine spline operators (MASOs) via spline functions and operators. It proposes a regularization term based on this theory, improving classification performance. 
Paper [7] uses spline partition geometry to characterize and analyze the geometry of neural network decision boundaries, while paper [8] investigates training dynamics, such as grokking and delayed robustness, through the number of spline regions.\\n\\nThese works provide an interesting perspective on partitioning the input space of neural networks using geometric properties, specifically leveraging the continuous piecewise linear (CPWL) activation functions. Their method **divides the input space into linear regions determined by hyperplanes derived from each network layer**, reflecting the geometric structure of the network rather than directly correlating with its output labels.\\n\\nIn contrast, our method **partitions the input space based explicitly on the network's predicted labels**, ensuring that each region corresponds to a specific label. This distinction ties our approach more directly to the network\\u2019s labeling complexity and allows us to focus on its correlation with generalization ability. We will incorporate these points into the discussion of related work in the revised paper.\\n\\nFinally, we thank the reviewer once again for the effort in providing us with valuable suggestions. We will continue to provide clarifications if the reviewer has any further questions.\\n\\n\\n**References**\\n\\n[1] Montufar, Guido F., et al. \\\"On the number of linear regions of deep neural networks.\\\" Advances in neural information processing systems 27 (2014).\\n\\n[2] Xiong, Huan, et al. \\\"On the number of linear regions of convolutional neural networks.\\\" International Conference on Machine Learning. PMLR, 2020.\\n\\n[3] Hanin, Boris, and David Rolnick. \\\"Deep relu networks have surprisingly few activation patterns.\\\" Advances in neural information processing systems 32 (2019).\\n\\n[4] Hanin, Boris, and David Rolnick. \\\"Complexity of linear regions in deep networks.\\\" International Conference on Machine Learning. 
PMLR, 2019.\\n\\n\\n[5] Serra, Thiago, Christian Tjandraatmadja, and Srikumar Ramalingam. \\\"Bounding and counting linear regions of deep neural networks.\\\" International conference on machine learning. PMLR, 2018.\\n\\n[6] Balestriero, Randall. \\\"A spline theory of deep learning.\\\" International Conference on Machine Learning. PMLR, 2018.\\n\\n[7] Humayun, Ahmed Imtiaz, et al. \\\"Splinecam: Exact visualization and characterization of deep network geometry and decision boundaries.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[8] Humayun, Ahmed Imtiaz, Randall Balestriero, and Richard Baraniuk. \\\"Deep networks always grok and here is why.\\\" arXiv preprint arXiv:2402.15555 (2024).\"}", "{\"title\": \"Questions\", \"comment\": \"I thank the authors for their answer, however I disagree with the authors comments that there is no connection between the two definitions (input space linear regions and class regions). First, at deeper layers, the inputs of similar classes will fall within the same linear regions of those deeper layers (hence making the two definition equivalent). This is happening as ultimately (last layer) if this isn't the case then separation can not occur. Second, there has been known positive relationship between the ability to have high count of input linear regions around training points and the ability of a model to classify those samples. This--to me--deserves at least a strong discussion and comparison section. See for example figure 3 of [1]. I am thus keeping my score.\\n\\n[1] NEURAL ARCHITECTURE SEARCH ON IMAGENET IN FOUR GPU HOURS:\\nA THEORETICALLY INSPIRED PERSPECTIVE\\nWuyang Chen, Xinyu Gong, Zhangyang Wang\"}", "{\"metareview\": \"This paper proposes region count in input space as a measure of generalization. The paper empirically demonstrates the strong correlation between region count and generalization gap. 
The paper also theoretically shows that a larger learning rate can lead to a smaller region count. The experimental results are thorough and strong, and the theoretical result is novel and the connection to edge-of-stability is interesting. A key limitation of the work is that region count is computationally intractable in high-dimensional space and the paper adopts an approximation of it in a very low dimension. While experiments show that the correlation remains robust across varying dimensions, the work lacks a principled justification for the reliability of this approximation.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer LVqU had a major concern about the lack of discussion/comparison between region count and linear regions. The authors explained their differences in the rebuttal, and the AC found the explanation sufficient.\"}", "{\"comment\": \"Thanks again for your valuable feedback! Could you please let us know whether your concerns have been addressed? We are happy to make further updates if you have any other questions or suggestions.\"}", "{\"comment\": \"We sincerely appreciate the reviewer's feedback on our paper. We would like to clarify several points:\\n\\nFirst, we agree that **the number of linear regions provides an upper bound for the number of decision regions**, as points with identical activation patterns will fall within the same linear region and thus share the same output. However, the reverse is not necessarily true: **points with slightly different activation patterns may still yield the same label prediction, particularly in classification tasks where the output label space is limited**, despite the network\\u2019s potentially complex representations. This results in a significant difference in magnitude between the two quantities. 
For example, Figure 3 in [1] and Figure 4 in our paper **demonstrate a difference of roughly 1000 times**.\\n\\nSecond, regarding the correlation between linear regions and test accuracy, we would like to emphasize that the points in Figure 3 of [1] **correspond to different network architectures**, where the training hyperparameters (learning rate, weight decay, batch size) are fixed. This analysis primarily investigates the relationship between the network's representation ability and its test accuracy, rather than how specific training strategies can improve performance. While it is intuitive that increasing a network's expressive capacity could allow it to represent more information and improve accuracy, it can also lead to over-parameterization, which helps explain the relatively weak correlation (approximately 0.5) observed in [1]. In contrast, our paper focuses on **a fixed network architecture, where each point in Figure 4 represents a different combination of training hyperparameters** (as shown in Table 1). Here, we aim to explore how training strategies influence generalization. Our proposed measure, region count, **is strongly correlated with the generalization gap** (approximately 0.98), offering a potential way to describe implicit bias and a promising tool for selecting hyperparameters that optimize generalization.\\n\\nWe hope this clarifies our approach and highlights the distinction between the two analyses. Once again, we appreciate the reviewer\\u2019s thoughtful feedback. We would be happy to provide further clarifications if the reviewer has any additional questions.\\n\\n\\n**References**\\n\\n[1] Chen, Wuyang, Xinyu Gong, and Zhangyang Wang. \\\"Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective.\\\" arXiv preprint arXiv:2102.11535 (2021).\"}", "{\"summary\": \"In this paper, the authors introduce new insights of understanding implicit bias of non-linear neural networks. 
They find that region count, a measure of the complexity of decision boundaries, is well correlated with the generalization gap, defined as the difference between training and test errors. Their empirical and theoretical analyses show that the phenomenon of appropriate hyper-parameters leading to better generalization can be explained in terms of region counts. This work encourages future work expanding these findings to more general and practical settings, further revealing the underlying implicit bias of neural networks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper introduces a novel and useful metric for understanding the implicit bias behind neural networks\\u2019 generalization.\", \"Region count shows not only a high correlation with the generalization gap but also robustness across diverse architectures, datasets, and optimizers.\", \"This paper is well structured and well written. The authors\\u2019 claims are strongly supported by both theoretical analysis and empirical results on practical datasets.\"], \"weaknesses\": [\"The proposed region count method assumes a low-dimensional subspace of the training data. Although robustness to different dimension choices is shown in Section 7, it still relies on a substantial number of sampling trials (100 runs). However, Table 5 in Appendix B contains an ablation study on the impact of the number of experiment runs, which is never discussed in the main paper. Including these results in the main text would strengthen the method's validity.\", \"The proposed method provides an interesting explanation for the implicit bias whereby a large learning rate or small batch size facilitates superior generalization. 
While the authors state that this is 'typically deemed as beneficial for generalization\\u2019, it is necessary to show that a large learning rate or small batch size leads to better generalization in terms of both generalization gap and top-1 accuracy.\", \"Figure 4 and Table 8 report the correlation between generalization gap and region count on CIFAR-10 across different architectures. However, the range of generalization gap (y-axis) appears large compared to typical CIFAR-10 performance. For example, [1] reported that ResNet20 achieves an error rate of 8.75% on CIFAR-10, and [2] shows that ResNet18 achieves 93.02% accuracy, while in Table 8, ResNet18\\u2019s generalization gap is shown to be at least 15% (maximum 85% accuracy). Providing the top-1 accuracy for each architecture (under different hyper-parameters) would help prevent reader confusion.\", \"[1] He, Kaiming, et al. \\\"Deep residual learning for image recognition.\\\"\\u00a0*Proceedings of the IEEE conference on computer vision and pattern recognition*. 2016.\", \"[2] https://github.com/kuangliu/pytorch-cifar\"], \"questions\": [\"Does region count maintain a consistent correlation after dimensionality reduction (e.g., PCA)?\", \"Does the variation in the dimensionality $\\\\mathbb{R}^{d}$ affect the results of the sampled subspace in region count?\", \"Is this paper the first to study the correlation between generalization gap and hyper-parameters?\", \"In Definition 1, how is 'Connectedness' validated in practice for any $f(\\\\gamma(t))=c$ where $t \\\\in [0,1]$? For example, is it achieved through a grid search?\", \"In Figure 7, it is unclear how to interpret the results of the random flip. Are the results missing?\", \"In Appendix B, the third paragraph mentions Table 4 twice.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Based on their responses, I have decided to maintain my score. I hope the authors can incorporate these additional points into the final version of the paper.\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for acknowledging our work! We will incorporate these additional points into the final version of the paper.\"}", "{\"title\": \"Thank you for responding to my review\", \"comment\": \"I appreciate the authors' efforts to address my concerns. Most of my questions have been resolved.\\n\\nHowever, one part remains unclear to me, though I don't think it's critical:\\n\\n- Q8) I'm still unclear about the observations of 'random flip'. The caption for Figure 7 states that it includes the impact of both 'random crop' and 'random flip'. However, the figures only show observations for 'random crop' (Correlation of Random Crop: 0.96, Impact of Random Crop). Please let me know if I have misunderstood something here.\"}" ] }
9UoBuhVNh6
Applications of Modular Co-Design for De Novo 3D Molecule Generation
[ "Danny Reidenbach", "Filipp Nikitin", "Olexandr Isayev", "Saee Gopal Paliwal" ]
De novo 3D molecule generation is a pivotal task in drug discovery. However, many recent geometric generative models struggle to produce high-quality 3D structures, even if they maintain 2D validity and topological stability. To tackle this issue and enhance the learning of effective molecular generation dynamics, we present Megalodon–a family of simple and scalable transformer models. These models are enhanced with basic equivariant layers and trained using a joint continuous and discrete denoising co-design objective. We assess Megalodon’s performance on established molecule generation benchmarks and introduce new 3D structure benchmarks that evaluate a model’s capability to generate realistic molecular structures, particularly focusing on energetics. We show that Megalodon achieves state-of-the-art results in 3D molecule generation, conditional structure generation, and structure energy benchmarks using diffusion and flow matching. Furthermore, we demonstrate that scaling Megalodon produces up to 49x more valid molecules at large sizes and 2-10x lower energy compared to the prior best generative models.
[ "molecule generation", "diffusion", "flow matching", "transformer" ]
Reject
https://openreview.net/pdf?id=9UoBuhVNh6
https://openreview.net/forum?id=9UoBuhVNh6
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q9ZZddmwJy", "pHXgzz0GyM", "kCbX18N6rO", "er2r95ehUp", "bB1DUAMrji", "Yo5yQROFRZ", "RCoAmWo9tw", "N0AYYKHxSt", "Ei7YaCCJTp", "EU68EolaYj", "BFB538fggN", "94rx8sNT2w", "8RqeydkaEF", "79mKe6OzQb", "3JXsOMTzXx", "2T6mipZMUZ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1731721537594, 1732206909464, 1731722335898, 1730731467173, 1730673042648, 1731721753719, 1732562836752, 1731721964332, 1732812656865, 1737523539480, 1731720943466, 1731720505675, 1730599664127, 1732825610359, 1731721103450, 1734874166895 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Reviewer_PoXh" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Reviewer_dknQ" ], [ "ICLR.cc/2025/Conference/Submission2898/Reviewer_PoXh" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Reviewer_nciS" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Reviewer_dknQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Reviewer_nciS" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Authors" ], [ "ICLR.cc/2025/Conference/Submission2898/Area_Chair_qnBB" ] ], "structured_content_str": [ "{\"title\": \"Response to Questions\", \"comment\": \"# **Q1: Subset of MiDi metrics**\\n\\nWe want to clarify that we are using 3D distributional metrics for bond angles and torsion angles in our evaluation. 
In our description, we cited EQGAT-Diff because we used the implementation of these metrics from the EQGAT-Diff repository. However, these metrics are exactly the same as those used in the MiDi paper and several other studies. As for MiDi specifically, we combined connected and valid as there is little practical value in generating undesired molecule fragments when all training data is connected. **We also report diversity, novelty, and uniqueness in line 365** outside of Table 1, given that all of the values for all methods are so close.\\n\\nWe decided not to include the 3D distributional metric for bond lengths. Although Megalodon performs better on this metric, we found it was not particularly informative. The metric yielded values ranging from 0.0015 for Megalodon Large to 0.0042 for Semla Flow. Even though Megalodon outperforms Semla Flow, EQGAT-Diff, and MiDi on this data, it is challenging to interpret what these numbers actually signify. To create a more interpretable comparison, **Table 3** shows that the average bond length difference between the initially generated structures and the GFN2-xTB relaxed structures is around **0.01\\u202f\\u00c5.** This provides a much clearer understanding of the metric's scale and highlights the significance of the changes in bond lengths. By focusing on this metric, we can better assess the practical implications of bond length variations in our models.\\n\\n| Metric | Megalodon FM | Semla FM | Megalodon Large | Megalodon Small | EQGAT-Diff |\\n|------------------------------|--------------|----------|----------|----------|---------|\\n| Bond Length Distributional | 0.002804 | 0.004164 | 0.001510 | 0.004018 | 0.003955 |\\n\\nMoreover, during our previous experiments, we observed that the implementation of these metrics (3D distributional)\\u2014as used by MiDi, EQGAT-Diff, and ourselves\\u2014is not robust to outlier molecules. 
Specifically, a single molecule with a poor 3D structure can significantly affect the results due to how the joint support is defined when computing the cumulative distribution function (CDF) before calculating the Wasserstein distance. Because 3DMG models have reached a level where they learn bond distances quite well (the Wasserstein distance between distributions is on the level of 0.003), this metric is the most sensitive to these outliers. \\n\\n**One of our main goals in the paper was to develop informative and interpretable 3D benchmarks that highlight the value of our results for computational chemists**. Therefore, we focused on metrics that we found most meaningful and did not include every previously used coarse metric. \\n\\n# **Q2: Self-Conditioning: what is it and why?**\\n\\nSelf-conditioning (SC) was introduced by Chen et al. (https://arxiv.org/abs/2208.04202) as a way to condition the denoising network on its predictions to further refine them. Since then, SC has been used in several generative models, including prior molecule generation methods and many image and protein generation models, as discussed in line 304.\\n\\nFollowing the procedure described in Chen et al., SC during training follows Eqn. 9, in which 50% of the batches first undergo unconditional denoising x_sc = model(x_t). We then augment x_t = f(x_sc, x_t) + x_t (where f is a simple MLP), a new design choice to add a residual connection between the model input and self-conditioned output. Finally, we obtain the final prediction, which is x_pred = model(x_t). In the other 50% of batches, we jump straight to x_pred, ignoring x_sc and the augmentation.\\nThe choice of using SC is a hyperparameter. This is analogous to the number of recycling steps in AlphaFold2.\\n\\nIn our experience, SC helps bump results by ~1% for validity and stability. Given that it is easy to implement and adds no inference cost, we use it as part of the main model, in line with prior work. 
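To make the training-time control flow concrete, here is a minimal, framework-agnostic sketch; `model` and `fuse` are hypothetical stand-ins for the denoiser and the simple MLP `f` from Eqn. 9, not the actual Megalodon modules:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_conditioned_step(model, fuse, x_t, t):
    """One training forward pass with 50% self-conditioning."""
    if rng.random() < 0.5:
        # First, an unconditional denoising pass to get the self-conditioning signal.
        x_sc = model(x_t, t)
        # Fuse it back into the noisy input with a residual connection:
        # x_t <- f(x_sc, x_t) + x_t.
        x_t = fuse(x_sc, x_t) + x_t
    # Final prediction; in the other 50% of batches this is the only pass.
    return model(x_t, t)
```

At sampling time, the previous step's prediction can serve as the self-conditioning signal, which is one way to read the "adds no inference cost" remark above.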
\\n\\n# **Q3: Why DiT**\\n\\nWe sincerely apologize for any confusion we may have caused by not distinguishing DiT from the DiT block. We have updated our paper to clarify that we refer to the DiT block, not the entire DiT model. The entire latent image DiT model does have an autoencoder, but we only use the underlying DiT block here.\\n\\nAs discussed in our response to **W2-A**, the DiT-block itself is just a standard transformer with an adaptive layer norm. We chose it to integrate a standard transformer with the conditional time tensor so that our prediction can be conditioned on the time step. DiT shares the same inductive bias as traditional multi-head self-attention and feed-forward blocks.\"}", "{\"comment\": \"Many thanks to the authors for addressing my questions and providing more details.\"}", "{\"title\": \"Response\", \"comment\": [\"We thank the reviewer for finding our work adaptable leading to promising results especially for 3D structures. We also appreciate that our new benchmarks were found to be more aligned with practical applications of molecule design and drug discovery.\", \"We thank the reviewer for their time and understand that novelty of a method can be highly debatable. For this reason we will highlight aspects of our work that we find novel below. **If there are more specific questions we are happy to answer them**\", \"# **Architecture Novelty**\", \"The standard DiT block takes in a singular input tensor, H. To enable the modeling of 3D molecules composed of several continuous and discrete data modalities, we made several critical changes to the architecture, which we refer to as fused-DiT (f-DiT).\", \"As described in Sec. B.1.2, our fused DiT blocks take in (X, H, E, C) for the 3D structure and discrete atom, bond, and charge types. These features are first fused and aggregated in a message-passing-like operation. The multi-head self-attention is then applied to these fused features. 
We note that if the attention is applied to the respective inputs as done in the standard DiT, the model does not converge and outputs 100% invalid molecules with unrealistic structures\", \"For the feed-forward part of the f-DiT block, the processed fused features are aggregated along all pairs of nodes to create new bond features, and they are passed through a linear layer to create new atom and charge features. From here, we apply independent feedforward and adaptive layernorm operations for each data modality (excluding structure since the structure is only used as an input to the f-DiT block) to create updated features for all discrete data modalities.\", \"In short H\\u2019, E\\u2019, C\\u2019 = f-DiT(X,H,E,C)\", \"We emphasize that the only operation maintaining equivariance and updating the structure is the series of EGNN single layers. In comparison, **if we replace the f-DiT operation with standard EGNN non-equivariant feature updates, we see a drop in validity of almost 70%, as shown in Table 1**. From our understanding, this is the first work to obtain such strong results with a simple EGNN-based architecture since EDM + Openbabel.\", \"# **Technical Novelty**\", \"**Our novelty is further rooted in our applications, evaluations, and analysis, which include the introduction of new benchmarks and the reintroduction of a conditional structure generation task.**\", \"Figure 3 introduces molecule size as a new component to unconditional benchmarking in which Megalodon significantly outperforms EQGAT-Diff. We demonstrate this performance can be improved with further scaling of our architecture. **The ability to generate 49x more valid and stable molecules compared to prior SOTA** is a significant result, given that both are trained with identical data and diffusion parameterizations.\", \"**Table 2 demonstrates that off-the-shelf 3D molecule generation models cannot be used for conformer generation. 
In contrast, Megalodon can and surpasses strong conformer baselines due to the use of a compounded time-dependent noise scheduler** described in line 297. This is a novel and quite surprising finding, as we expected all unconditional molecule generation models to be able to conditionally generate structure as they are trained with independent structure and discrete denoising.\", \"Furthermore, when compared to GeoDiff, which also uses an EGNN-based architecture with identical diffusion parameterization, **we demonstrate that unconditional generative pretraining is extremely beneficial in generating better structures in 10x fewer sampling steps**. In other words, learning how to generate the 2D discrete components improves the ability to generate accurate conformers.\", \"We demonstrate that with the **diffusion** objective, Megalodon is capable of generating conformers\\u2014that is, molecules very close to their local minima of the ground truth energy function. The median relaxation energy drop of **3.17\\u202fkcal/mol** approaches the threshold of **2.5\\u202fkcal/mol**, which is often considered the thermodynamically relevant interval. **Furthermore this is 2-10x better than prior methods**. This proximity emphasizes the potential practical value of our method. In contrast, we showed that with the **Flow Matching** objective, the energy drop is an **order of magnitude larger**, highlighting a significant and valuable difference for readers.\", \"**Subsequently, we uncover an efficiency and accuracy tradeoff between FM and DM for 3DMG**. 
FM yields more valid models and can be constrained to very few sampling steps, whereas DM exhibits an order of magnitude better structure measured by the molecular energy.\", \"**Overall**, we are the first to perform a comprehensive analysis of the interplay between the 2D graph and 3D structure during molecular generation in previous methods and generative frameworks(Diffusion vs Flow Matching) and offer potential solutions to improve the dependency between the modalities.\"]}", "{\"summary\": \"In this paper, the authors propose a method for unconditional 3D molecule generation. The proposed approach, called Megalodon, represents molecules with both 3D structure and 2D topology information (atom coordinates and types, bond types and formal charge). The model uses a transformer-based architecture and either diffusion or flow-matching generative model. The proposed approach achieves positive results on experiments on GEOM-drugs dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The task of molecule generation is important and worth investigating (although the utility of _unconditional_ generation can be discussed)\", \"The paper achieved good experimental results on GEOM-drugs dataset.\", \"The paper shows that having a better architecture (ie transformer) and more parameters can help on GEOM-drugs molecule generation.\"], \"weaknesses\": [\"The paper is not very well written and could be improved. In particular, there is a lot of training/evaluation details missing, making reproducibility challenging.\", \"The paper lacks novelty. The paper uses well-stablished generative models (diffusion or flow matching, already used many times on this task) on a single standard dataset (GEOM-drugs).\", \"Many of the parameters choices were made ad hoc. 
It would be great to see some ablation studies to justify many of the architecture choices made by the authors (eg, the self conditioning, the modifications on DiT architecture).\", \"The authors only show results on a single dataset (GEOM-drugs). It would be nice to see results on other datasets to make sure results are generalizable, eg QM9, PubChem3D, or other related tasks that rely on different datasets (eg, structure-condition generation instead of only conformer generation), etc.\", \"The paper misses a lot of references/comparison to related works: eg, GeoLDM (Xu et al, ICML23), VoxMol (Pinheiro et al, NeurIPS23), GeoBFN (Song et al, ICLR24). All these works also explore the problem of unconditional molecule generation. Moreover, the authors wrongly cite MolDiff (Xu et al 23), mentioning that they don't model bonds, while they actually do (L120).\"], \"questions\": [\"Why use only a subset of the metrics proposed by the MiDi paper on Table 1, instead of all the metrics? Also, why ignore the \\\"3D distributional\\\" metrics from MiDi?\", \"Could the authors elaborate more on how the \\\"self-conditioning\\\" is applied? Why the choice of using it vs not using it? What is the contribution of self conditioning?\", \"DiT is a model created to operate on images and much of its inductive bias operates on that data domain. Why did the authors decide to use DiT on their model? What about any other transformer-like architecture? DiT also has an autoencoder to go from pixel to latent space, and it seems that the proposed model does not have that.\", \"From my understanding, the architecture is composed of equivariant and non-equivariant layers (which end up being a non-equivariant model). Is this correct? If so, why this design choice?\", \"Could the authors elaborate on why the proposed model is able to generate conformations from a molecular graph, while EQGAT-Diff does not? 
From my understanding, the only difference between the two models is the neural network architecture, and it seems quite surprising that this makes such a difference on the results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents a transformer-based diffusion and flow matching framework for the co-design of 2D and 3D molecular structures. The coordinate, atom and bond features of the noisy molecule are aggregated through DiT blocks and then used to reconstruct the 3D and 2D structures with an EGNN layer. The authors establish the model architecture for both diffusion and flow matching. The proposed model shows higher generation quality in both manners, especially for larger molecules.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The proposed model shows overall higher performance, especially on the more challenging task of generating larger molecules, while also having better memory efficiency than the previous models.\\n2. The authors perform comprehensive analysis of the interplay between the 2D graph and 3D structure during molecular generation in previous methods, and offer potential solutions to improve the dependency between the modalities. \\n3. This paper also attempts to build a framework adaptable to multiple training methods (diffusion and flow matching), which would be informative for future studies.\\n4. The authors also offer additional benchmark tasks and metrics for evaluating 3D molecule generation.\", \"weaknesses\": \"See Questions.\", \"questions\": \"1. How is equivariance preserved? From Appendix B.1.3, it seems the structure blocks should also take the input coordinates and combine them with the DiT block output to update the structure. 
Intuitively, there should be a skip connection from the input 3D coordinates to the structure blocks. Otherwise the coordinate information would be lost. However, Fig 1 indicates the DiT blocks and structure layers are sequential, where the structure blocks only take the DiT output (which is invariant) to predict the structure. Could the authors clarify on this?\\n2. For conditional generation, how is the 2D graph information provided to the diffusion model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Questions - part two\", \"comment\": \"# **Q4: Equivariance**\\n\\nTheoretically, our model is no different than the original EGNN. We take in equivariant features (structure) and invariant features (atom types). We only care for equivariance to be preserved for the equivariant features which is done with the single EGNN structure updates. \\n\\nThe invariant DiT blocks only update the invariant features analogous to replacing Eqn 6 of Satorras et al. (https://arxiv.org/pdf/2102.09844) use of a standard MLP with a DiT block. Furthermore, our model follows the same proof in Appendix A since we use identical structure updates, and the parameterization of the invariant feature update has no bearing.\\n\\nWe designed our architecture this way since molecule generation is very sensitive to the accuracy of the discrete data components which transformer models excel at. If one bond is off or the atom type is wrong, validity and connectivity are broken. 
Our architecture allows most of the focus to be on learning a good molecule representation and then integrating it into lightweight structure updates.\\n\\n**We stress that our model is equivariant as only equivariant updates directly update the structure data**, and we ensure zero center of mass to prevent bias in the translations following prior work.\\n\\n# **Q5: Why can\\u2019t EQGAT-Diff generate conformers?**\\n\\nWe were quite surprised that EQGAT-Diff could not generate realistic conformers when prompted with the true 2D molecule graph (atom, bond, and charge types). We found that this was because, for the majority of the sampling trajectory, EQGAT generated no bonds and random atom types until the structure prediction started to converge. Although each data modality was being denoised and optimized independently, there was still a learned dependence even with the use of data-like priors. \\n\\n**We discuss in line 294 our solution in which we introduce a change in the training procedure that allows the explicit decoupling of the discrete and continuous learning objectives** by creating an independent time variable while maintaining identical diffusion variance schedules. This is what allows Megalodon to generate conformers, unlike prior 3DMG models.\\n\\n## **Thank you again for engaging in our work! If you have any further concerns or questions, we will be happy to address them.**\"}", "{\"comment\": \"Thanks for the authors' response. I am convinced the empirical performance is significant and I acknowledge the authors' claims on novelties. However, I cannot champion the paper since I still think the technical modifications are rather minor. 
I will keep my current positive score.\"}", "{\"title\": \"Response\", \"comment\": \"We thank the reviewer for highlighting key contributions of Megalodon, including its ability to advance 3D molecule generation (3DMG) into practical applications with improved performance on larger molecules and practical structural benchmarks. We\\u2019re glad that our in-depth analysis of the 2D-3D interplay in molecular generation was found to be thorough and valuable.\\n\\n\\nWe are happy to provide more details on the provided questions below. \\n\\n# **Q1. How is equivariance preserved?**\\n\\nThank you for pointing out the inconsistency in Figure 1, and we apologize for any confusion this may have caused. There is a skip connection seen in Eqn. 17, which we will add to the camera-ready figure. In our architecture, the DiT (Diffusion Transformer) block operates on invariant features such as atom types, charges, bond types, pairwise atom distances, and coordinate norms. The output of the DiT block consists of an updated atom (H), charge (C), and bond (E). In practice, C is concatenated to H but we write it explicitly here to be more clear.\\n\\n- H\\u2019, E\\u2019, C\\u2019 = f-DIT(X,H,E,C)\\n- X\\u2019 = EGNN_X_Only(X, H\\u2019, E\\u2019, C\\u2019) #only x update see Eqn. 17\\n\\nThe structure block then uses these updated atom and bond features to update the coordinates. Since we subtract the center of mass (CoM) from the coordinates, all features processed by the DiT block are invariant under rotations and translations. Therefore, the DiT block updates only invariant features.\\n\\nOur structure block follows the Equivariant Graph Neural Network (EGNN) architecture [Satorras et al., 2021 https://arxiv.org/pdf/2102.09844], as illustrated in the formula in Figure 1. 
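A minimal numpy sketch of such a coordinate-only EGNN update may make the argument concrete; the invariant pairwise weights here are a hypothetical stand-in for whatever the fused features H', E', C' would produce, so this illustrates the equivariance property rather than the exact Megalodon layer:

```python
import numpy as np

def egnn_coordinate_update(X, pair_weights):
    """Residual EGNN-style update: x_i <- x_i + (1/(n-1)) * sum_j w_ij * (x_i - x_j).

    X:            (n, 3) coordinates, assumed already centered (zero CoM).
    pair_weights: (n, n) rotation- and translation-invariant scalars.
    """
    n = X.shape[0]
    rel = X[:, None, :] - X[None, :, :]                 # (n, n, 3) difference vectors
    upd = (pair_weights[:, :, None] * rel).sum(axis=1) / (n - 1)
    return X + upd
```

Because the update is built only from difference vectors scaled by invariant weights, rotating X rotates the output by the same rotation, which is the sense in which only these layers need to preserve equivariance.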
This design ensures that the overall architecture is equivariant, meaning that any rotations or translations applied to the input coordinates result in corresponding rotations or translations in the output coordinates. The skip connections from the input 3D coordinates to the structure block are implicit in the EGNN framework.\\n\\n# **Q2. How is the 2D graph provided to the diffusion model?**\\n\\nTo provide the **2D graph** to the diffusion model, we supply the atom types (**H**), bond types (**E**), and charge types (**C**) as fixed inputs, while the coordinates (**X**) are generated by the model.\\n\\nIn standard diffusion and flow matching models that directly operate on bonds, the initial inputs H,E,C,X are generated from a prior distribution. At each diffusion step t, the model takes the noised versions of these variables Ht,Et,Ct,Xt\\u200b and predicts the ground truth values.\\n\\nTo enable conditional generation, we modified the training process so that for a fraction of the time the model is conditioned on the ground truth H,E,C, which are supplied to the model as one-hot vectors, and we use RDKit to compute adjacency matrix, bond orders, atom types and formal charges. This means that during conditional generation, the model receives the fixed 2D graph (represented by H,E,C) and the noised coordinates Xt\\u200b, and it predicts the denoised coordinates X.\\n\\nIn summary, the model takes the fixed atom types, bond types, and charges as inputs and generates the corresponding 3D coordinates, effectively performing conditional generation based on the provided 2D molecular graph.\\n\\n\\n## **If you have any future concerns or questions, we will be happy to address them.**\"}", "{\"comment\": \"I appreciate the authors' rebuttal and the clarifications. I also acknowledge the good empirical results on GEOM-drugs. Although this does not necessarily come as much of a surprise given it has more parameters than the compared baselines. 
I still find the paper lacks novel technical contributions: the architecture (modulo minor modifications), the data representation, and the generative models utilized have already been used in many similar applications. For these reasons, I keep my current rating.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response-2\", \"comment\": \"# **W2-B: Lacks Novelty**\\n\\n**Our novelty is further rooted in our applications, evaluations, and analysis, which include the introduction of new benchmarks and the reintroduction of a conditional structure generation task.**\\n - Figure 3 introduces molecule size as a new component to unconditional benchmarking in which the Megalodon significantly outperforms EQGAT-diff. We demonstrate this performance can be improved with further scaling of our architecture. **The ability to generate 49x more valid and stable molecules compared to prior SOTA** is a significant result, especially given that both are trained with identical data and diffusion parameterizations.\\n - **Table 2 demonstrates that off-the-shelf 3D molecule generation models cannot be used for conformer generation. In contrast, Megalodon can and surpasses strong conformer baselines due to the use of a compounded time-dependent noise scheduler** described in line 297. This is a novel and quite surprising finding, as we expected all unconditional molecule generation models to be able to conditionally generate structure as they are trained with independent structure and discrete denoising.\\n- Furthermore, when compared to GeoDiff, which also uses an EGNN-based architecture with identical diffusion parameterization, **we demonstrate that unconditional generative pretraining is extremely beneficial in generating better structures in 10x fewer sampling steps**. 
In other words, learning how to generate the 2D discrete components improves the ability to generate accurate conformers.\\n- We demonstrate that with the **diffusion** objective, Megalodon is capable of generating conformers\\u2014that is, molecules very close to their local minima of the ground truth energy function. The median relaxation energy drop of **3.17\\u202fkcal/mol** approaches the threshold of **2.5\\u202fkcal/mol**, which is often considered the thermodynamically relevant interval. **Furthermore this is 2-10x better than prior methods**. This proximity emphasizes the potential practical value of our method. In contrast, we showed that with the **Flow Matching** objective, the energy drop is an **order of magnitude larger**, highlighting a significant and valuable difference for readers.\\n- **Subsequently, we uncover an efficiency and accuracy tradeoff between FM and DM for 3DMG**. FM yields more valid models and can be constrained to very few sampling steps, whereas DM exhibits an order of magnitude better structure measured by the molecular energy. \\n- We further show step size ablations in Table 5, demonstrating that our model continuously outperforms baselines at reduced step sizes.\\n\\n**Overall**, we are the first to perform a comprehensive analysis of the interplay between the 2D graph and 3D structure during molecular generation, as well as the choice in generative frameworks(Diffusion vs Flow Matching) via interpretable and informative energy benchmarks.\\n\\n# **W3: parameter choices are made ad hoc**\\n\\nAs discussed in line 304, self-conditioning has been heavily explored in several generative models, including prior molecule generation methods like SemlaFlow. Given this, our base model includes self-conditioning, which we define in Equation 9. \\n\\nAs for the impact of self-conditioning, it is very subtle, accounting for roughly 1% performance boosts for molecule stability and validity. 
Both of these metrics, without self-conditioning, still outperform all prior methods. We conduct all further experiments, including conformer generation and the structure-energy benchmarks, with the models trained with self-conditioning as done in SemlaFlow, given they perform better and incur no increase in inference cost.\\n\\nAs for the choices with the f-DiT architecture, without the specific fusing discussed in Appendix B.1.2, the model cannot converge and cannot generate any molecules. Outside of the fusing operation, the only change from the standard DiT block is to create pairwise bond features and have parallel and independent feed-forward updates for bond and atom-type features. We stress that the architecture is very similar to the original DiT. Adaptations were required to work on multiple data types simultaneously, and these choices cannot be easily ablated without changing the task definition of a 3D molecule (i.e. removing bonds or atom type prediction). We emphasize that when we removed the fusing operation or the independent bond feature updates, the model broke.\\n\\n\\nFor these reasons, we provide architecture ablations specifically with DiT vs EGNN seen in Table 1, which shows a 70% drop in validity when DiT is replaced. \\nAll other hyperparameters were selected based on prior work, and neither model size nor any other hyperparameters were specifically optimized. The differences between Megalodon small and large stem from choices to reduce the parameter size and were not chosen based on any benchmarks.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you very much for your effort in reviewing our paper and your feedback. We appreciate the depth of the review and give our best effort to answer all questions.\\n\\n# **W1: Missing training and evaluation details**\\n\\nWe provide links to the corresponding code bases from which we used identical training setups for the diffusion and flow matching models. 
We do this to create a true 1:1 comparison with both EQGAT-Diff and SemlaFlow, with the only change being the architecture we describe in Appendix Section B, which includes all key equations and hyperparameters. This allows us to use the same data loaders and generative model setup as each method, which is important as SemlaFlow filters out all molecules with greater than 75 atoms. We also provide all evaluation details and use standard DDPM SDE and flow matching ODE sampling as done in prior work.\\n\\nOutside of the training and evaluation details that we clarified above, are there any sections of the paper that were not clear?\\n\\n# **W2-A: Architecture Novelty**\\n\\nOur work is more than applying diffusion (DM) and flow matching (FM) for unconditional molecule generation.\\n\\nFirst, we want to clarify that DiT Block, while used for images in the original paper, does not have any specific inductive bias for images. As discussed here https://www.wpeebles.com/DiT, the DiT block is just a standard transformer block with an adapted layer norm to handle the conditioning input. It was used for latent image diffusion, but that use case has no bearing on how we augment it for molecule generation.\\n\\nWe emphasize that we deconstructed molecule generation into simultaneous equivariant structure prediction and non-equivariant discrete data prediction (atom, bond, and charge types). This is reflected in our architecture design, which uses an augmented DiT to model the discrete data and then simple EGNN layers to update the structure. Figure 1 illustrates how the model comprises N segments of a DiT block followed by a simple structure update. We chose this architecture to enable better modeling of discrete data via the multi-head self-attention as the discrete data accuracy determines molecule stability and validity.\\n\\nThe standard DiT block takes in a singular input tensor, H. 
To enable the modeling of 3D molecules composed of several continuous and discrete data modalities, we made several critical changes to the architecture, which we refer to as fused-DiT (f-DiT).\\n - As described in Sec. B.1.2, our fused DiT blocks take in (X, H, E, C) for the 3D structure and discrete atom, bond, and charge types. These features are first fused and aggregated in a message-passing-like operation. The multi-head self-attention is then applied to these fused features. We note that if the attention is applied to the respective inputs as done in the standard DiT, the model does not converge and outputs 100% invalid molecules with unrealistic structures.\\n- For the feed-forward part of the f-DiT block, the processed fused features are aggregated along all pairs of nodes to create new bond features, and they are passed through a linear layer to create new atom and charge features. From here, we apply independent feed-forward and adaptive layernorm operations for each data modality (excluding structure, since the structure is only used as an input to the f-DiT block) to create updated features for all discrete data modalities. \\n - In short, H\\u2019, E\\u2019, C\\u2019 = f-DiT(X, H, E, C)\\n - We emphasize that the only operation maintaining equivariance and updating the structure is the series of EGNN single layers. In comparison, **if we replace the f-DiT operation with standard EGNN non-equivariant feature updates, we see a drop in validity of almost 70%, as shown in Table 1**. From our understanding, this is the first work to obtain such strong results with a simple EGNN-based architecture since EDM + Openbabel.\"}", "{\"summary\": \"This paper proposes Megalodon, a transformer-based model for 3D molecule generation. Megalodon is a modular approach with both diffusion and flow matching objectives that aim to improve 3D structure prediction and validity. 
The authors conducted experiments on existing benchmarks such as GEOM Drugs and introduced new metrics such as xTB relaxation error. The results indicate Megalodon outperforms existing methods in molecule stability, validity, and energy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed Megalodon is an adaptive architecture that can be adapted with both diffusion and flow matching objectives.\\n\\n2. The newly introduced benchmarks including energy-based and 3D structure-based assessments are more aligned with practical applications in molecular design and drug discovery.\\n\\n3. The experimental results are promising, especially along the metrics related to 3D structures.\", \"weaknesses\": \"My major concern is the limited technical novelty. The network architecture of Megalodon uses standard DiT models, with only minor modifications such as the structure layer. While the authors introduce a combination of diffusion and flow matching objectives, this integration alone does not constitute a major advancement, as flow matching is a theoretically more general framework than diffusion. There is no surprise that these two objectives can be used in one framework.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank you for your feedback. 
While we disagree that our work lacks novelty, given that we enable multi-variable message passing in the DiT block (which was not a minor modification), achieve high-quality unconditional generation and high-quality conformer generation in the same model for the first time, and introduce new interpretable benchmarks, we thank you for asking us questions and making our work stronger.\\n\\nAs for diffusion and flow matching, we know they have been used before, but we are the first to study them together using the same architecture and biological task at the same time. We find the physical accuracy vs. efficiency trade-off between diffusion and FM models quite interesting and potentially meaningful for other applications of these frameworks outside of small molecule design.\\n\\nOverall, we have answered all questions and addressed all concerns that can be addressed in the rebuttal period. We understand novelty is highly debated and value your feedback.\"}", "{\"title\": \"Response-3\", \"comment\": \"# **W4: Single Dataset**\\n\\nWe agree that structure-conditioned generation is interesting future work and something for which we can fine-tune the existing Megalodon models in the future. \\n\\nOne of our primary research focuses was to deeply understand the challenges of modeling the GEOM-Drugs dataset, which has been proven to be significantly more difficult than QM9 in several prior conformer and molecule generation tasks discussed in Sec. 2.3. For these reasons, we chose to exclude QM9 and concentrate on the more complex and realistic GEOM-Drugs dataset to push the boundaries of current methodologies.\\n\\nRegarding PubChem3D, although it is an extensive resource, its conformers are generated using OpenEye OMEGA, which rapidly produces 3D structures (approximately 0.1 seconds per conformer on a single core compared to GEOM\\u2019s CREST 90 core-hours) through algorithmic methods. 
However, these conformers are not necessarily at energy minima and may not represent stable forms. Since we focus on generating conformers and estimating their stability\\u2014by measuring changes when relaxing to the closest local minima\\u2014the PubChem3D dataset is less suitable for our purposes than the GEOM generation procedure. Consequently, the structures in PubChem3D are less accurate for our objectives and would not provide meaningful energy measurements for our analyses.\\n\\nGiven that our work introduces chemically grounded and interpretable structural benchmarks, we plan to extend our approach to applicable datasets beyond GEOM in the future. This will further validate the robustness and generalizability of our models across diverse molecular datasets. Overall, this work provides a strong initial understanding of what models can and cannot do well.\\n\\n# **W5: Missing related Work**\\n\\nWe thank the reviewer for bringing this to our attention. We have adapted our related work to discuss these methods as we agree they are valuable in unconditional molecule generation.\\n\\nWe have fixed our typo in citing MolDiff and have included comparisons below from the values taken from their paper. We also note these comparisons are not directly 1:1 as MolDiff removed five element types. MolDiff also does not report stability for their method with hydrogens. We focus on generation with explicit hydrogens as done in prior baselines.\\n\\n| Metric | MolDiff | Megalodon FM | Megalodon Large |\\n|---------------------|---------|--------------|------------------|\\n| Connected validity | 0.739 | 0.948 | 0.927 |\"}", "{\"metareview\": [\"The paper proposes a molecular generation framework that utilizes diffusion and flow matching on transformers to generate molecular structures. 
The empirical results presented focus on the GEOM-drugs dataset, where the model reportedly outperforms existing baselines.\", \"***Strengths:***\", \"The proposed model shows superior performance in generating larger molecules and demonstrates better memory efficiency compared to prior models.\", \"The paper provides an analysis of the interaction between 2D graph and 3D structure in molecular generation, proposing improvements on existing methods.\", \"The framework is adaptable to diffusion and flow matching, which could be beneficial for future research in the field.\", \"Introduction of additional benchmarks and metrics for 3D molecule evaluation.\", \"***Weaknesses:***\", \"The technical novelty is limited as the architecture largely employs established models with minor modifications.\", \"The study's findings are based solely on the GEOM-drugs dataset, limiting the generalizability of the results.\", \"The paper suffers from writing issues and lacks sufficient training and evaluation details.\", \"Lack of comparisons with relevant recent works.\", \"We appreciate the authors' efforts in submitting their rebuttal and providing additional explanations. Some of the issues, e.g. writing clarity, are alleviated. However, two major concerns were not addressed well in the reviewers' further replies:\", \"The architecture and method used are not sufficiently novel.\", \"Broader testing across varied datasets is expected to test the generalization ability of the method.\", \"In conclusion, while the paper demonstrates some strengths in performance and analytical depth, these do not outweigh the significant issues related to novelty and generalizability. These factors are critical for a contribution to be considered significant enough in ICLR. 
After careful discussion and consideration, we regret to inform the authors that this paper is not accepted in this form.\"], \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the primary points of discussion centered around the novelty of the paper and the scope of its experimental validation. All three reviewers share the concern that the novelty is not significant enough. Both Reviewer *dknQ* and Reviewer *PoXh* highlighted the importance of extending the empirical results to more datasets and tasks to strengthen the paper's claims. The authors\\u2019 responses focused on justifying their method choices and clarifying the implications. However, they did not substantially address the limitations regarding dataset diversity and empirical breadth, which is a key factor in the overall decision to recommend rejection.\"}" ] }
9UGfOJBuL8
Conditional Diffusion with Ordinal Regression: Longitudinal Data Generation for Neurodegenerative Disease Studies
[ "Hyuna Cho", "Ziquan Wei", "Seungjoo Lee", "Tingting Dan", "Guorong Wu", "Won Hwa Kim" ]
Modeling the progression of neurodegenerative diseases such as Alzheimer’s disease (AD) is crucial for early detection and prevention given their irreversible nature. However, the scarcity of longitudinal data and complex disease dynamics make the analysis highly challenging. Moreover, longitudinal samples often contain irregular and large intervals between subject visits, which underscore the necessity for advanced data generation techniques that can accurately simulate disease progression over time. In this regime, we propose a novel conditional generative model for synthesizing longitudinal sequences and present its application to neurodegenerative disease data generation conditioned on multiple time-dependent ordinal factors, such as age and disease severity. Our method sequentially generates continuous data by bridging gaps between sparse data points with a diffusion model, ensuring a realistic representation of disease progression. The synthetic data are curated to integrate both cohort-level and individual-specific characteristics, where the cohort-level representations are modeled with an ordinal regression to capture longitudinally monotonic behavior. Extensive experiments on four AD biomarkers validate the superiority of our method over nine baseline approaches, highlighting its potential to be applied to a variety of longitudinal data generation.
[ "neurodegenerative disease", "conditional diffusion model", "longitudinal data analysis" ]
Accept (Spotlight)
https://openreview.net/pdf?id=9UGfOJBuL8
https://openreview.net/forum?id=9UGfOJBuL8
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qya5YA6trZ", "qNpcNOGNiU", "ogbZ5Vb2KF", "dvVCnIJqWU", "ca6W9MvMxR", "VWewotCps5", "Sjr5xLsQDx", "RFdWyv7p1Z", "MCM4P1jfE9", "K7S1InyAQJ", "A5ZkBBguIa", "88sgOjGAIn", "5QhZPynWxj", "4wuXiQ5rZ7", "3KnppdPQtO", "1Pv8xmVgMM" ], "note_type": [ "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1737523494346, 1732694918107, 1734591575016, 1733158493495, 1732178120168, 1730551692637, 1732179426830, 1732178471548, 1732178898194, 1732180941737, 1733105424636, 1732182450185, 1742277625694, 1730676153078, 1732181157111, 1730706234193 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Area_Chair_LSRm" ], [ "ICLR.cc/2025/Conference/Submission2260/Reviewer_xa3Q" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Reviewer_jkdK" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Reviewer_jkdK" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "~Hyuna_Cho1" ], [ "ICLR.cc/2025/Conference/Submission2260/Reviewer_xa3Q" ], [ "ICLR.cc/2025/Conference/Submission2260/Authors" ], [ "ICLR.cc/2025/Conference/Submission2260/Reviewer_rUXZ" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe hope that our rebuttal has addressed your concerns and now provides a clearer understanding of our work based on your reviews. 
\\nWe appreciate your comments, and should you have any additional questions or require further clarification, please do not hesitate to let us know. \\nWe hope to contribute to the ICLR and broader ML/Neuroscience community by having this paper published.\"}", "{\"metareview\": \"The paper proposes ConDOR, a novel conditional generative model for synthesizing longitudinal sequences and present its application to neurodegenerative disease data generation conditioned on multiple time-dependent ordinal factors, such as age and disease severity. The synthetic data are curated to integrate both cohort-level and individual-specific characteristics, where the cohort-level representations are modeled with an ordinal regression to capture longitudinally monotonic behavior. Extensive experiments are conducted. In sum: the algorithm is innovative, the problem comes from real-world challenges from the broader ML/Neuroscience community, and the experimental results are convincing. After the rebuttal stage, all the reviewers have unanimously supported this paper.\\n\\nPC/SAC/AC and all reviewers will monitor this submission to ensure the reproductivity of this work - the authors promised to release codes of their methods and baseline methods along with their pre-trained models publicly. Raw data and/or descriptions of how to access the data in Appendix A are also required by the ICLR community.\", \"additional_comments_on_reviewer_discussion\": \"The authors have sufficiently addressed most comments from all the reviewers. Reviewers have increased their scores accordingly during the discussions.\"}", "{\"comment\": \"Thank you for answering my questions and I have increased my score.\"}", "{\"title\": \"Response to Reviewer rUXZ (Part 1)\", \"comment\": \"We thank the reviewer for your effort in reviewing the manuscript and highly valuing our research.\\n\\nW1 & Q2 & Q3) Theoretical comparison of the proposed method with existing generative models. 
Comparison of the dual-sampling method with baseline models. \\n\\nA) (1) To the best of our knowledge, existing studies for longitudinal degenerative disease analyses consider only observed conditions at static time points, which may fail to represent the irregular intervals and irreversible characteristics of neurodegenerative data. However, our method introduces fine-grained, interpolated time-dependent conditions (e.g., age $a^d_t$ and diagnostic labels $y^d_t$) that dynamically adapt to the irregular sampling of time points. \\n\\n(2) Also, we introduced a novel method of incorporating the ordinal regression model into the diffusion model, which allows the model to effectively handle the ordinality of conditions inherent in the data. These mechanisms make the model robust in dealing with temporal complexity and generating realistic and biologically plausible disease trajectories.\\n\\n(3) Moreover, unlike existing methods that primarily model either global trends (cohort-level information) or individual-specific features (subject-level information), our dual-sampling approach explicitly integrates both. This design ensures that the model captures the nuanced interplay between shared population-level dynamics and unique individual trajectories, which is critical for representing longitudinal data. \\n\\nSpecifically, the dual-sampling approach in our method is designed to $\\\\textit{balance}$ individual-specific features and general trends within longitudinal neurodegenerative data. Unlike our method, the nine baseline approaches (e.g., CTGAN, TabDDPM, SMOTE, etc) we used in our comparisons do not explicitly separate individual features from the global-level features during their feature extraction or generation processes. Instead, these methods primarily focus on either global distribution modeling or sample-level characteristics, which may limit their ability to represent the nuanced interplay between individual variability and population-level patterns. 
For example, CTGAN and TabDDPM focus on balancing multimodal distributions within tabular data (i.e., brain regional features in our experiments) across the whole dataset, without considering personalized variability. On the other hand, SMOTE considers only sample-wise features during generation as it synthesizes new data by combining a real data point and its $k$-nearest neighbors. \\n\\nW2) More detailed descriptions of the dual-sampling method are needed. \\n\\nA) We thank the reviewer for pointing out this aspect. We acknowledge that the description of the integration of the cohort-level and subject-level samples in Section 2.3.2 could benefit from further details. Due to the page limit, it was challenging to provide a more extensive explanation in the main text without compromising other critical aspects of the paper. However, to address this, we plan to make the official code publicly available, including detailed documentation and step-by-step instructions to ensure reproducibility. \\n\\nW3) Lack of statistical analysis (e.g., confidence intervals or significance testing).\\n\\nA) Thank you for the suggestion. While we have reported the means and standard deviations of all results from multiple runs to demonstrate model generalizability, we understand that adding additional statistical analyses could further enhance the robustness of the results. However, we had difficulty in determining which specific statistical tests would be most appropriate to further analyze the results, so we would appreciate any specific recommendations from the reviewer regarding statistical techniques that could best address this concern.\"}", "{\"summary\": \"The paper presents ConDOR, a novel conditional diffusion model for generating longitudinal neurodegeneration data with ordinal disease progression factors. The model's architecture integrates both cohort-level and subject-level characteristics through a dual-component approach. 
At the cohort level, it employs Bayes' Theorem combining an ordinal regression model (capturing disease stage relationships) with a kernel-based conditional distribution. For data generation, ConDOR utilizes two diffusion models: a Regional Diffusion Model (RDM) for generating baseline measurements across brain regions, and a Temporal Diffusion Model (TDM) for generating subsequent longitudinal data. The model also incorporates a domain conditioning mechanism to integrate data from multiple sources. The authors evaluate ConDOR on multiple biomarkers (Amyloid, Cortical Thickness, and Fluorodeoxyglucose) from two prominent neurodegenerative disease datasets (ADNI and OASIS), comparing against nine baseline methods including GANs, VAEs, and other diffusion-based approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed model captures both spatial and temporal features through a combination of a Regional Diffusion Model and a Temporal Diffusion Model.\\n2. This new generative model addresses challenges associated with sparse, irregular, and widely spaced intervals in medical data.\\n3. The model strikes a balance between cohort-level and individual-level fitting, capturing generalized population trends while accommodating individual variability.\\n4. It introduces a novel integration of ordinal regression with diffusion models.\\n5. The experiments are comprehensive, with comparisons to nine baseline models, including GANs, VAEs, and other diffusion-based models, evaluated across three metrics. Implementation time is also compared.\\n6. The model is extended to a multi-domain setting, enhancing its generalizability and applicability to different data sources.\", \"weaknesses\": \"1. The ordinal regression model might oversimplify the disease progression process. 
Additionally, the temporal diffusion relies on linear interpolation for temporal transitions, which may not accurately capture realistic disease dynamics.\\n2. There is a lack of comparison with traditional longitudinal baseline models commonly used in medical literature.\\n3. The model evaluation has not been clearly described. Did the authors split subjects into training and test sets, keeping all observations from each subject together, or did they split individual observations, potentially placing different time points from the same subject in both training and test sets?\\n4. The reproducibility of this work is not guaranteed, as the code has not yet been made available.\", \"questions\": \"1. The model evaluation lacks clarity regarding whether the authors performed a subject-level or observation-level split. Specifically, did they keep all observations from each subject together, or did they split individual observations, potentially including different time points from the same subject in both training and test sets? It would be valuable to see how well the model predicts follow-up scans based on data from earlier time points, given that the Temporal Diffusion Model is a novel component. Additionally, for baseline models like DDPM that lack a temporal component, it would be interesting to understand how the authors utilize these models to generate follow-up scans over time.\\n2. The Temporal Diffusion Model uses linear interpolation to model progression in age and labels, which may not be ideal, as transitions between disease states are often abrupt or follow complex patterns. Furthermore, it would be beneficial to see theoretical proof that such linear interpolation preserves the properties of diffusion models.\\n3. 
Including some directions for future work in the conclusion would be beneficial for the research community.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer xa3Q (Part 2)\", \"comment\": \"Q2) How to choose $\\\\lambda$? No validation set to optimize $\\\\lambda$.\\n\\nA) As shown in Table 3, we performed a grid search to find optimal $\\\\lambda$ in {0.1, 0.3, 0.5, 0.7, 0.9} for all datasets. The best $\\\\lambda$ for each dataset was chosen based on the smallest Wasserstein Distance (WD). Upon closer inspection of the results in Table 3, we believe that the reviewer can easily see that the overall performance differences are marginal depending on the choice of $\\\\lambda$. While slight variations exist, the results of the smallest (0.1) and largest (0.9) $\\\\lambda$ still consistently outperform all baseline methods in the single-domain learning experiments. \\n\\nWe acknowledge the reviewer\\u2019s concern about not using the validation set, as we had the same consideration before conducting experiments. However, due to the limited size of data (e.g., around 150 ~ 700 data are available), we prioritized maximizing the training data by splitting the whole data into an 8:2 ratio for training and testing. To mitigate the potential for biased results, we trained our model and all baseline models 3 times from scratch on the training set and evaluated each on the test set, and the final results of each method are averaged across the 3 replicates. This experimental setting with 8:2 data split and reporting the average performance of three runs is well-established in the literature on generative methods with small samples (100 ~ 600 samples). 
For example, recent studies on generative models such as [2] adopt the same experimental setups to ensure model stability and reliability in small-sample scenarios.\\n\\n[2] Jo et al, \\u201cScore-based Generative Modeling of Graphs via the System of Stochastic Differential Equations\\u201d, ICML 2022\\n\\nQ3) Are RDM and TDM trained separately or jointly? How was $D$ chosen?\\n\\nA) The RDM and TDM are trained separately by using separate loss functions $L_\\\\text{RDM}$ and $L_\\\\text{TDM}$, respectively. After training both models, RDM first generates baseline samples in the sampling process. These generated baseline samples are inputted to TDM and TDM yields follow-up samples by estimating the difference between the baseline and follow-up samples. \\n\\nRegarding the selection of $D$, we clarify the rationale for choosing $D$ through this rebuttal, while all hyperparameter settings including $D$ are provided in Table 6. For the RDM, $D$ was set to 1000, following the number of diffusion steps used in DDPM (Ho et al., NeurIPS 2020). In contrast, $D$ was set to 100 for the TDM due to the different relationship between noise and $D$. Unlike RDM whose noise $\\\\epsilon^d_t \\\\sim N(0, 1)$ is independent of the diffusion step $D$, the $\\\\epsilon_\\\\phi \\\\approx \\\\Delta x_t^d$ of TDM highly depends on the $D$ as a larger $D$ results in smaller $\\\\Delta x_t^d$. Due to this dependency, the large $D$ (i.e., small $\\\\Delta x_t^d$) of TDM leads to a gradient explosion as the $\\\\epsilon_\\\\phi$ has to estimate excessively tiny values that are close to zero. To address this issue, we progressively reduced the $D$ of TDM until the training stabilized, and empirically observed that $D=100$ was sufficient to avoid the gradient explosion of TDM.\"}", "{\"title\": \"Response to Reviewer rUXZ (Part 2)\", \"comment\": \"W4 & Q5) Lack of discussion of limitations and future works. 
Are there any specific scenarios under which the proposed method might underperform?\\n\\nA) Thank you for your valuable feedback. We added the \\u2018Limitation and Future Work\\u2019 section in Appendix C, so please refer to it for a detailed discussion of potential improvements and future directions. We hope this addition addresses the reviewer\\u2019s concerns and contributes to advancing medical data analysis.\\n\\nAs outlined in the appendix, we acknowledge several weaknesses and scenarios where our method might underperform. For example, compared to one-shot generative methods such as TVAE and GANs, our autoregressive approach has a longer generation time since samples are generated sequentially. The gap in the sampling time becomes significantly larger as the sequence length (i.e., the number of time points) becomes extended. \\n\\nAdditionally, our method may face challenges in scenarios where labels are abruptly reversed over time. In such cases, the interpolated label $y^d_t$ may not represent a feasible disease severity and could deviate from the range of given observed labels, which would likely degrade training stability and model performance. In our experiments, we confirmed that all sequence labels are monotonically deteriorating, so that the $y^d_t$ in Eq. 10 is defined under the strict assumption of ordinal transitions.
These conditions, defined at unobserved time points (i.e., at diffusion step $d$ between observed $t-1$ and $t$), enable the method to estimate unobserved sample points from the conditional PDF $f_{X|A, Y}(x_t|a_t, y_t)$. For example, consider a subject with two time points where labels change from Cognitively Normal (CN) to Alzheimer\\u2019s Disease (AD). Existing baseline methods only utilize these two observed labels, overlooking the gradual transition of diagnostic labels between them. In contrast, our method accounts for all intermediate labels, reflecting the irreversible and progressive deterioration from CN to AD over time. Additionally, our method incorporates intermediate ages spanning the interval from $t-1$ to $t$. This mechanism allows our model to capture biologically plausible disease trajectories across arbitrary intervals, addressing the limitations of existing methods in handling complex and irregular temporal dynamics of neurodegenerative diseases. \\n\\nQ3) Any potential biases introduced by the dual-sampling method?\\n\\nA) While the dual-sampling approach in our method effectively balances both cohort and subject-level information, it may introduce biased results if the trade-off hyperparameter $\\\\lambda$ is not appropriately tuned. The ablation studies on the $\\\\lambda$ in Table 3 indicate that the bias is not critical in general; however, considering the inefficiency of fine-tuning for every dataset, we plan to develop adaptive mechanisms that dynamically adjust the $\\\\lambda$ based on dataset characteristics, ensuring optimal balance between cohort and subject-level contributions.\\n\\nQ4) How representative are the four biomarkers for a broader disease population? Any limitations in the experimental design?\\n\\nA) The four AD biomarkers used in the experiments (i.e., cortical thickness, SUVR of Amyloid, FDG, and Tau) are widely recognized and clinically validated biomarkers for identifying AD [1, 2]. 
However, these biomarkers may not fully represent the broader spectrum of neurodegenerative diseases, such as Parkinson\\u2019s or Huntington\\u2019s disease, which involve different biological mechanisms. Additionally, the AD datasets utilized in our experiments primarily consist of participants from the US, potentially limiting the model\\u2019s applicability to populations with different genetic, environmental, or lifestyle factors. This lack of population diversity could affect the generalizability of the findings across broader demographics.\\n\\n[1] Querbes et al. \\\"Early diagnosis of Alzheimer's disease using cortical thickness: impact of cognitive reserve.\\\" Brain, 2009\\n\\n[2] Jack et al, \\\"Biomarker modeling of Alzheimer\\u2019s disease.\\\" Neuron, 2013\"}", "{\"title\": \"Response to Reviewer xa3Q (Part 1)\", \"comment\": \"We thank the reviewer for your time and thoughtful comments. We address all concerns raised by the reviewer, and we hope that the reviewer reconsiders the score favorably towards acceptance.\\n\\nW1) Why are the variances of DDPM results higher than those of other baselines?\\n\\nA) We acknowledge that the reported standard deviations of DDPM are generally higher compared to other methods, as shown in Table 1. This higher standard deviation likely arises from the sensitivity of DDPM to hyperparameter tuning, particularly the learning rate and parameter initialization, which can impact its convergence behavior. To find the optimal hyperparameters for DDPM, we performed a grid search to select the optimal learning rate of DDPM from {0.002, 0.015, 0.001, 0.0008, 0.0005} as we did for ConDOR. For each learning rate, we ran three independent trainings from scratch with different parameter initializations and reported the best average results. 
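The selection protocol just described (grid search over learning rates, several independent runs per rate, report the best mean with its standard deviation) can be sketched as follows; the numbers below are made up for illustration:

```python
import statistics

# Hypothetical sketch: pick the learning rate with the best (lowest) mean
# metric across independent runs; the reported std comes from those runs.
def select_best_lr(results):
    # results: {learning_rate: [metric from each run]}
    best_lr = min(results, key=lambda lr: statistics.mean(results[lr]))
    runs = results[best_lr]
    return best_lr, statistics.mean(runs), statistics.stdev(runs)

lr, mean, std = select_best_lr({0.001: [17.7, 17.5, 17.9], 0.0005: [15.0, 16.0, 17.0]})
```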
\\n\\nWhile the reported mean represents the best overall performance, the accompanying standard deviation reflects the variability across these multiple training runs at the optimal learning rate (0.0005). In contrast, for other learning rates, the standard deviation was considerably smaller, but the mean performance was not as strong. For example, in the Amyloid experiment with learning rate=0.001, the DDPM results were 17.69 ($\\\\pm$ 1.199), 1.02 ($\\\\pm$ 0.050), and 0.07 ($\\\\pm$ 0.009) for WD, RMSE, and JSD, respectively, showing significantly lower standard deviations but poorer mean performance compared to the optimal setup.\\n\\nMoreover, the standard deviation can be affected by metrics and datasets. For example, in the FDG experiment, the standard deviation of TVAE on RMSE (0.019) and GOGGLE on JSD (0.002) were higher than those of DDPM. Also, in the Tau experiment, CRow and CTGAN showed 3.2 and 1.8 times larger standard deviation on JSD than that of DDPM. These results suggest that other baseline methods can also have high standard deviations depending on the selected metrics and data characteristics.\\n\\nW2) The authors did not mention how to use the model for sampling new data. The proposed model cannot use conditions during sampling. \\n\\nA) We are sorry but we think there is a misunderstanding. In the sampling process, we used a trained $\\\\textbf{conditional}$ U-Net ($\\\\mu_{\\\\theta}$) to generate unseen samples. In Eq. 8 of the original manuscript (in line 229), we mentioned that the $\\\\mu_{\\\\theta}(x^d_t, a_t, y_t, d)$ takes conditions (i.e., age $a_t$ and disease severity $y_t$). Using a conditional U-Net to implement a conditional diffusion model is the convention [1], and we followed this existing method for conditional sample generation. All quantitative and qualitative results in the paper were derived from the trained $\\\\mu_{\\\\theta}(x^d_t, a_t, y_t, d)$. 
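A hypothetical sketch of how a trained conditional denoiser such as $\\mu_{\\theta}(x^d_t, a_t, y_t, d)$ is used at sampling time (the reverse loop and the toy denoiser below are assumptions for illustration, not the paper's code):

```python
# Hypothetical sketch (not the paper's code): reverse sampling with a
# conditional denoiser that sees the conditions (age, label) at every step.
def conditional_sample(mu_theta, noise, age, label, D):
    x = noise
    for d in range(D, 0, -1):           # reverse diffusion steps D, ..., 1
        x = mu_theta(x, age, label, d)  # conditions are inputs at each step
    return x

# toy stand-in denoiser that nudges x toward a condition-dependent target
toy_mu = lambda x, a, y, d: x + ((a + y) - x) / d
sample = conditional_sample(toy_mu, noise=0.0, age=1.0, label=2.0, D=3)
```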
To enhance clarity, we revised line 233 by explicitly mentioning \\u2018conditional U-Net\\u2019 instead of \\u2018U-Net\\u2019. \\n\\n[1] Rombach et al, \\u201cHigh-Resolution Image Synthesis With Latent Diffusion Models\\u201d, CVPR 2022\\n\\nQ1) Why $x_t (t=1, \\u2026, T)$ were used for training RDM? Will this setting cause the RDM to be biased?\\n\\nA) To secure sufficient training data, we used all $t=1,...,T$ to train the RDM although the RDM aims to generate baseline time point samples. If only $x_1$ samples were used for training, only 142, 549, 542, and 132 training samples could be used from the ADNI cortical thickness, Amyloid, FDG, and Tau datasets, respectively, and 25 samples could be used for the OASIS dataset. However, if all time point samples are used, at least double samples are secured for training the RDM. \\nRegarding the concern about biased training, we had the same question at the initial data preprocessing stage. Therefore, we investigated the number of time points of all subjects and assumed that the effect of bias would be marginal as most subjects have two time points (70%) or three time points (20%) samples.\"}", "{\"title\": \"Response to Reviewer jkdK (Part 1)\", \"comment\": \"We thank the reviewer for the thoughtful comments. We hope our rebuttal below addresses all your concerns and questions.\\n\\nW1 & Q2) The ordinal regression model and the linear interpolation may not accurately capture realistic disease dynamics. \\n\\nA) We acknowledge that ordinal regression and linear interpolation may not be the best methods for fully capturing real-world disease characteristics. However, given the irreversible characteristics of degenerative diseases, we believe the ordinal regression model is one of the most reasonable approaches for handling the phased and categorized nature of disease severity, which is a shared feature across the entire population. 
\\n\\nAlso, although abrupt changes may occur for some individuals, such changes can be captured by linear interpolation if (1) they are observed and (2) follow the order of disease severity, as the linear interpolation is performed between every pair of the observed adjacent time points. For example, if a subject with three time points has an abrupt change at the second time point, the linear interpolation captures monotonic differences between (1) the first and second time point samples and (2) the second and third time point samples. Therefore, the linear interpolation considers the abrupt and sequential changes within individuals, thereby allowing the model to learn such changes effectively. \\n\\nThere are some corner cases where the linear interpolation may not be suitable. For example, if abrupt changes are unobserved, identifying such hidden outliers and pinpointing their exact occurrence are challenging. Moreover, if labels are abruptly reversed over time, the interpolated label $y^d_t$ does not represent a feasible disease severity, which is highly likely to harm training stability. However, given that we fully utilized every observed sample and confirmed that all labels are monotonically deteriorating, we believe that linear interpolation is a practical approach in our experimental setting.\\n\\nW2) Lack of comparison with longitudinal studies used in the medical domain.\\n\\nA) We thank the reviewer for pointing out this limitation. For baseline methods, we made an effort to include as many conditional generative models as possible whose official codes are publicly available, ensuring diversity across different generative approaches (e.g., GANs, VAEs, diffusion models, normalizing flow, etc). 
If the reviewer has recommendations for additional baseline studies for conditional tabular data generation in the medical domain, we would be happy to review their code and add their results to our comparisons.\\n\\nW3 & Q1) How were subjects split?\\n\\nA) We performed a subject-level split, keeping all observations from each subject together. Each subject has a sequence of samples {$x_t$}$^T_{t=1}$. After preprocessing the ADNI dataset, data (i.e., sequences) from 178, 687, 678, and 166 subjects were obtained for CT, Amyloid, FDG, and Tau, respectively. For the OASIS dataset, 32 subject data were obtained after preprocessing. The stratified training/test data splits were performed with an 8:2 ratio on these preprocessed data based on the baseline time point labels.\\n\\nW4) Concern about reproducibility.\\n\\nA) We understand that implementing ConDOR from scratch may not be easy. Therefore, all codes of ConDOR and baseline methods along with their pre-trained models will be released online once the paper is accepted.\\n\\nQ1) How well does TDM generate follow-up scans? How DDPM was used for follow-up data generation?\\n\\nA) In Figures 2 and 3, we presented qualitative results demonstrating the effectiveness of TDM in generating follow-up samples based on the baseline scans. Specifically, the results at $t=2$ and $t=3$ of the second rows in Figures 2 and 3 visualize the generated follow-up samples by TDM, showing general consistency with the ground truth sequences in the first rows. These visualizations of realistic follow-up samples highlight the effectiveness of TDM in capturing temporal dynamics while accounting for the initial characteristics of the baseline sample.\\n\\nSince some baseline generative methods, such as DDPM, are not inherently designed for longitudinal data generation, they handle temporal dynamics differently during the sampling process. 
Specifically, while our model takes a sequence of conditions (i.e., ages {$a_t$}$^T_{t=1}$ and labels {$y_t$}$^T_{t=1}$) to generate a sequence of samples, baseline methods like DDPM process each time point independently. For example, DDPM takes the same cross-sectional conditions (i.e., age $a_t$ and label $y_t$) repeatedly for $T$ iterations. These $T$ independently generated data points are then aggregated to form a sequence, and the evaluation is performed based on these resulting sequences.\"}", "{\"title\": \"Further response\", \"comment\": \"I appreciate the responses from the authors and would like to increase the score.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and constructive feedback. We have addressed all concerns raised by the reviewers and kindly request that the reviewers consider our rebuttals.\\n\\nAlso, we appreciate the recognition of our contributions, practical value, and experimental rigor. Notably, the reviewers highlighted:\\n\\n$\\\\textbf{(1) Technical Novelty}$:\\n\\n$\\\\bullet$ The design of the proposed model's structure design is innovative, e.g., the combination of two diffusion models and the use of both cohort and subject-level interpolation for training. ($\\\\textbf{reviewer xa3Q}$)\\n\\n$\\\\bullet$ This new generative model addresses challenges associated with sparse, irregular, and widely spaced intervals in medical data. ($\\\\textbf{reviewer jkdK}$)\\n\\n$\\\\bullet$ It introduces a novel integration of ordinal regression with diffusion models. ($\\\\textbf{reviewer jkdK}$)\\n\\n$\\\\bullet$ This paper introduces a novel conditional generative model for synthesizing longitudinal sequences to study neurodegenerative diseases such as Alzheimer\\u2019s disease. 
($\\\\textbf{reviewer rUXZ}$)\\n\\n$\\\\textbf{(2) Model Strengths and Capabilities}$:\\n\\n$\\\\bullet$ The proposed model captures both spatial and temporal features through a combination of a Regional Diffusion Model and a Temporal Diffusion Model. ($\\\\textbf{reviewer jkdK}$)\\n\\n$\\\\bullet$ The proposed model combines the cohort-level trend and subject-level trend for longitudinal data generation. ($\\\\textbf{reviewer rUXZ}$)\\n\\n$\\\\bullet$ The model strikes a balance between cohort-level and individual-level fitting, capturing generalized population trends while accommodating individual variability. ($\\\\textbf{reviewer jkdK}$)\\n\\n$\\\\bullet$ The model is extended to a multi-domain setting, enhancing its generalizability and applicability to different data sources. ($\\\\textbf{reviewer jkdK}$)\\n\\n$\\\\textbf{(3) Extensive Experiments}$: \\n\\n$\\\\bullet$ Extensive validation on four Alzheimer's Disease biomarkers demonstrates the model's superiority over nine baseline approaches. ($\\\\textbf{reviewer rUXZ}$)\\n\\n$\\\\bullet$ This paper provides clear comparisons to baseline models across multiple metrics and good visualizations. ($\\\\textbf{reviewer xa3Q}$)\\n\\n$\\\\bullet$ The experiments are comprehensive, with comparisons to nine baseline models, including GANs, VAEs, and other diffusion-based models, evaluated across three metrics. Implementation time is also compared. ($\\\\textbf{reviewer jkdK}$)\"}", "{\"title\": \"Code Release\", \"comment\": \"We share the code of ConDOR and all baseline models we used in our study, along with the pretrained models for all experimental settings, in this repository: https://github.com/Hannah37/ConDOR-ICLR25/tree/main\"}", "{\"summary\": \"This paper proposes a conditional generative model for synthesizing longitudinal sequences.\\nIt first uses ordinal regression and kernel density estimation to model the conditional PDF and then interpolate gaps between consecutive observations. 
\\nTwo diffusion models are then trained to model baseline data and changes in follow-up samples. \\nThese diffusion models generate disease progression data by sequentially sampling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The design of the proposed model's structure design is innovative, partsuch as the combination of two diffusion models and the use of both cohort and subject-level interpolation for training.\\n\\nThis paper provides clear comparisons to baseline models across multiple metrics and good visualizations.\", \"weaknesses\": \"Some abnormal results are not well discussed or explained.\\nFor example, in Table 1, the variance in DDPM's performance is very high compared to other methods. \\nSince DDPM is one of only two diffusion-based comparison methods, it would be helpful to provide an explanation for this abnormal performance.\\n\\n\\nThe authors do not mention how to deploy the model. For example, how to use the trained model to generate new data. \\nIf my understanding of the generative process is correct, we only need to use these two diffusion models with random noise as the input when generating new data.\\nThere is no option to allow the two diffusion models to generate samples conditioned on specific ages and disease severity, \\ne.g., we can't use age as an input for these diffusion models when generating, \\nalthough the authors claim that this model can generate data conditioned on these factors.\\n\\nSome important technical details are also missing. Please refer to the questions below\", \"questions\": \"RDM is designed to geneate the baseline sample $x_1$. However, during training, the authors use all the samples $x_t$ (t = 1, .., t) as independent cross-sectional data.\\nThe justification for using such setting is not presented in the paper. \\nFor example, why not use only $x_1$ to train the RDM? 
\\nWill this setting cause the RDM to be biased, as some longitudinal samples have longer records or are recorded more frequently?\\n\\nAccording to Table 3, the choice of hyperparameter $\\\\lambda$ significantly impacts the performance of the proposed method. \\nHowever, how to select $\\\\lambda$ is unclear; for example, which dataset and what metric do the authors use to choose $\\\\lambda$?\\nThe authors used 80% of the whole data for training and the remaining 20% for testing for all experiments,\\nand it seems there is no validation dataset to optimize $\\\\lambda$.\\nTherefore, the results shown in Table 3 are less convincing to me since they are probably derived from either the training or test set, \\nand the results in Tables 1 and 2 are also less convincing since we may not be able to get the best $\\\\lambda$ in practice.\\n\\n\\nThe training strategies for the generative model are somewhat unclear to me. For example, do the authors train RDM and TDM separately or jointly?\\n$D$ seems to be another important hyperparameter, but how the authors chose $D$ is also unclear.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer jkdK (Part 2)\", \"comment\": \"Q2) Does linear interpolation preserve the properties of diffusion models?\\n\\nA) Given that the linear interpolation between two adjacent time points {$x_{t-1}, x_t$} yields an interim sequence of pseudo-samples {$x_t^1, x_t^2, \\u2026, x_t^D$} with small incremental changes $\\\\Delta x^d_t$, estimating these tiny differences aligns with the principles of a diffusion model, which inherently estimates small noises at each step of the diffusion process. 
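The interpolation described in this answer can be sketched numerically (hypothetical code; a scalar stands in for an image):

```python
import numpy as np

# Hypothetical sketch: linear interpolation between adjacent observations
# x_{t-1} and x_t yields interim pseudo-samples {x_t^1, ..., x_t^D};
# the per-step increment Delta x_t^d shrinks as 1/D.
def interim_pseudo_samples(x_prev, x_next, D):
    frac = np.arange(1, D + 1) / D
    return x_prev + frac * (x_next - x_prev)

steps = interim_pseudo_samples(0.0, 1.0, D=4)
increment = steps[1] - steps[0]   # equals (x_next - x_prev) / D
```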
These interpolated pseudo-samples serve as intermediate noisy samples for stepwise diffusion without altering the fundamental structure of the diffusion process itself.\\nNotably, in conventional conditional diffusion models, conditioning variables are typically static and not directly paired with the noisy samples generated during the forward diffusion process. In contrast, our method introduces a novel conditioning mechanism where the interim pseudo-samples are explicitly paired with linearly interpolated time-dependent conditions (e.g., age $a^d_t$ and diagnostic label $y^d_t$). This pairing allows the model to explicitly learn temporal dynamics while maintaining the progressive property of conditions, ensuring consistency with the overall design of the diffusion framework. Overall, the linear interpolations on the samples and conditions preserve the key properties of diffusion models while introducing an innovative mechanism to handle longitudinal neurodegenerative data generation.\\n\\nQ3) Adding future works would be beneficial.\\n\\nA) Thank you for suggesting this valuable addition. We added the \\u2018Limitation and Future Work\\u2019 section in Appendix C, so please refer to it for a detailed discussion of potential improvements and future directions. We hope this addition addresses the reviewer\\u2019s concerns and contributes to future advancements in medical data analysis.\"}", "{\"summary\": \"This paper introduces a novel conditional generative model for synthesizing longitudinal sequences to study neurodegenerative diseases such as Alzheimer\\u2019s disease. The method uses ordinal regression and a diffusion model to generate realistic disease progression imaging data.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Combining the cohort-level trend and subject-level trend for longitudinal data generation.\\n2. 
Extensive validation on four Alzheimer's Disease biomarkers demonstrates the model's superiority over nine baseline approaches.\", \"weaknesses\": \"1. The paper lacks a comprehensive theoretical justification for the proposed method. While the method is innovative, a deeper theoretical comparison with existing models could strengthen the argument for its necessity and effectiveness.\\n2. The description of the methodology, particularly the integration of cohort-level and subject-level samples, is somewhat convoluted. The paper could benefit from clearer explanations and more detailed algorithmic steps to enhance reproducibility.\\n3. The paper does not provide a thorough statistical analysis to support these claims. The lack of confidence intervals or significance testing weakens the robustness of the reported findings.\\n4. The discussion section is relatively weak in terms of interpreting the results and their implications. The paper does not adequately address the potential limitations of the proposed method or suggest directions for future research, which are crucial for a comprehensive understanding of the study\\u2019s impact.\", \"questions\": \"1. How does the proposed method theoretically ensure the accurate representation of disease progression, especially considering the complex dynamics and irregular intervals in longitudinal data?\\n2. How does the proposed method theoretically improve upon existing generative models for longitudinal data? Are there any theoretical limitations or assumptions that need further clarification?\\n3. The paper introduces a dual-sampling approach combining cohort-level and subject-level samples. How does this method compare to other state-of-the-art techniques in terms of capturing individual-specific features and general trends? Are there any potential biases introduced by this approach?\\n4. The experiments are conducted on four AD biomarkers from MRI and PET images. 
How representative are these biomarkers and datasets of the broader neurodegenerative disease population? Are there any limitations in the experimental design that could affect the generalizability of the results?\\n5. The paper claims superiority over nine baseline approaches. How robust are these results across different metrics and datasets? Are there any specific scenarios or conditions under which the proposed method might underperform or fail?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9U8IwSewJy
Mixture-of-Queries Transformer: Camouflaged Instance Segmentation via Queries Cooperation and Frequency Enhancement
[ "Weiwei Feng", "Nanqing Xu", "Tengfei LIU", "Weiqiang Wang" ]
Due to the high similarity between camouflaged instances and the surroundings and the widespread camouflage-like scenarios, the recently proposed camouflaged instance segmentation (CIS) is a challenging and relevant task. Previous approaches achieve some progress on CIS, while many overlook camouflaged objects’ color and contour nature and then decide on each candidate instinctively. In this paper, we contribute a Mixture-of-Queries Transformer (MoQT) in an end-to-end manner for CIS, which is based on two key designs (a Frequency Enhancement Feature Extractor and a Mixture-of-Queries Decoder). First, the Frequency Enhancement Feature Extractor is responsible for capturing the camouflaged clues in the frequency domain. To expose camouflaged instances, the extractor enhances the effectiveness of contours, eliminates the interfering color, and obtains suitable features simultaneously. Second, a Mixture-of-Queries Decoder utilizes multiple experts of queries (several queries comprise an expert) for spotting camouflaged characteristics with cooperation. These experts collaborate to generate outputs, refined hierarchically to a fine-grained level for more accurate instance masks. Coupling these two components enables MoQT to use multiple experts to integrate effective clues of camouflaged objects in both spatial and frequency domains. Extensive experimental results demonstrate that our MoQT outperforms 18 state-of-the-art CIS approaches by 2.69% on COD10K and 1.93% on NC4K in average precision.
[ "image segmentation", "transformer" ]
https://openreview.net/pdf?id=9U8IwSewJy
https://openreview.net/forum?id=9U8IwSewJy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zV67bAJkSG", "tY4cpyBHB1", "pPmK8oem0W", "kTWfcaHege", "jwgIbEN3Xz", "h9JwdVYCLc", "TL7NMrQJ99", "S5XdOPodl0", "C29gI53muS", "AfijSZUXuj", "AXGXr77tfL", "8g9UpQkfln", "7T4wQbyKly", "5JhvevHBqk", "174fkH8iVe" ], "note_type": [ "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732949750157, 1732720828706, 1732724653605, 1737028071066, 1730430563925, 1732777726958, 1733191048335, 1733190922743, 1730199553710, 1730681151504, 1730036213750, 1732724859431, 1732720329955, 1733093349836, 1733104866538 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3331/Area_Chair_Se4N" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Reviewer_ApvJ" ], [ "ICLR.cc/2025/Conference/Submission3331/Reviewer_ApvJ" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Reviewer_LLn2" ], [ "ICLR.cc/2025/Conference/Submission3331/Reviewer_rYM3" ], [ "ICLR.cc/2025/Conference/Submission3331/Reviewer_fKVN" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ], [ "ICLR.cc/2025/Conference/Submission3331/Reviewer_rYM3" ], [ "ICLR.cc/2025/Conference/Submission3331/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewers,\\n\\nThank you again for your efforts in reviewing this submission. It has been some time since the authors provided their feedback. We kindly encourage you to review their responses, verify whether they address your concerns, and submit your final ratings. 
If you have additional comments, please initiate a discussion promptly. Your timely input is essential for progressing the review process.\\n\\nBest regards,\\n\\nAC\"}", "{\"title\": \"response\", \"comment\": \"**Answer for Question1 and Weakness1**: Thanks for your comments, and the answer is no. First, we present **18 baselines in the paper**, including some typical segmentation methods, like Mask2Former and MaskFormer. Second, for a fair comparison, **all the compared methods only use masks as supervision, and the backbone is pre-trained on ImageNet 1K, which are the same as previous works**. But your so-called standard segmentation models (e.g., Mask DINO, SAM) are not like this; even Mask DINO is not a standard segmentation model. **Mask DINO is a unified framework for detection and segmentation**, which adopts the same architecture design for detection as in DINO with minimal modifications (adopting a key idea from Mask2Former to construct a pixel embedding map). Further, Mask DINO is trained under the **supervision of boxes and masks, which is different from the 18 baselines**. Mask DINO is **not fair** for comparison in the CIS task. And we have presented the comparison of Mask2Former in the paper, which is one origin of Mask DINO. Besides, **SAM is pre-trained on large-scale datasets**, so it is also **unfair to show the comparison results with other baselines**. Therefore, we do not consider providing **extra unfair comparison results** with your so-called standard segmentation models.\\n\\n**Answer for Question2**: We prefer the Fourier transform to the Wavelet transform based on: **(1)** In general, **Fourier transform is more suitable for stationary signals and Wavelet transform for non-stationary signals**. With a given input, we consider that the distribution of the boundary of the camouflaged instances and the surroundings is stable with varying positions, which is adequate for stationary signals. 
**(2)** Fourier transform is **more convenient than Wavelet transform**. Fourier transform can transform natural images into the frequency domain with only a few parameters and excellent performance. However, Wavelet transform needs much more hand-crafted work (e.g., the selection of basic wavelet functions). **(3)** Fourier transform holds **a more tolerant attitude towards the structure of feature extractors**. On the one hand, Fourier transform performs consistently on frequency, and each frequency band\\u2019s contribution is equal in its output. Traditional feature extractors are convenient for processing such stationary signals. On the other hand, Wavelet transform has adaptive policies on high-frequency and low-frequency components, leading to unequal contributions of various frequency bands in its output. \\n\\n**Answer for Question3 and Weakness3**: As suggested, we provide visualizations of various experts at https://anonymous.4open.science/r/iclr2025_imgs-744F/moe.jpg. It can be found that with MoQ, the predicted masks are more accurate, and the various experts can combine their outputs for better masks.\\n\\n**Answer for Question4**: Thanks for your question. (1) Our MoQT can also perform segmentation in traditional instance segmentation tasks. However, its mechanisms pay more attention to CIS, and it may not achieve state-of-the-art performance there. Most previous works on CIS share the same characteristic since their improvements are task-specific. (2) Our MoQT is more likely to hold the SOTA performance in camouflaged-like segmentation tasks. Camouflaged-like segmentation means segmenting the target instance where the instance and surroundings are hard to distinguish. Since carefully designed mechanisms like FEFE and the MoQ decoder are not specific only to the classical camouflaged scenarios, our MoQT probably performs well on these camouflaged-like instance segmentation tasks.\\n\\n**Answer for Weakness2**: Thanks for your question. 
We illustrate the originality of our FEFE in the following aspects: **(1) The purposes are different**. As mentioned in your question, these works focus on COD, and our MoQT pays attention to CIS. **(2) The processes are different**. On the one hand, our FEFE applies Fourier transform to the whole input and treats the amplitude and phase components separately. Further, the contour enhancement on the phase component and the color removal on the amplitude component are parallel. On the other hand, the previous works apply discrete cosine transform (DCT) to the pre-processed 8 \\u00d7 8 patches and split the frequency clues into several frequency bands. Besides, they perform band-wise and spatial-wise enhancement on these clues successively. **(3) The insights are different**. The previous works state that frequency clues are needed in COD and try to exploit the frequency clues for better performance in COD, while they reveal few explanations on why frequency clues can work. Our FEFE further points out that the phase and amplitude components are responsible for high-level semantics (e.g., contour) and low-level semantics (e.g., color) in the camouflaged samples, providing valuable guidance to CIS. In summary, the originality of our FEFE is different from that of the previous works.\"}", "{\"title\": \"response\", \"comment\": \"**Answer for Question1**: We further discuss the difference between our method and other CIS methods; the comparison details are presented in https://anonymous.4open.science/r/iclr2025_imgs-744F/table.jpg. Different from the 4 existing methods (OSFormer, DCNet, UQFormer and CamoFourier), our proposed MoQT adopts color removal and contour enhancement in FEFE for mining camouflaged clues. Besides, the MoQ decoder in our method is used to imitate the human habit of segmenting camouflaged instances, where in each layer we initialize new experts for cooperation and query refining with the MoE mechanism. 
In summary, our method reveals the relationship between frequency and camouflage, and it is the first attempt to use the MoE mechanism in a query-based transformer for segmentation.\\n\\n**Answer for Question2 and Weakness1**: Thanks for your question. Although our MoQT and CamoFourier both apply Fourier transform in the CIS task, they hold different insights on the frequency clues. CamoFourier is based on the classical conditional GAN (c-GAN) structure and utilizes Fourier transform to synthesize transformed images for further object detection or instance segmentation. However, our MoQT uses Fourier transform to enhance contours and remove color information. Their differences are listed as follows: **(1) Different Purposes**: **CamoFourier uses Fourier transform for data augmentation** (also mentioned in its title \\\"A Learnable Fourier-based Augmentation for ...\\\"), but our **MoQT applies Fourier transform for color removal and contour enhancement**. **(2) Different Network Structures**: CamoFourier conducts a c-GAN structure with Fourier transform for data augmentation, but our MoQT simply utilizes a feature extractor (e.g., ResNet-50) with Fourier transform for feature enhancement. **(3) Different Insights**: CamoFourier aims to manipulate the amplitude information to enhance the visibility of camouflaged objects in the image, but our MoQT further considers that high-level semantics like contour (phase information) tend to preserve more camouflaged characteristics, while low-level statistics (amplitude information) like color contain more information from the surroundings and should be removed.\\n\\n**Answer for Question3**: Thanks for your suggestion. For a fair comparison, we compare our method with GLNet as follows: with a Swin-Tiny backbone, GLNet reaches 40.8@AP and 44.0@AP, and our method achieves 51.4@AP and 58.1@AP, which is much better than GLNet's performance. We will add the comparison results in the revised version. 
By the way, we find that in the original paper of GLNet, they compare the large backbone P2V (their method) with ResNet-50 (baselines), which we think is unfair.\\n\\n**Answer for Question4**: Thanks for your comments. We have provided the results in Figure 5 in the submitted manuscript.\\n\\n**Answer for Weakness2**: The Mixture-of-Queries Decoder is proposed to imitate the human habit of segmenting camouflaged instances, and we initialize some experts for cooperation and query refining hierarchically at each layer. In previous transformer-based methods, the decoder updates queries for the final prediction. There is only one set of queries initialized before feeding into the decoder layers, so they can only optimize the implicit relationship between the decoder and queries in an end-to-end manner. However, we initialize some experts at each layer to explicitly learn how to update queries. From the parameter-learning perspective, our decoder can be regarded as a vanilla query-based decoder with some extra explicit parameters at each layer, which play the same role as the parameters of the original queries. Therefore, it does not hurt cross-attention but can provide more accurate refining. To our knowledge, the MoE mechanism of queries in a transformer decoder has not been studied before, so we think this mechanism is novel.\\n\\n**Answer for Weakness3**: We apologize for the misunderstanding of our method caused by our unclear presentation. In line 306, ''MoQ Layer'' should be ''a layer of the MoQ Decoder''. First, our MoQ decoder layer includes a vanilla query-based decoder layer and a MoQ Layer, and Eq. (4) presents the process of a vanilla query-based decoder layer and a MoQ Layer. 
\\n- The link of the detailed illustration of the MoQ Decoder: https://anonymous.4open.science/r/iclr2025_imgs-744F/moe_loc_eq.png\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper aims to explore the influence of contour and color for discovering camouflaged instances and utilizes MoE technology to localize the multiple instances of camouflaged objects. First, the Frequency Enhancement Feature Extractor is proposed to capture the camouflaged clues in the frequency domain. To expose camouflaged instances, the extractor enhances the effectiveness of contour, eliminates the interfering color, and obtains suitable features simultaneously. Second, a Mixture-of-Queries Decoder utilizes multiple experts of queries for spotting camouflaged characteristics with cooperation. The proposed MoQT achieves SOTA performance and outperforms 18 camouflaged instance segmentation methods on the COD10K and NC4K datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper attempts a new perspective to explore the concealed attribute of camouflaged instances: the visual cues of camouflaged objects are concealed, but the other-domain (like frequency) cues of camouflaged objects are not completely hidden, which allows deep parsing of camouflage detection and segmentation. This idea is interesting and worth emulating.\\n2. The ablation experiments are sufficient.\\n3. The presentation of the paper is easy to understand, including some visual comparisons, etc.\", \"weaknesses\": \"1. The proposed MoQT employs the Fourier transform to obtain color and contour features. But the introduction of the Fourier transform is similar to the previous work CamoFourier:\\nUnveiling Camouflage: A Learnable Fourier-based Augmentation for Camouflaged Object Detection and Instance Segmentation, arXiv, 2023\\n2. 
Compared with the vanilla query-based decoder, the MoQ decoder introduces a Mixture-of-Queries (MoQ) layer with M initialized experts. Then, the M+1 outputs of the MoQ layer are aggregated via an adaptive weight. The novelty of MoQ is limited. The improvement of the query mechanism is a novel approach, but the proposed MoQ is only a token aggregation method. Besides, each layer of the MoQ decoder introduces initialized experts, which should hurt cross-attention enhanced query tokens. That sounds unreasonable.\\n3. The structure of Fig. 4 is not consistent with Eq. (4). The aggregated query tokens of multiple experts are input to the vanilla query-based decoder in Eq. (4). But Fig. 4 does not present this process. Actually, the number of layers in the MoQ decoder is twice that of the vanilla query-based decoder.\", \"questions\": \"1. I suggest that the author clearly compare this method with existing query-based transformer methods, and explicitly state the advantages and innovations of the method proposed in this paper.\\nOSFormer: One-stage camouflaged instance segmentation with transformers. In European Conference on Computer Vision, 2022\\nCamouflaged instance segmentation via explicit de-camouflaging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023\\nA unified query-based paradigm for camouflaged instance segmentation, ACM International Conference on Multimedia, 2023\\n2. I suggest that the authors provide a more in-depth comparison with CamoFourier, highlighting any specific differences in how they utilize Fourier transforms and discussing how their approach advances beyond CamoFourier's techniques.\\n3. The authors should add some CIS methods from 2024 for a comprehensive evaluation, such as GLNet. The authors are suggested to directly compare their results with GLNet\\u2019s.\\nCamouflaged Instance Segmentation From Global Capture to Local Refinement, IEEE Signal Processing Letters, 2024.\\n4. 
The author should clearly articulate the number of query tokens used on each dataset and verify the impact of varied query token counts on different datasets.\\n\\nIn summary, the author should clearly elucidate the contributions of FEFE and MoQ to assess whether the paper meets the quality standards for acceptance at ICLR. For FEFE, simply using the Fourier transform is not sufficient. For MoQ, aggregating multiple experts with query tokens and then inputting them into transformer decoder layers is not novel enough.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The authors have explained the differences between the Fourier transform in CamoFourier and MoQT. They also clarified the novelty of the Mixture-of-Queries Decoder, which is proposed to imitate the human habit of segmenting camouflaged instances. Additionally, they have resolved the detailed issues raised. Therefore, I have raised my score.\"}", "{\"title\": \"Response to AC\", \"comment\": \"Dear AC,\\n\\nThe participants in the discussion are not very active. It is really difficult to give a satisfactory response to the reviewers, and even to have a complete conversation. I also feel helpless.\\n\\nEven when there is a response, it consists only of vague words, such as worrying about novelty. We cannot know the specific problems of the paper. We can only guess the focus of the reviewers and give answers, which is not conducive to discussion.\"}", "{\"title\": \"response\", \"comment\": \"We found that you lowered your score from 6 to 5. After reading our rebuttal, can you tell us which part of the response increased your doubts?\\n\\nWe can only guess where your doubts come from, which makes it difficult for us to give you a satisfactory answer. 
We have only a single sentence about innovation concerns to go on.\\n\\nAccording to your statement, you lowered the score based on our rebuttal and other people's opinions? This also confuses us. Does your score depend on other people's opinions? If so, it is difficult for us to make a targeted response to your opinions, and it is not conducive to our in-depth discussion of the problems of the paper.\"}", "{\"summary\": \"The paper proposes the Mixture-of-Queries Transformer (MoQT), a new model for camouflaged instance segmentation (CIS). The main contributions include: 1) Frequency Enhancement Feature Extractor (FEFE): This module leverages frequency-domain transformations to emphasize object contours and minimize color interference, aiding in detecting camouflaged instances by focusing on contour details rather than color; 2) Mixture-of-Queries Decoder (MoQ Decoder): This component employs multiple groups of object queries in a hierarchical framework, enhancing segmentation precision by refining masks at each layer. The model was benchmarked against 18 state-of-the-art CIS models and showed improved performance on the COD10K and NC4K datasets, with gains of 2.69% and 1.93% in average precision, respectively.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a novel approach to camouflaged instance segmentation (CIS) with the Frequency Enhancement Feature Extractor (FEFE) and Mixture-of-Queries Decoder (MoQ Decoder). These components creatively combine frequency-domain analysis and hierarchical query collaboration, offering a unique solution to the challenge of segmenting camouflaged objects.\\n\\n2. The approach is rigorously validated, outperforming 18 state-of-the-art methods on key datasets. The paper's ablation studies and parameter analyses reinforce the model\\u2019s robustness and effectiveness, showcasing thorough and high-quality experimentation.\\n\\n3. 
The paper is well-structured, clearly explaining its methods and their significance. Visual aids, including performance tables and diagrams, enhance understanding, and the rationale for each component is presented logically.\", \"weaknesses\": \"1. Insufficient Baseline Comparisons: While the paper includes comparisons with several CIS methods, it does not fully explore benchmarks with generic instance segmentation methods that could also apply to camouflaged segmentation. Including results for generic transformers or non-CIS-specific models with adaptations for camouflage (e.g., baseline Mask DINO [1] with FEFE added) would clarify the advantage of MoQT over generalized solutions.\\n\\n2. The originality of the Frequency Enhancement Feature Extractor (FEFE): The paper asserts that frequency domain-based contour enhancement is effective for CIS, but frequency domain analysis for camouflaged object detection has already been proposed by previous works like [2][3]. Although these works are dedicated to COD, the methodology of locating the camouflaged objects is similar to the CIS task.\\n\\n3. Interpretability of the MoQ Decoder: Although the multi-expert query mechanism is innovative, the paper lacks insight into how each \\u201cexpert\\u201d in the MoQ Decoder contributes uniquely to segmentation refinement. Visual or quantitative analysis of the individual contributions of each expert group in the MoQ Decoder would help to illustrate why this design is optimal and inform future work on multi-query designs.\\n\\n[1] Mask DINO: Towards A Unified Transformer-based Framework for Object Detection and Segmentation, CVPR 2023\\n\\n[2] Detecting Camouflaged Object in Frequency Domain, CVPR 2022\\n\\n[3] Detecting Camouflaged Object in Frequency Domain, TOMM 2023\", \"questions\": \"1. Did the authors consider adapting standard segmentation models (e.g., Mask DINO, SAM) for CIS by incorporating frequency-domain enhancements like FEFE? If so, how did MoQT compare?\\n\\n2. 
Why did the authors choose Fourier transforms over other frequency-based methods, such as wavelet transforms, for capturing contour information?\\n\\n3. Can the authors provide visualizations or a quantitative analysis of the individual contributions of each expert group in the MoQ Decoder?\\n\\n4. Did the authors consider MoQT\\u2019s applicability to other segmentation tasks where objects are not necessarily \\u201ccamouflaged\\u201d in the traditional sense?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors tackle the problem of camouflaged instance segmentation (CIS). To this end, the authors propose the Mixture-of-Queries Transformer (MoQT). The experiments on COD10K and NC4K show that MoQT outperforms other CIS baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-organized and easy to follow.\", \"The proposed work outperforms 18 CIS baselines on COD10K and NC4K.\", \"The authors did perform the ablation study to show the effectiveness of each component in MoQT.\"], \"weaknesses\": [\"There is a concern about the novelty. The authors explore the frequency domain for feature extraction, which is not new. The idea of using experts (several queries comprise an expert) is not new either.\", \"The number of decoder layers, L, is questionable. There is a huge gap between 4 and 12. Why did the authors choose 6?\", \"The visualization is not clear. How about the failure cases? 
How about the case of no camouflaged instance?\"], \"questions\": [\"I have a question about the novelty.\", \"There is a question about the parameters such as the number of decoder layers.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a Mixture-of-Queries Transformer (MoQT) designed for camouflaged instance segmentation (CIS). It incorporates two main components: a Frequency Enhancement Feature Extractor (FEFE) and a Mixture-of-Queries Decoder. The FEFE captures camouflaged clues in the frequency domain by enhancing contours, eliminating interference colors, and extracting suitable features. The Mixture-of-Queries Decoder uses multiple experts of queries to spot camouflaged characteristics cooperatively, refining outputs hierarchically for accurate instance masks. Experimental results show that MoQT outperforms 18 state-of-the-art CIS approaches on COD10K and NC4K in average precision.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The approach shows significant improvement over existing methods, as demonstrated by extensive experimental results on COD10K and NC4K datasets, highlighting its practical applicability and potential impact in the field.\", \"weaknesses\": \"1. Lack of Innovation: The proposed frequency domain feature extraction method closely resembles those in \\\"Unveiling Camouflage: A Learnable Fourier-based Augmentation for Camouflaged Object Detection and Instance Segmentation\\\" and \\\"Camouflaged Instance Segmentation via Explicit De-camouflaging.\\\" The Mixture-of-Queries Mechanism is also similar to the Multi-scale Unified Query Learning mentioned in \\\"A Unified Query-based Paradigm for Camouflaged Instance Segmentation,\\\" without adequately explaining the main differences.\\n\\n2. 
Writing and Expression Errors: There are several grammatical and expression errors in the manuscript. For example, lines 156-157 contain mistakes where \\\"combines\\\" should be \\\"combine\\\" and \\\"camouflaged objection detection\\\" should be \\\"camouflaged object detection.\\\"\", \"questions\": \"1. On line 296 of the manuscript, when you mention initializing M experts E, are you referring to Positional Embeddings?\\n\\n2. In line 320, it is stated that \\\"our MoQ Decoder does not contain just one group of queries for capturing various instances but multiple groups of queries in each MoQ Layer.\\\" How are these groups designed and divided? The number of groups is not specified.\\n\\n3. It is suggested to provide the code to help readers better understand the novelty of the proposed method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"response\", \"comment\": \"**Answer for Question1**: Thanks for your question. In line 296, our paper introduces the initialization of the experts $E_i (i = 1, \\\\cdots, M) $. These experts are groups of positional embeddings (i.e., queries in our manuscript) in the classical transformer. As described in DETR, learned positional embeddings can also be named object queries. In summary, the experts are formed by the learnable positional embeddings (queries), which provide diverse information for camouflaged instance segmentation.\\n\\n**Answer for Question2**: Thanks for your question. We have specified the number of groups in Section A.1 (Implementation Details) of our Supplementary Material. According to our supplementary material, the Mixture-of-Queries expert number (group number) is 2 in each decoder layer.\\n\\n**Answer for Question3**: Thanks for your suggestion. 
The novelty of our method is illustrated in our paper and reply, and we propose to release our source code to the public after publication.\\n\\n**Answer for Weakness 1**: To clarify the novelty of our method, we further discuss the difference between our method and other CIS methods; the comparison details are presented in https://anonymous.4open.science/r/iclr2025_imgs-744F/table.jpg. Different from the existing 4 methods (OSFormer, DCNet, UQFormer and CamoFourier), our proposed MoQT adopts color removal and contour enhancement in FEFE for mining camouflaged clues. Besides, the MoQ decoder in our method is used to imitate the human habit of segmenting camouflaged instances, where in each layer we initialize new experts for cooperation and query refining with the MoE mechanism. In summary, our method reveals the relationship between frequency and camouflage, and it is the first attempt to use the MoE mechanism in a query-based transformer for segmentation.\\n\\n**Answer for Weakness 2**: Thanks for your correction. We will carefully check our manuscript to avoid such mistakes in the final edition.\"}", "{\"title\": \"response\", \"comment\": \"**Question1 (Weakness1)**: There is a concern about the novelty. The authors explore the frequency domain for feature extraction, which is not new. The idea of using experts (several queries comprise an expert) is not new either.\\n\\n**A1**: For the novelty, we clarify the following two aspects. **First**, we analyze in the introduction that our idea to solve CIS is based on **the characteristics of camouflage** (The prior of camouflage principles: zoologists discovered that animals can camouflage themselves by matching their colors or patterns with the background, but it is believed that there are some clues in contours for recognizing the camouflaged instances.) 
and **how humans distinguish camouflaged objects** (The human habit of segmenting camouflaged instances: the human visual system instinctively sweeps across the scene and gradually searches for valuable clues to find out camouflaged instances. And for some heavily camouflaged scenes it is believed that combining the masks labeled by multiple experts is very helpful.). The idea of solving CIS by **color removal and contour enhancement with the cooperation of multiple queries has not been proposed before**, and it makes sense according to the above analysis. **Second**, we propose a Mixture-of-Queries Transformer, which includes a Frequency Enhancement Feature Extractor (FEFE) and a Mixture-of-Queries Decoder (MoQ Decoder), where FEFE is used for color removal and contour enhancement. The MoQ Decoder aims to mix multiple groups of queries hierarchically to provide more accurate predictions. Although some previous methods introduce Fourier transform in camouflage scenes, **they do not reveal the relationship between the frequency domain and the camouflage scene**. Our proposed FEFE can **explicitly reveal the help and significance of frequency domain information in color removal and contour enhancement to expose the camouflaged object**, which is consistent with our analysis of the prior of camouflage principles in the introduction and not discussed before. Besides, the Mixture-of-Queries Decoder is proposed to imitate the human habit of segmenting camouflaged instances, and we initialize new experts for cooperation and query refining at each layer. In previous transformer-based methods, the decoder updates queries for the final prediction. There is only one set of queries initialized before feeding into the decoder layers, so they can only optimize the implicit relationship between the decoder and queries in an end-to-end manner. However, we initialize some experts at each layer to learn how to update queries. 
**Not only can each decoder layer optimize the candidates (i.e., the queries), but also the newly initialized experts can explicitly refine them hierarchically**, which is helpful for final predictions. To our knowledge, the MoE mechanism of queries in a transformer decoder has yet to be studied. Moreover, we would appreciate it if you could find some similar works.\\n\\n**Question2 (Weakness2)**: There is a question about the parameters such as the number of decoder layers.\\n\\n**A2**: We have added extra detailed ablation results on decoder layers, as presented in the table (the metrics of AP are reported). It can be found that the best performance is achieved when the number of layers is 6. When the number of layers increases to 8, 10, and 12, the number of parameters increases, but the performance is not further improved, so we choose 6 decoder layers by default.\\n| Decoder Layers | COD10K-Test | NC4K-Test | Params(M) |\\n| ---- | ---- | ---- | ---- |\\n| 2 | 46.50 | 53.47 | 54.62 |\\n| 4 | 47.02 | 53.61 | 57.93 |\\n| 6 | **47.99** | **54.73** | 61.68 |\\n| 8 | 47.45 | 53.73 | 65.43 |\\n| 10 | 46.89 | 53.35 | 68.36 |\\n| 12 | 47.20 | 53.82 | 71.18 |\\n\\n**Weakness3**: The visualization is not clear. How about the failure cases? How about the case of no camouflaged instance?\\n**A3**: We present some extra visualizations of the failure cases and of scenes with no camouflaged instance. Our method fails to segment the camouflaged instances from the surroundings when the scene is very complex. Because the camouflaged instances hide themselves heavily, our model and even humans find it difficult to distinguish them. In the test set, some scenes can be recognized at a glance and can hardly be called camouflaged. 
Our model performs well on these scenes with no camouflaged instances.\\n- anonymized link of failure cases: https://anonymous.4open.science/r/iclr2025_imgs-744F/fail.jpg\\n\\n- anonymized link of no camouflaged instance: https://anonymous.4open.science/r/iclr2025_imgs-744F/no.jpg\"}", "{\"comment\": \"After carefully considering authors' rebuttal and other reviewers' comments, I remain concerned about the paper's novelty. As a result, I have updated my score downward.\"}", "{\"title\": \"response\", \"comment\": \"According to your replies, I think the other problems have been solved by our rebuttal, but you still have some concerns about the novelty. I can not agree with your comment of 'the authors explore frequency domain for feature extraction which is not new. The idea of using experts (several queries comprise an expert) is not new either'. **Please give us some evidence, we would appreciate it if you could find some similar works**.\\n\\nBesides, We further discuss the difference between our method and other CIS methods, the comparison details are presented in https://anonymous.4open.science/r/iclr2025_imgs-744F/table.jpg. Difference from existing 4 methods (OSFormer, DCNet, UQFormer and CamoFourier), our proposed MoQT adopts color removal and contour enhancement in FEFE for mining camouflaged clues. Besides, the MoQ decoder in our method is used to imitate the human habit of segmenting camouflaged instances, where in each layer we initialize new experts for cooperation and queries refining with MoE mechanism. In summary, our method reveals the relationship of Frequency and camouflage, and it is the first attempt of using MoE mechanism in query-based transformer for segmentation.\\n\\nPlease **let us know where the flaws of our paper are, instead of just saying that it is not novel enough**, which is too cheap and unconvincing.\"}" ] }
9TpgFnRJ1y
Interpretable and Efficient Counterfactual Generation for Real-Time User Interaction
[ "Cesare Barbera", "Andrea Passerini" ]
Among the various forms of post-hoc explanations for black-box models, counterfactuals stand out for their intuitiveness and effectiveness. However, longstanding challenges in counterfactual explanations involve the efficiency of the search process, the likelihood of generated instances, their interpretability, and in some cases, the validity of the explanations themselves. In this work we introduce a generative framework designed to address all of these issues. Notably, this is the first framework capable of generating interpretable counterfactual images in real-time, making it suitable for human-in-the-loop classification and decision-making. Our method leverages a disentangled regularized autoencoder to achieve two complementary goals: generating high-quality instances and promoting label disentanglement to provide full control over the decision boundary. This allows the model to sidestep expensive gradient-based optimizations by directly generating counterfactuals based on the adversarial distribution. A user study conducted on a challenging human-machine classification task demonstrates the effectiveness of the approach in improving human performance, highlighting the critical role of counterfactual explanations in achieving this advantage.
[ "Explainable AI", "Generative AI", "Human-Machine interaction" ]
https://openreview.net/pdf?id=9TpgFnRJ1y
https://openreview.net/forum?id=9TpgFnRJ1y
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zDZB22KLhN", "xVD68z6gn8", "vcMefCghAY", "rmqgaXniM2", "rc9LIET5iB", "rB7oVqSE4L", "npYNC9yB5p", "ZVRQ55lTHD", "Wfbjnu53y5", "Wezh6H4ETC", "PDYlee4DCO", "OFmFOHB7HW", "Mm6nery2nl", "IRx6RFCVp3", "G7tW2Mtxvw", "FufH5z9E8z", "FI2gkixYJl", "CAQVFiv85Y", "BNF8UYrRLo", "AbqR740rhS", "9YTezElBJR", "6W6v0Vzw5r", "6AhlbK4JWK", "5JYaofMYvQ", "4pLbs4XnTP" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732704331497, 1732641286700, 1730323171323, 1730122665089, 1731954221247, 1732751638867, 1733150808971, 1732529929001, 1732271946206, 1732271984839, 1732557537822, 1732750703075, 1732529944582, 1730655804729, 1731954114155, 1731953928871, 1732529953152, 1732697456611, 1730194244851, 1732271975981, 1731954339644, 1733226805751, 1732271959981, 1732529960417, 1731954008796 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_3rie" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_Y1ZA" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_egRZ" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_Y1ZA" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_3rie" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_vmA4" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_egRZ" ], [ "ICLR.cc/2025/Conference/Submission11578/Reviewer_3rie" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ], [ "ICLR.cc/2025/Conference/Submission11578/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you addressing my comments. Now, I feel better informed regarding how the paper approaches the notion of concepts and its relationship to approaches based on other generative models like diffusion models. However, the above responses still do not resolve certain issues.\\n\\n**Scalability.** I understand the motivation behind evaluating the method on BloodMNIST. However, simply stating that the approach can scale to larger architectures **and** datasets based on running times for increasing architecture depths is not enough. For example, there are many reasons why VAEs are not (exclusively, they can be combined with latent diffusion models for example) the main method used in today's image generation pipeline, e.g., posterior collapse. In the above, the authors make an implicit assumption that the latent space is able to 'handle everything' and only its size must be controlled. However, there are many reasons to think that, for example, concept identification would not be possible on complex data like ImageNet. Are there any examples in related works showing that specific latent dimensions of VAEs encode easily identifiable concepts on data like ImageNet? 
In general, I am very skeptical regarding the statement that this approach `can easily scale`. While I do not state that every method must work with every level of data complexity, this issue must be clarified here. Either by proof with results on bigger and more complex data, or by explicitly saying that this approach is meant for simpler datasets.\\n\\n**Generalization to independent models.** Once again, I would be very careful with stating that the approach is able to handle models that are not trained together with the pipeline, since no empirical proof is given. The limitation connected to the loss might not be `dramatic`, but it requires verification if the authors want to claim that this method is applicable to arbitrary DNN architectures trained through Gaussian mixture loss. Also, the paper's contributions remain limited if no external models are incorporated, since then the entire pipeline reduces to a solution that does not actually provide counterfactual explanations, but rather counterfactual examples only. This is fine if the goal of the method is concentrated on the human-machine interaction, but not enough to state that it provides a general method for counterfactual explanation generation.\\n\\n**Labeled data requirement.** The authors mention that their `approach is unsupervised with regard to concepts as during training our model does not have supervision with regard to which concepts to encode in the learned representations. Some datasets like CelebA provide additional supervision which can be used to guide the training process but our technique does not leverage this information and we drop this requirement because most real-world datasets do not provide such information.` I think that here the relationship between concepts and labels is once again very vague. What is the additional supervision that can be used to guide the training process in, e.g., CelebA, that the authors do not leverage? 
If I understand correctly, the authors require labeled data for their approach to be trained. Hence, the reliance on additional supervision is there. Note that this has direct influence on the structure of the latent space (even presented pictorially, Figure 1. (b, upper part)) and the 'knowledge' gained by the autoencoder.\"}", "{\"title\": \"Additional response to 3rie\", \"comment\": \"We would like to thank the reviewer for their appreciation and willingness to take our replies under consideration. In the following we provide additional clarifications for the remaining concerns.\\n\\n\\n**concepts without supervision claim** In the context of our paper we consider a latent dimension $z_i \\\\in \\\\mathcal{Z}$ which, when associated with a function $\\\\text{DEC}(\\\\cdot)$, is a concept if $(\\\\text{DEC}(z_{i,1}, z_{\\\\setminus i}), \\\\text{DEC}(z_{i,2}, z_{\\\\setminus i}))$ can be understood by an end-user as an atomic change of a property of the input (in the context of BloodMNIST, an example could be that changing the values of the first latent dimension while keeping the others intact leads to a change in the size of the cell).\\n\\n\\nAs the reviewer correctly notices, label supervision is used by our model. In our setting we clearly distinguish between labels and concepts. Labels represent classes, or, in the case of BloodMNIST, the cell type. Concepts on the other hand are associated with the $z_i \\\\in Z$ of our model, which are the latent representations. Our approach is unsupervised with regard to concepts as during training our model does not have supervision with regard to which concepts to encode in the learned representations. Some datasets like CelebA provide additional supervision which can be used to guide the training process, but our technique does not leverage this information and we drop this requirement because most real-world datasets do not provide such information. 
In addition, approaches that leverage diffusion models may be concept independent in the sense that they do not require concept supervision in order to be trained, but we are not aware of any work that exploits diffusion models to provide concept-based explanations or that associates human-interpretable concepts with the extracted explanations. We additionally rephrased the description (line 253) to make it clearer that while the training process is done without any concept supervision, after training latent representations are associated with concepts in a post-hoc fashion by a human annotator.\\n\\n**Connection to related works** First, when talking about transparency we meant interpretability of the counterfactual for a user (we replaced transparency with interpretability in the paper (line 86)). Our work crucially relies on concepts (annotated post-hoc after training) as interpretable explanations for the counterfactual being provided. We are not aware of any approach with diffusion models that leverages human-understandable concepts to improve interpretability of explanations. Concerning the choice of the dataset, we focused on BloodMNIST not for computational reasons, but because it is a challenging but still feasible task for a non-expert user. CelebA and ImageNet are too simple for users, while CheXpert is too difficult for laypersons. Our approach can easily scale to large architectures and datasets, as our counterfactual search exclusively depends on the size of the latent space. Indeed, Appendix C reports running times for increasing architecture depths, confirming the feasibility of the approach.\\n\\n\\n**Gaussian-mixture-loss-based models** We added in the limitations section of our paper the fact that our approach requires a Gaussian mixture loss (lines 521-526). In our opinion, this limitation is not dramatic as it only affects the training loss, which can nonetheless be applied to arbitrary DNN architectures. 
While external models cannot be used as-is, they could in principle be incorporated by fine-tuning them using the Gaussian mixture loss. We clarified this aspect in the revised version of the manuscript.\\n\\nWe look forward to further discussing the topic or addressing any additional doubts of the reviewer.\"}", "{\"summary\": \"This work proposes a new counterfactual explanation technique for image classification. The technique uses a regularized latent space model and searches for suitable counterfactual candidates in the learned latent space, which should have favorable interpretability characteristics. The technique is evaluated through a human-subject study.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Strengths:\", \"Overall, the technique is well explained and the writeup is easy to follow\", \"I did not discover any major flaws regarding soundness\", \"User evaluation is important, and not often considered in XAI\"], \"weaknesses\": \"Weaknesses:\\n\\n* **Related Work.** Unfortunately, there are many related techniques for counterfactual image generation that are not discussed in this work. In general, the idea of using latent-variable models to generate counterfactuals cannot be considered novel and is well covered in the literature, e.g., by Sauer & Geiger (2021). A recent work (Melistas et al., 2024, Section 2) lists more than 10 approaches to tackle the problem presented using VAEs, GANs, Deep-SCM, Diffusion models, Flows, ... Unfortunately, these related works are not mentioned here. It is not clear why they are insufficient and yet another method to tackle this problem is required.\\n\\n* **Grounding for disentanglement claims.** The authors claim that their method yields interpretable, disentangled representations. 
However, while regularization can help disentanglement in practice, it should be noted that there is no theoretical backing for this claim (unless some rigid assumptions or knowledge of causal models is assumed). For instance, Locatello et al. (2019) prove that disentanglement without additional information is impossible, and Leemann et al. (2023) study the topic for conceptual explanations. It is therefore questionable whether the mentioned trade-off between disentanglement and reconstruction quality really exists. The references given (e.g., BetaVAE and its derivatives) do not reflect the current state of research.\\n\\n* **Evidence for claims.** It is okay to make claims like in Section 6 (l.406-407, \\\"this is the first unsupervised concept based counterfactual generating technique suited for a real time interaction\\\") but this requires evidence to back them. For instance, at this point I would have expected a run-down of runtimes of other CFE techniques for images when using encoders/decoders of the same complexity.\\n\\n* **Evaluation is insufficient and qualitative results are not convincing.** I think a user-study is a good start for evaluation, but it is not sufficient on its own. I think other metrics for image quality such as FID and edit-distance (in input and latent space) should be checked and reported as well, in particular in contrast to other techniques. Unfortunately, the counterfactuals shown in the Figure look blurry and not like realistic scans. Disentanglement claims should be checked using synthetic datasets with known concepts such as 3DShapes (https://github.com/google-deepmind/3d-shapes).\\n\\n* **Ablation studies are missing.** There are no ablation studies that allow verifying the necessity of each component in the framework. 
For instance, I am wondering whether the complex calculation of the mean is necessary or if some point a specific distance behind the decision boundary on the segment from the input embedding to the counterfactual class embedding would be sufficient.\\n\\n* **Accuracy considerations.** The work proposes to use a specific generative image classification model that allows directly generating counterfactuals. However, I think the accuracy of this model will be lower than that of state-of-the-art models. This trade-off is not discussed.\\n----------------\\n\\n**Summary.** Unfortunately, I don\\u2019t think the technique developed is highly innovative and an evaluation against competing techniques is missing. If there is a specific advantage of the technique that I am missing, I suggest that a comparative analysis with the techniques in Melistas et al. (2024) should be added to show this advantage. In its current state, the motivation for why the existing techniques are insufficient for the counterfactual generation problem is not clear at all.\\n\\n--------------------\\n\\n**References**\\n\\nLocatello, Francesco, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Sch\\u00f6lkopf, and Olivier Bachem. \\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" In international conference on machine learning, pp. 4114-4124. PMLR, 2019.\\n\\nTobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 216:1207-1218, 2023.\\n\\nSauer, Axel, and Andreas Geiger. \\\"Counterfactual Generative Networks.\\\" International Conference on Learning Representations, 2021.\\n\\nMelistas, T., Spyrou, N., Gkouti, N., Sanchez, P., Vlontzos, A., Papanastasiou, G., & Tsaftaris, S. A. (2024). Benchmarking Counterfactual Image Generation. 
arXiv preprint arXiv:2403.20287.\", \"minor_points\": [\"There are a couple of issues with the writeup\", \"Please check capitalization of bullet points in lines 122-132.\", \"L.141-152 is hard to follow.\", \"Typo L.176 (caption): \\u201cregularize\\u201d\", \"L. 215 \\u201cDDPM\\u201d is not introduced\", \"L. 344 Concept-based (section title)\", \"L. 346 class-relevant\", \"When you refer to the appendix, please include a link to the exact section or figure (e.g. l. 347)\", \"L. 410 hyper-parameter configuration\", \"Table 1: Please use the same number of digits for each result\", \"I noticed that on page 27 of this submission (Figure 13), it seems to be indicated that the study was conducted at the University of Trento, potentially revealing the affiliation of the authors and thereby violating the double-blind review principle.\"], \"questions\": [\"User study: I have some questions regarding the user study: Was the study IRB approved? Was the study preregistered? The number of 50 participants divided over multiple conditions seems rather low, how was the number chosen? A survey by Rong et al. (2022) shows the average number of participants in XAI user studies with a between-subjects design to be greater than 300.\", \"The study relies on a specific classification model which performs the classification through a regularized latent space. What are the costs of explainability here, i.e., what is the performance difference of this model (91% accuracy is reported) vs. using a state-of-the-art black-box model that is trained on the dataset without any constraints?\", \"Technical Derivation: In Equation (9), how is the formula for the weights determined? If one is interested in the expected value, shouldn't the weight of each segment be the integrated density over the segment? Here it seems that only the density at the respective center is used. Suppose we have a constant density, then the length of the segment would not play a role in the weight? 
Is this intentional?\", \"**Reference**\", \"Rong, Yao, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, and Enkelejda Kasneci. \\\"Towards human-centered explainable AI: user studies for model explanations.\\\", IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a framework for counterfactual explanations based on a generative autoencoder. By refining the process of counterfactual selection, the proposed method effectively generates counterfactuals that fulfill a list of desired properties in real-time. The resultant explanations facilitate human-machine interactions and demonstrate the potential for improving user performance, as evidenced by the experimental results.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-structured and self-contained, offering a clear definition of the objectives for counterfactual construction.\\n2. The definition of counterfactual candidates is sound, significantly narrowing the search space and enabling real-time generation.\\n3. The carefully designed experiment demonstrates the potential benefits of providing complementary information in human-machine interaction.\", \"weaknesses\": \"1. Similar to other generator-based explanation frameworks, the transparency of the explanation process itself is limited due to the black-box nature of the neural-network-implemented generator.\\n2. The flexibility of the proposed method is another concern, as the delivered explanations appear to be model-specific.\\n3. The expected counterfactual violates $\\\\mathcal{P}_2$ stated in Definition 1.\\n4. 
The benefit of the rotation for accelerating expectation computation is unclear.\\n\\nQuestions 1 and 2 detail the concerns mentioned in points 3 and 4 respectively.\", \"questions\": \"1. The definition of counterfactual candidates is well-motivated. However, recalling that the expected counterfactual is a weighted average of $\\\\mathbb{S}_1$ and $\\\\mathbb{S}_2$, the final result seems to deviate from both segments, thereby violating $\\\\mathcal{P}_2$. This suggests that some counterfactuals are strictly better than the one selected. Could the authors provide clarification on how this should be interpreted?\\n2. How does the rotation in Section 5.2 contribute to accelerating the computation of expectation? Given two points $a$ and $b$, let $\\\\mathbb{S}=\\\\lbrace(1-t)a + tb|t\\\\in [0,1]\\\\rbrace$ be the segment connecting them, which is a one-dimensional element regardless of the dimensionality of the feature space. Finding the weighted average of $\\\\mathbb{S}$ involves determining the expected position on the segment, which only depends on the variable $t$. An estimate can be acquired with a univariate Monte-Carlo estimator by interpolating between 0 and 1 for $t$. While the rotation in the paper appears to eliminate estimation variance in the aligned dimensions, it instead concentrates the variance in the final dimension, which is later redistributed to the others during the reverse of the rotations.\\n3. Could the authors elaborate more on sparsity? Line 376 says \\\"the label-irrelevant generative factors are shared ensuring sparsity\\\". According to Appendix.C.1, $z_u$ accounts for only one-fourth of the total latent encoding $z$. With modifications applied to $z_s$, which constitutes 75% of the latent features, the resultant counterfactual seems to deviate from the intended sparsity.\\n4. What do the different variants of $\\\\mathbb{S}_1$ mean? Most of them are marked blackboard bold, with one exception at line 270 which is italicized. 
Some variants differ in the presence of the superscript $\\mathcal{C}$.\\n5. The experimental results are appealing. To support the claim that the human-machine interaction serves as \\\"a training process for the participants\\\", could the authors provide the accumulated accuracies of initial user predictions over the number of seen instances?\\n6. Some parts of the writing can be polished for clarity, for example:\\n - Line 185 states \\\"They propose to **apply to** the latent representation \\u2026\\\" \\u2014 it is unclear what is being applied.\\n - The caption of Figure 6 reads \\u201cLabels are treated as a random variable to **also** sample.\\u201d \\u2014 what does \\u201calso sample\\u201d mean?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to 3rie\", \"comment\": \"**Empirical evidence** We added a quantitative evaluation of our method with FID, COUT and S3 in Appendix C. We compare with the competing approach of [1] as it is the only other counterfactual generating technique which leverages concepts without supervision that we are aware of. We also compare to our method simply returning a point in the segment connecting the instance to explain and the counterfactual class mean for different model confidence values as a means for an ablation study. Our technique has comparable performance to our competitor while being substantially more efficient.\\n\\n\\n\\n**Applicability** Our approach is not model-agnostic. In order to apply our explanatory technique in the latent space the minimum requirement is that the classifier uses a Gaussian mixture loss. The work of [2] shows that this loss can be used to obtain equivalent performance to softmax-based classification scores for a wide variety of benchmarking datasets and CNN architectures. 
Our method can explain any DNN that uses this loss by creating a counterfactual in the latent space; in order to generate explanations in the input space, a decoding model is additionally required. In conclusion, even though our approach does not extend to DNNs with softmax classification layers, the requirement to implement our approach is very simple. Other components of our framework, such as label-relevant/label-irrelevant encoders, try to target specific desiderata of counterfactual explanations such as sparsity, but the overall applicability of our method is not limited and does not imply worse classification performance. \\n\\n\\n**Running times** We added quantitative experiments to Appendix C where we also evaluate running times. We compare our method with the competing approach of [1]. Results show the superior performance of our approach across various architecture complexities. \\n\\n\\n\\n**Related work** We extended the related work to consider more recent and relevant approaches. Our method differs from the mentioned papers because it does not leverage knowledge of a causal graph (a requirement for the other works). In addition, interest in diffusion models' high-quality image generation led to researchers leveraging them for counterfactual explanations. Although such approaches are able to generate realistic counterfactual images, the resulting counterfactuals lack transparency, which is a crucial component of our framework. \\n\\n\\n**Strictly better counterfactual** Our optimization process focuses exclusively on the latent space, meaning its effectiveness in the input space depends on the preservation of distances. While this reliance is a limitation, strong reconstruction and classification performance suggest that the assumption is realistic. If distinct inputs were mapped too closely in the latent space, neither reconstruction nor classification would function effectively. 
We updated the text to make this clearer to future readers (line 245).\\n\\n\\n**Concept labelling** Indeed, learned concepts require a human annotator for labeling. We updated the paper to explicitly mention this in the main text (line 354) and in the dedicated section of the Appendix: Appendix E. \\n\\n**Effect of concepts** In our study, we directly evaluated the approach that includes both the counterfactual image and the descriptive concepts as this is the setting that is most informative for users. To keep the number of experimental conditions as low as possible we did not specifically evaluate the effect of concepts on explanations. \\nHowever, we recognize the importance of this research question and plan to explore it thoroughly in a future journal version of our work.\\n\\n\\n\\n**Expectation along a segment** We do not assume points in the segment $S$ are normally distributed. But $S$ is a collection of points in the space $\\\\mathbb{R}^d$ and points in $\\\\mathbb{R}^d$ follow a normal distribution. Finally, we are interested in computing the expected value of the points that belong to $S$. We updated the paper to make this clearer (line 925).\\n\\nReferences\\n\\n[1] Luss, Ronny, et al. \\\"Leveraging latent features for local explanations.\\\"\\n\\n[2] Wan, Weitao, et al. \\\"Rethinking feature distribution for loss functions in image classification.\\\"\"}
While in the context of our experiment this was not limiting, we explicitly mention that we consider it a very important avenue for future research to improve the applicability of our approach given the promising results our method yields in the interactive setting. More precisely, our technique can be applied to large-scale models. For example, leveraging a latent diffusion model conditioned on the compact latent dimensions of the RAE or the RAE outputs is one of the directions we consider exploring.\\n\\n**Transparency** Without concept supervision, it is impossible to dictate which concepts the model encodes in its latent space. Instead, users infer patterns by explicitly observing the generator's behavior. In our approach (see Appendix E), users analyze latent traversals, observing how changes in a single latent dimension affect the generation while other dimensions remain fixed. This implies that if humans can correctly assign the conceptual changes to the corresponding latent dimensions, no gap should be observed between the generated instances and the human-inferred behaviour. We argue that any gap should rather be ascribed to the hyper-parameter controlling the number of concepts to return together with the explanation. In that regard, if the perturbations are very simple and actually correspond to a single or very few concepts, returning too many concepts could lead to descriptions of the counterfactual changes which are not faithful with respect to the actual changes in the counterfactual image (this is because concepts are presented mentioning exclusively the direction in which they are altered and not the magnitude of change as well). We explicitly mentioned this in Appendix F of our manuscript. A potential solution to this issue, given a default value of concepts to be returned, could be to drop all the concepts whose relevance metric is below a certain threshold. This threshold hyper-parameter could possibly be fine-tuned or user-specified. 
Alternatively, one could implement richer dictionaries taking into account the magnitude of the change at a concept level, although increasing reliance on annotators. For our experiment we decided to keep the number of concepts constant for each image in order to decrease the noise users were subject to during the interaction with the model. This value was set to 3 as most types of cells could be obtained with simple changes to the input image. \\n\\n**Sparsity** We thank the reviewer for the clarification as it allowed us to better understand the doubts regarding our manuscript. We realize that the term sparsity could be a bit misleading in this context. Our goal is to ensure that the counterfactual image and the input image are as similar as possible. To avoid any confusion, we now refer to this property as \\u2018proximity\\u2019 in the manuscript, and we clarified the assumptions behind the optimization of this property. With regard to concepts, even though some might be altered simultaneously to improve proximity of explanations, we specifically designed a concept relevance metric that allows us to infer which changes were most relevant to the counterfactual image. This allows the method to generate sparse explanations in terms of concepts being modified in the counterfactual.\\n\\n\\n*We updated the plot in the appendix by removing two outliers (instances correctly classified by all participants) to present a clearer pattern. From question 8 onward, the behavior expected by the reviewer becomes evident, with earlier differences likely obscured by the experiment's initial stages and task difficulty. The effect may also be mitigated by the limited number of questions and could be a lot more evident with a longer study. We did not go in this direction because the cognitive burden on users worried us and capturing such a phenomenon was not the main scope of our contribution. However, the plot clearly supports the claim of training effects. 
Additionally, the Label and Label+Explanation settings show similar patterns, as the main component of the training process evidently consists in providing users with a ground truth to assign to images they see (the machine prediction is extremely accurate). Large differences cannot be seen because both settings provide this information. In conclusion, this aligns with our claim that training effects were present in both interactive settings.\"}", "{\"comment\": \"My apologies for the late reply.\\n\\nI have checked the rebuttal. I think the related work section has improved, thanks for adding the discussion on identifiability. \\nHowever, while some of the works do indeed require a causal graph, many of the references in the benchmarking paper do not require such background knowledge. I think that only building an \\\"interpretable\\\" latent space (without theoretical guarantees) is insufficient to justify novelty in my opinion. While adding a comparison to [1] is a first step, I don't think the work represents the state of the art in counterfactual explainability (judging from my experience and reading of the benchmarking paper by Melistas et al. mentioned in my review).\\n\\nLooking at the other reviews, I agree with reviewer ```3rie``` that the evidence for the insufficiency of the related methods (of which there are many) is not compelling enough.\", \"my_key_suggestions_to_improve_the_paper_thus_are_as_follows\": [\"Start from state-of-the-art methods and identify deficiencies: I agree with reviewer ```3rie``` on the point that many references and methods are a bit outdated. Instead of relying on classical latent-space models, the authors should turn towards more modern diffusion models etc. I advise the authors to look closely at these methods and uncover what real practical challenges still need to be solved. 
The paper should start with convincing evidence for the insufficiency of the state-of-the-art models.\", \"Communicating limitations: I also realized late while reading that this approach proposes an interpretable model instead of applying a CF generator post-hoc. I think this should be communicated earlier and the performance characteristics should be communicated.\", \"The success of the method stands or falls with the interpretability characteristics of the latent space, for which no theoretical guarantees exist. I don't know if it is a good idea to rely on such a framework in safety-critical applications generally.\", \"While I still cannot recommend acceptance of the manuscript in its current form, I hope that some of these suggestions help the authors to revise their work and resubmit it to a suitable venue.\"]}
While I am satisfied with some of them, I will need some further clarifications before my final decision.\\n\\nAfter the authors positioned their work among some more recent approaches, my main concerns are connected with: the 'concepts without supervision' claim, the limitation to Gaussian-mixture-loss-based models, and its connection to related works.\\n\\n**'concepts without supervision' claim**\\nAdding a specific definition of 'concept' would greatly benefit the paper's contribution, as it is now unclear to me whether the claim about no concept supervision is actually true. Importantly, the method utilizes 'label supervision' (Eq. 2, lines 153, 176) for the Gaussian-mixture-based classification model. Note that the notions of labels and concepts sometimes fully overlap. For example, [CelebA](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) datasets provide labels that identify concepts of faces, such as smile presence or age. Moreover, when following the above understanding of concepts, many current approaches for counterfactual explanations are based on unsupervised generative models (like diffusion models) that do not utilize any labels (contrary to the authors' approach) during training or inference, making them also concept independent. At last, it might be misleading to actually refer to no supervision of concepts, since the authors explicitly claim that human supervision is required in extracting concepts from their approach. Clarifying these points would greatly improve my understanding of the paper's contributions.\\n\\n**Connection to related works**\\nFollowing the authors' response related to diffusion-based approaches for counterfactual explanation generation, I am not convinced that the evaluation is performed fairly. Following the reasoning above, which assumes that methods based on generative models also do not utilize concept supervision, it is unclear to me why these approaches are not compared to the authors' algorithm. 
Are they not applicable to the considered scenario? While the authors mention that these algorithms lack transparency in generated explanations, I cannot find justification for its presence in the proposed method, as transparency is not defined anywhere. Note that these methods are typically evaluated using much larger and more complex datasets than BloodMNIST, e.g., CelebA, CheXpert, ImageNet to mention a few examples. Is the authors' method also applicable to these cases? If yes, how does it compare to the current state-of-the-art? If no, what are the actual reasons for that? Depending on the justification, the main claims of the paper should be properly modified. For example, claiming that user interaction is possible in real-time might be true for BloodMNIST, but will it be generally true for other, larger datasets?\\n\\n**Gaussian-mixture-loss-based models**\\nIn my honest opinion, the paper should mention the above limitation very explicitly, but it does not mention it at all at the moment. As stated, the approach is only applicable to a very specific subclass of DNNs. Moreover, the paper does not show that any 'external' model (even from this subclass) can be actually incorporated in the pipeline in a post-hoc manner, since the introduced framework trains the classification model together with the autoencoder from scratch.\\n\\nPlease note that I truly appreciate the authors' efforts. However, the new comments stem mostly from what I tried to initially convey in the **Questions** section from the review. Referring to them would also greatly help me in making the final decision. At last, please excuse me for providing the answer at the last moment. I will take this into consideration before deciding on my final score, as it obviously limits the discussion.\"}", "{\"comment\": \"Dear reviewer, thank you for the clarification with regard to the outstanding issues. 
Below we tackle the points made:\\n\\n**Scalability** Our counterfactual search process is scalable with respect to architecture depth as it is independent from the architecture size. Also, in the limitation section of our manuscript we explicitly state that given the need for human annotators using restricted latent spaces is very likely needed (line 528). While in the context of our experiment this was not limiting, we explicitly mention that we consider a very important avenue for future research to improve the applicability of our approach given the promising results our method yields in the interactive setting. More precisely, our technique can be applied to large scale models. For example, leveraging a latent diffusion model conditioned on the compact latent dimensions of the RAE or the RAE outputs are some of the directions we consider exploring. It is worth mentioning that, as our approach is centered around interpretable concepts, such larger models are required to support concept extraction. This may not always be possible as the reviewer correctly notices and we specify this in line 531 of our paper. In addition, in order to obtain real-time generation, the underlying generative model should guarantee fast generation. It is worth noticing that if this is the case when leveraging conditional LDPMs, we can still optimize counterfactual search directly in the latent space. This allows us to generate counterfactuals with a single conditioned generation of the LDPM, ensuring an efficient explanatory mechanism. \\n\\n**Generalization to independent models** The focus of our proposal is indeed a framework for interactive classification, in which a machine learning model is trained to perform classification and be amenable to counterfactual generation. Our approach is thus not a general purpose post-hoc counterfactual generation method. 
The post-hoc approaches we mention in the related work are meant to clarify why existing solutions are not appropriate for our interactive classification setting, namely the lack of real-time performance and concept-based explanations. While adapting an external model to generate interpretable counterfactuals should be feasible in principle (keeping in mind the concerns on real-time execution and quality of the concepts), this is not the main focus of our contribution. We have better clarified the focus of our work in the abstract and introduction. \\n\\n**Labeled data requirement** Our approach leverages class labels to solve the classification task (e.g. cell type). Most real-world datasets provide exclusively class-label information. CelebA, on the other hand, provides multiple labels per image which refer to the presence of a specific attribute or concept (e.g. glasses, smile, wrinkles\\u2026). These can be used to guide the learned representations of a model (the $z_i$) to encode specific concepts. Our approach encodes concepts without any supervision, i.e., it does not leverage the above-mentioned information about the presence or absence of certain attributes (concepts). In conclusion, our model is supervised at the label level (e.g. class information is needed to distinguish between cell types) but unsupervised at the concept level (no information about the attributes of the image is required). \\n\\nPlease let us know if there is any additional information you require us to provide, and thank you again for engaging in the discussion and for the helpful feedback\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the discussion period concludes tomorrow, we would appreciate your feedback on our responses to your comments. 
Please let us know if our answers resolved your concerns or if there are additional points that need addressing.\\n \\nThank you,\\n\\nThe Authors\"}", "{\"summary\": \"The paper generates interpretable counterfactual images in real-time leveraging a disentangled\\nregularized autoencoder for labels and instances, making it more accessible for a HITL approach.\\nThe approach appears to be theoretically rigorous, using disentanglement and latent\\nspace regularization efficiently for counterfactual sampling. The method is theoretically\\nrobust, with well-founded training and selection mechanisms supported by rigorous\\nyet somewhat obvious proofs (so the theoretical contribution is limited).\\nWhile experimental methods are robust, including additional datasets would enhance\\nstatistical generalizability and validate findings. The paper is well-written, with effective\\nvisuals and a logical flow. Some additional annotations on the figures would make them more accessible.\\nThe framework has possibly some limited potential to impact AI explainability in real-time decision-making,\\nparticularly in human-centered applications, but some points have yet to be clarified (see points below).\\nIt addresses a gap by making counterfactual explanations feasible in interactive settings.\\nReal-world deployment could face challenges due to computational demands and the\\ncomplex training setup. 
Furthermore, the dependency on a well-defined latent space\\nmight limit the framework\\u2019s adaptability to highly complex or noisy data, which might\\nrestrict real-world deployment.\", \"some_suggestions_for_improvement\": \"-Expand the empirical evaluation with a broader range of datasets to improve robustness\\nand generalizability.\\n-Clarify the \\\"100% validity\\\" claim with a more nuanced discussion of potential limitations.\\n(might be redundant if the math already proves 100% validity).\\n-Conduct quantitative comparisons with other methods to offer a clearer perspective on\\nrelative strengths and weaknesses.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The approach has some potential to enhance counterfactual generation by balancing efficiency and interpretability, both of which are essential for real-time, human-centered applications. A user study indicates that the generated counterfactuals may potentially enhance\\nhuman task performance, which would be valuable in practical settings. The methodology is well-structured and includes clear descriptions of each step in the counterfactual generation pipeline. It seems that the user study is scientifically valid and includes appropriate metrics. With a generation time of about 1.2 seconds, the framework seems ready for real-time, interactive AI applications. The authors claim that their framework is the first of this kind.\", \"weaknesses\": \"The evaluation is limited to a single dataset (BloodMNIST) and task, which may impact the method's broader applicability. A wider evaluation would give a better sense of its utility across different contexts. The claim of \\u201c100% validity\\u201d might be overconfident; high-dimensional edge cases might present challenges to this level of accuracy.\", \"questions\": \"-Could the authors clarify how the \\u201cassociated concepts\\u201d at line 348 are identified? 
Does the framework directly offer human-comprehensible concepts for latent features? If so, how is this achieved?\\n\\n-What does \\u201cunsupervised\\u201d in line 406 mean? It seems that the training of the framework requires massive labeled data, which contradicts the claim of being \\u201cunsupervised\\u201d.\\n\\n-Could the authors elaborate more on why the decoder $\\\\mathrm{DEC}$ is not suitable for generation? Both $\\\\mathrm{ENC}_s$ and $\\\\mathrm{ENC}_u$ learn distributions for the latent space, so sampling a point from a specific Gaussian should generate a synthetic instance.\\n\\n-The denoising serves to shape the latent space structure; couldn\\u2019t it be applied during the training of the autoencoder? The decoder should be able to handle noise at an appropriate level, which makes the auxiliary model somewhat redundant.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Y1ZA\", \"comment\": \"**Identity breach in screenshot in appendix** We apologize for the inconvenience; we did not realize this was part of the screenshot. We removed it from the revised version of the manuscript.\\n\\n**Minor points** We addressed the reviewer's points and made the requested changes.\\n\\n**Related work** We extended the related work to consider more recent and relevant approaches. Our method differs from the mentioned papers because it does not leverage knowledge of a causal graph (a requirement for the other works). In addition, interest in diffusion models' high-quality image generation has led researchers to leverage them for counterfactual explanations. Although such approaches are able to generate realistic counterfactual images, the resulting explanations lack transparency, which is a crucial component of our framework. \\n\\n\\n**Running times** We added quantitative experiments to Appendix C where we also evaluate running times. 
We compare with the competing approach of [1] as it is the only other counterfactual generating technique that we are aware of which leverages concepts without supervision. Results show our approach's superior performance across various architecture complexities. \\n\\n\\n**Accuracy** The work of [2] shows that this loss can be used to obtain classification performance equivalent to softmax-based classification scores for a variety of benchmarking datasets and CNN architectures. In addition, the choice of a very simple architecture is due to the very simple input domain, but much more complex architectures can be leveraged for our framework.\\n\\n**Evaluation** We added to Appendix C a quantitative evaluation of our method with FID, COUT and S3. We compare with the approach of [1] and with returning an explanation consisting of the point in the segment connecting the instance to explain and the counterfactual class mean for a given model confidence value. We experiment with the confidence values of 0.6, 0.8 and 0.9 as a means of ablation. \\n\\n**Image quality** Images are blurry due to the reconstruction performance of the model, which must leverage restricted latent spaces because of the concept extraction technique. We argue that, even though blurry, the counterfactuals were interpretable and actionable, as the user study results show. \\n\\n\\n**Disentanglement** Label disentanglement is what is required for the approach to correctly generate valid counterfactuals. Latent disentanglement is also important in our framework, because it allows the method to extract clean and independent concepts that greatly improve the interpretability of our explanatory technique. We are aware of the lack of theoretical guarantees for unsupervised latent disentanglement. Our approach simply encourages it via latent regularization. 
We updated the manuscript to clearly distinguish between label and latent disentanglement, and we explicitly mention in the related work section the negative theoretical results about unsupervised latent disentanglement.\\n\\n\\n**User study** The number of 50 participants is not divided over the 3 conditions. Each condition was instead studied with 50 participants, for a total of 150 participants in the study. Furthermore, the study does not require IRB approval in line with the ethical guidelines of our institution. Specifically, we assessed the risk of our study using a survey designed by our institution for this purpose. The assessment yielded a minimal risk level, which confirmed that IRB approval was not necessary.\\n\\n**Technical derivation** We added the mentioned technical derivation in Appendix B, where we show how the weights are derived. We show how to carry out the expected value computation and why we implement our methodology to estimate it. \\n\\nReferences\\n\\n[1] Luss, Ronny, et al. \\\"Leveraging latent features for local explanations.\\\"\\n\\n[2] Wan, Weitao, et al. \\\"Rethinking feature distribution for loss functions in image classification.\\\"\"}", "{\"title\": \"Global response\", \"comment\": \"We would like to thank the reviewers for their insightful feedback. We are pleased that our efforts in formalizing a method for human-AI decision-making and investigating the impact of explanations on real users were well-received. We have addressed the limitations mentioned by the reviewers through detailed responses in individual comments and updated our manuscript, specifically highlighting the changes made. We look forward to engaging further in discussions on this topic.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the discussion period concludes tomorrow, we would appreciate your feedback on our responses to your comments. 
Please let us know if our answers resolved your concerns or if there are additional points that need addressing.\\n \\nThank you,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for the detailed response to address my concerns.\\nThe clarification on rotations brings insight into the computational efficiency gain, which should be interpreted together with the multivariate gaussian distribution in the latent space. \\nWhile the clarifications are appreciated, some of the concerns persist.\\n\\n**Transparency** The authors claim that their approach tackles the transparency issue of the generator by showing human-understandable concepts. This claim is not entirely convincing. \\nFirst, the derivation of the concepts requires additional efforts from human experts, raising questions about the scalability. Second, there seems to be a gap between the generator's actual behavior and the patterns inferred by humans through observations. \\nIs there any mechanism to guarantee the derivation of faithful and truth-telling concepts?\\n\\n**Sparsity**: I could not find a formal definition of sparsity in the paper, and therefore assume the common understanding in the literature, i.e. sparsity implies altering a minimal subset of features (but please let me know if the context is different in this paper). \\nGiven the definition, could the authors elaborate on how their approach ensures sparsity? The computation of the expected counterfactual balances likeliness and closeness (in terms of latent space distance), but its connection to sparsity remains unclear. If the segment $\\\\mathbb{S}$ is not aligned with the latent space axis (which is very likely to happen in a high-dimensional space), modifications will apply to all units in $z_s$ to reach the expected counterfactual. 
Since each latent dimension corresponds to one concept, this suggests that deriving the final counterfactual involves altering all concepts, which contradicts the definition of sparsity.\\n\\nAlso, I appreciate the authors' effort in visualizing users' cumulative errors in Figure 13, which brings further questions regarding the **training effect**. From my perspective, the training effect should manifest as users improving their performance over time due to the additional information provided, resulting in (relatively) concentrated errors in earlier stages of a task. \\nThis would presumably lead to a curve above the red line in Figure 13, indicating decreasing cumulative errors as users adapt. \\nHowever, the presented data does not align with this expectation.\\n\\nWhile taking the split at question 13 may support the authors' claim of a training effect, it is unclear why a burst in errors occurs after a certain period of training (between questions 8 and 13).\\nFurthermore, the changing patterns between the \\\"Label\\\" and \\\"Label+Explanation\\\" conditions appear highly similar, raising questions about the origin of the claimed training effect.\"}", "{\"summary\": \"This paper focuses on the topic of generating visual counterfactual explanations for predictive models in computer vision. The proposed method is based on a Denoising Disentangled Regularized Autoencoder trained in a two stage manner. The first stage deterministically trains an encoder-decoder architecture with the encoder split into two separate parts responsible for label-based and label-independent information. The second stage introduces stochasticity to the learning process and, after freezing the previously learned weights, trains an additional autoencoder on the combined latents. 
The introduced architecture is utilized for the counterfactual explanation generation by encoding the factual image into its two-part latent representation, modifying the label-relevant part to identify candidate counterfactuals, computing the `expected counterfactual', extracting the most important concepts based on a proposed metric, and decoding the modified label-based latent together with label-irrelevant part to obtain the explanation. The work explicitly defines the properties of the mentioned counterfactual candidates and develops a theoretical result to more efficiently search for the best candidate. The proposed method is evaluated through a user study based on the BloodMNIST dataset with detailed analysis of the obtained results, showing how the proposed approach can guide humans to improved decision-making.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"S1. The authors provide a clear introduction and motivation to the problem addressed in their work.\\n\\nS2. The method's overview clearly explains how specific components of the proposed architecture aim to address the mentioned limitations of previous works. The description of each component of the optimized loss is properly described. Overall, despite the complexity of the framework, the authors succeed in clearly communicating its inner workings using the attached figures and pseudocode.\\n\\nS3. In addition to the practical side of the framework (two-step training procedure), the authors propose an interesting theoretical result to efficiently search for the `optimal' counterfactual candidate in the model's latent space. I also enjoyed the introduced relevance scores for concept selection, Fig. 2 that nicely summarizes the candidate selection procedure and the attached pseudocode in Algorithm 1.\\n\\nS4. The experimental evaluation is based on a properly designed user study with detailed analysis of the obtained results. 
Interesting research questions were proposed and the provided results were very well processed to provide principled answers. \\n\\nS5. The overall writing is clear and properly redacted, except for some small typos, e.g., $S_1$ in line 270.\\n\\nS6. Source code is provided as an anonymised repository to ensure reproducibility.\", \"weaknesses\": \"W1. Both the abstract and the introduction mention specific limitations of previous works, specifically: efficiency, likeliness, interpretability, validity and sparsity. While the paper provides motivation on how each of the framework's components addresses those, I must argue that the paper struggles to provide empirical evidence for these claims. While some quantitative results are provided (e.g., average generation time in line 408), they are mainly focused on the influence of the method on human decision-making and do not address the limitations. Also, no comparison with previous work is given. The above measures could be quantified using, e.g., some variation of FID [5] and S3 [2,6] (likeliness, validity), and COUT [7] for sparsity.\\n\\nW2. While the paper is well-written, I got confused with how the authors position their work in the XAI domain, its connection to counterfactual explanations for deep learning models as post-hoc explanations and their required properties. To the best of my understanding, the proposed method is not able to provide post-hoc explanations for any predictive model other than the Gaussian-mixture-based classifier trained together with other components. This is problematic since the described limitations are mentioned as problems of generally applicable methods and their relationship with the authors' work is unclear.\\n\\nW3. The authors mention real-time generation as one of the main contributions of their method. 
However, the paper lacks quantitative results comparing the proposed approach to any of the previous works in this context, making it unclear whether the improvements are only incremental or actually ground-breaking. While the method might indeed be an effective real-time generator, it is not clear how this relates to previous works which address a more general class of models (see W2.).\\n\\nW4. The paper mainly cites works from the 2018-2022 period. While I do not consider myself a highly-educated expert in either contrastive explanations, deterministic regularized autoencoders, or latent disentanglement, I cannot escape the feeling that there are more recent papers that could be mentioned in these contexts. This is not an explicit weakness of the paper, but I would be happy if the authors could address why more recent works do not appear in the literature section (has the field somehow slowed down or has the community lost interest in it?). Another concern of mine is the specific connection of this work to the topic of contrastive explanations - I must argue that mentioning specific works on visual counterfactual explanations (VCEs) would better reflect the paper's connection to the field. In this case, the authors fail to mention a large amount of work combining generative models and VCEs from recent years, e.g. [1,2,3,4] to name a few. How does the authors' paper place itself in the context of these works?\\n\\nW5. From a theoretical point of view, the paper often provides very strong claims like `the non-existence of a strictly better counterfactual' (lines 259-260), `such counterfactual intrinsically optimizes the trade-off between the likelihood of the explanation and the distance\\nfrom the instance to explain` (lines 281-283) which are true only when assuming that the Gaussian mixture model perfectly preserves the relationships between the samples from the original data distribution. 
This should be stated explicitly in the paper and I would like the authors to further elaborate on the assumptions and limitations of this approach.\\n\\nW6. Another contribution mentioned by the authors is the possibility of extracting interpretable concepts associated with the latent dimensions of the proposed architecture. It should be stated clearly that the identification of these concepts requires a human expert who will properly label them. For example, to the best of my understanding, examples like those in Fig. 10 include descriptions of concepts that were first labelled by the expert. In terms of experiments, I think that the influence of these concepts on human decision-making has not been properly studied, making it difficult to disentangle their contributions to the overall process.\\n\\nW7. To maintain the review's structure, I will include my question regarding the theoretical derivations here. I must stress that these are very detailed and I was generally satisfied with them. My concern is the phrasing from line 850, where it is stated that `the expected value, according to an isotropic Gaussian, of the elements in a segment $S$ (...)`. Could the authors clarify whether this sentence assumes that elements in $S$ follow an isotropic Gaussian distribution? 
To the best of my understanding, $S$ forms a bounded interval since $S = \\\\\\\\{ (1 - t) \\\\cdot a + t \\\\cdot b \\\\mid t \\\\in [0, 1] \\\\\\\\}$, hence its elements cannot follow a Gaussian distribution which assumes infinite support.\\n\\n[1] Jeanneret et al., Diffusion Models For Counterfactual Explanations, ACCV 2022\\n\\n[2] Jeanneret et al., Adversarial Counterfactual Visual Explanations, CVPR 2023\\n\\n[3] Augustin et al., Diffusion Visual Counterfactual Explanations, NeurIPS 2022\\n\\n[4] Boreiko et al., Sparse visual counterfactual explanations in image space, DAGM 2022\\n\\n[5] Heusel et al., GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, NIPS 2017\\n\\n[6] Chen et al., Exploring simple siamese representation learning, CVPR 2021\\n\\n[7] Khorram and Fuxin, Cycle-Consistent Counterfactuals by Latent Transformations, CVPR 2022\", \"questions\": \"I would be happy if the authors could address each specific weakness mentioned above. In general, I would say that the paper has great potential but mentions contributions that are clearly not addressed. My suggestion for the authors would be to refocus the text on the human-machine interaction, since this is the most promising result, and deviate from the counterfactual explanation domain. It is difficult to be convinced that the proposed framework is in fact a counterfactual explanation generator if it only allows to provide them for a model trained inherently in the framework which is a very simple Gaussian-mixture-based classifier. Both the experimental design and theoretical derivations are very elegant and I encourage the authors to just focus on those with an extended empirical evaluation. 
Overall, the paper shows great promise that it might be worth training the framework from scratch for each new problem, since it may greatly improve the understanding of complex domains by inexperienced users.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe would like to follow up to see if our response addresses your concerns or if you have further questions. We would really appreciate the opportunity to discuss this further and know whether our response has already addressed your concerns. Thank you again!\"}", "{\"title\": \"Response to egRZ\", \"comment\": \"**Transparency** The black-box nature of the classifier hinders transparency. We explicitly tackle the issue leveraging human-understandable concepts. This approach allows users to understand when a machine makes a choice for a \\u2018right\\u2019 or \\u2018wrong\\u2019 reason as concepts are explicitly stated in the explanation. Evidence of this is further supported by our user study.\\n\\n\\n**Flexibility** Our approach is not model-agnostic. In order to apply our explanatory technique in the latent space the minimum requirement is that the classifier uses a gaussian-mixture loss. The work of [1] shows that this loss can be used to obtain equivalent performance to softmax based classification scores for a wide variety of benchmarking datasets and CNN architectures. Since one can explain any DNN that uses this loss with our method creating a counterfactual in the latent-space, in order to generate explanations in the input space a decoding model reconstruction is required. In conclusion, even though our approach does not extend to DNN with softmax classification layers, the requirement to implement our approach is very simple. 
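For concreteness, the Gaussian-mixture classification loss discussed above (cross-entropy over Gaussian class posteriors plus a likelihood term pulling the feature toward its class mean) can be sketched as follows. This is an illustrative simplification with identity covariances and uniform class priors, not the authors' exact implementation; the function name and the `lam` weight are assumptions:

```python
import numpy as np

def gaussian_mixture_loss(z, label, means, lam=0.1):
    """Simplified Gaussian-mixture loss: negative log posterior of the true class
    (class scores are Gaussian densities centered at per-class means) plus a
    likelihood regularizer that pulls the feature z toward its class mean."""
    d2 = np.sum((means - z[None, :]) ** 2, axis=1)  # squared distance to each class mean
    logits = -0.5 * d2                              # log of the unnormalized class density
    lse = logits.max() + np.log(np.exp(logits - logits.max()).sum())  # stable log-sum-exp
    log_posterior = logits - lse                    # log-softmax over classes
    return -log_posterior[label] + lam * 0.5 * d2[label]
```

Because the class scores are densities around per-class means, the decision boundary lives directly in the latent space, which is what makes the latent-space counterfactual search described above possible.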
Other components of our framework, such as the label-relevant/label-irrelevant encoders, try to target specific desiderata of counterfactual explanations, such as sparsity or validity, but the overall applicability of our method is not limited and does not imply worse classification performance. \\n \\n\\n**$\\\\mathcal{P}_2$ violations** We are aware of this and we added a proof in Appendix B that the deviation of the expected counterfactual from $\\\\mathcal{P}_2$ is bounded and this error is negligible. \\n\\n\\n**Sparsity** The role of $z_u$ is to encode generative factors that are shared across labels and therefore should be fixed when computing a counterfactual for a given user-specified label. Keeping part of the encoding unchanged improves the sparsity of the explanation. The reviewer rightfully notices that this is only 25% of the whole latent encoding and does not suffice to ensure sparse explanations. For this reason, we optimize the trade-off between sparsity and likeliness through the computation of the expected counterfactual in the latent space. Overall, sparsity is therefore tackled in a two-fold manner: by keeping part of the latent encoding fixed and by explicitly optimizing for it in the formulation of our counterfactual-search problem.\\n\\n\\n**Versions of $S$** We apologize for the typos. We updated the text to highlight that the presence of the superscript indicates that property $\\\\mathcal{P}_1$ is also satisfied (lines 273-281). \\n\\n\\n**Training effects** We added a new section in Appendix G to analyze this phenomenon in detail. Instead of plotting cumulative accuracies, we chose to plot cumulative errors, as this provides a clearer view of where and how frequently errors occurred. 
The results align with our claim, highlighting the presence of training effects among participants, since most errors are made in the early stage of the user study.\\n\\n\\n**Application of the Gaussian-mixture loss** The object of the sentence is the Gaussian-mixture loss function that we introduced in the following line. We updated the text to make sure this is clearer (lines 177-180).\\n\\n**Sampling labels** We updated the paper to make this clearer (line 1294). More precisely, we refer to a two-step sampling mechanism. First, labels are sampled according to a distribution (e.g. multinomial), and then images are drawn from the sampled label's conditional distribution. \\n\\n\\n**Benefit of rotations** It is true that $t$ is a one-dimensional parameter, but the suggestion still does not compute a univariate estimate but a multivariate one, as densities need to be computed according to multivariate distributions. Nonetheless, one can estimate the expected position by interpolating and accumulating the densities at different values of $t$ so as to approximate the integral. The difference between this approach and the one we suggest is that the reviewer\\u2019s approach requires a hyperparameter \\u2018step\\u2019 to evaluate the densities, and the number of \\u2018steps\\u2019 needed varies with the length of the segment. Overall, the accuracy of the estimate and the performance of this method will depend on the length of this segment, which can vary largely according to two factors: the margin of the classifier (large-margin implementations may be used as in [1]) and the class asked by the user (if the user asks for a counterfactual for a class that is distant in the latent space from the original instance). In conclusion, our approach guarantees competitive running times which are independent of the decision boundary learned by the model and of the counterfactual query. The suggested approach's running times may instead be susceptible to significant variance. 
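To make the step-based alternative discussed above concrete, here is a minimal sketch: interpolate along the segment, accumulate isotropic-Gaussian densities, and take the density-weighted average. The function name and the `n_steps` hyperparameter (the 'step' hyperparameter discussed above, whose required size grows with segment length) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def expected_point_on_segment(a, b, mean, n_steps=100):
    """Estimate the expected position on the segment {(1-t)*a + t*b, t in [0, 1]}
    under an isotropic Gaussian centered at `mean`, by evaluating unnormalized
    densities at interpolated points and accumulating them (numerical integration)."""
    ts = np.linspace(0.0, 1.0, n_steps)
    points = (1 - ts)[:, None] * a[None, :] + ts[:, None] * b[None, :]
    # Isotropic Gaussian: the density depends only on the squared distance to the mean.
    log_w = -0.5 * np.sum((points - mean[None, :]) ** 2, axis=1)
    w = np.exp(log_w - log_w.max())  # subtract the max for numerical stability
    w /= w.sum()
    return w @ points  # density-weighted average of the interpolated points
```

As noted above, the accuracy of this estimate depends on `n_steps` relative to the segment length, which is the dependence a closed-form computation avoids.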
\\n\\nReferences\\n\\n[1] Luss, Ronny, et al. \\\"Leveraging latent features for local explanations.\\\"\\n\\n[2] Wan, Weitao, et al. \\\"Rethinking feature distribution for loss functions in image classification.\\\"\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe would like to follow up to see if our response addresses your concerns or if you have further questions. We would really appreciate the opportunity to discuss this further and to know whether our response has already addressed your concerns. Thank you again!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the discussion period concludes tomorrow, we would appreciate your feedback on our responses to your comments. Please let us know if our answers resolved your concerns or if there are additional points that need addressing.\\n \\nThank you,\\n\\nThe Authors\"}", "{\"title\": \"Response to vmA4\", \"comment\": \"**Limited evaluation** We added to Appendix C a more thorough quantitative evaluation of our approach. We compare with the competing approach of [1] as it is the only other counterfactual generating technique which leverages concepts without supervision that we are aware of. Our technique has performance comparable to our competitor while being substantially more efficient.\\n\\n\\n**100% validity** We refer to validity as the explanation being classified by the model as the class asked by the user. In that regard, counterfactual candidates are valid by definition, as only points on the side of the decision boundary associated with the query class are considered. To further strengthen this, expectations are computed by sampling from the conditional counterfactual label distribution, making it impossible to obtain explanations that are not predicted as the query class. 
High-dimensional latent spaces are not considered due to the concept-extraction mechanism which relies on compact latent spaces. \n\n\n**Concept extraction** The model learns latent representations which can be associated with interpretable concepts via latent traversal. In order to obtain high-quality concepts, latent disentanglement is needed. We encourage this via latent regularization. We modified the related work to make this clearer and expand on the concept extraction technique in Appendix E. \n\n**Unsupervised concepts** Unsupervised refers to the concepts, which are not learned with supervision. Classification is supervised, as the reviewer correctly notices. We rephrased to make this clearer (line 415).\n\n\n**Decoder generation** This is because with increasing latent dimensions the densities of the points vanish. This implies that, in order to sample, shaping data according to a distribution is not sufficient. The model needs to additionally learn a \u2018smooth\u2019 latent space, which is achieved with noise addition. Please refer to [2] for more details.\n\n\n**Auxiliary model** The noise injection mechanism is used to \u2018smooth\u2019 the latent space, as it is already shaped according to a Gaussian distribution in the deterministic version of the model. Since our concept extraction technique relies on compact latent spaces, which causes a high loss of information after encoding, handling noise and reconstruction simultaneously can be difficult for the decoder. With the suggested approach, the decoder focuses only on reconstruction in the first stage, and then \u2018smooth\u2019 representations are induced with an auxiliary model, helping the decoder handle the noise and improving reconstruction quality with respect to the noise injection mechanism of the VAE. \n\n\n\n\nReferences\n\n[1] Luss, Ronny, et al. \\\"Leveraging latent features for local explanations.\\\"\n\n[2] Ghosh, Partha, et al. 
\\\"From variational to deterministic autoencoders.\\\"\"}" ] }
9TMbdO870O
H2IL-MBOM: A Hierarchical World Model Integrating Intent and Latent Strategy as Opponent Modeling in Multi-UAV Game
[ "Jiaming Cheng", "Ni Li", "Ruiguang Hu", "Yaning Wang" ]
In the mixed cooperative-competitive scenario, the uncertain decisions of agents on both sides not only render learning non-stationary but also pose a threat to each other's security. Existing methods either predict policy beliefs based on opponents' interactive actions, goals, and rewards or predict trajectories and intents solely from local historical observations. However, such private information is unavailable, and these methods neglect the underlying dynamics of the environment and the relationships among intentions, latent strategies, actions, and trajectories for both sides. To address these challenges, we propose a Hierarchical Interactive Intent-Latent-Strategy-Aware World Model based Opponent Model (H2IL-MBOM) and the Mutual Self-Observed Adversary Reasoning PPO (MSOAR-PPO) to enable both parties to dynamically and interactively predict multiple intentions and latent strategies, along with their trajectories, based on self-observation. Concretely, the high-level world model fuses observations regarding opponents with multiple learnable intention queries to anticipate future intentions and trajectories of opponents, and incorporates the anticipated intentions into the low-level world model to infer how opponents' latent strategies react and how they influence the trajectories of cooperative agents. We validate the effectiveness of the method and demonstrate its superior performance through comparisons with state-of-the-art model-free reinforcement learning and opponent modeling methods in more challenging settings involving multi-agent close-range air-combat environments with missiles.
[ "Multi-UAV Game", "Opponent modeling", "World model", "Multi-agent Reinforcement Learning" ]
https://openreview.net/pdf?id=9TMbdO870O
https://openreview.net/forum?id=9TMbdO870O
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tskNwdqhV3", "sH2tj8hUbE", "Tm5OGHJ0eB", "LkogGgbDPN" ], "note_type": [ "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730652147445, 1731110458681, 1731572135248, 1730459224127 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3710/Reviewer_QuNh" ], [ "ICLR.cc/2025/Conference/Submission3710/Reviewer_P9KM" ], [ "ICLR.cc/2025/Conference/Submission3710/Authors" ], [ "ICLR.cc/2025/Conference/Submission3710/Reviewer_vYGG" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces H2IL-MBOM, a hierarchical world model that integrates intent and latent strategy for opponent modeling in multi-UAV games. It addresses challenges in mixed cooperative-competitive scenarios by enabling dynamic prediction of opponents' intentions and strategies. The proposed MSOAR-PPO algorithm allows for real-time inference of adversaries' strategies and intentions, facilitating rapid adaptation to changes in opponents' behaviors. The method's effectiveness is demonstrated through comparisons with state-of-the-art methods in multi-agent air-combat simulations, showing superior performance and generalization ability. The paper concludes that H2IL-MBOM enhances decision-making in complex, dynamic environments by accurately capturing opponents' mental states and their evolving strategies.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed method demonstrates superior performance when compared to state-of-the-art model-free reinforcement learning and opponent modeling methods. It effectively captures the changing behavior patterns of opponents and exhibits strong generalization capabilities in multi-agent close-range air-combat environments with missiles.\", \"The H2IL-MBOM, coupled with the MSOAR-PPO algorithm, enables dynamic and interactive prediction of multiple intentions and latent strategies. 
This allows for real-time adaptation to changes in opponents' intentions and strategies, addressing the non-stationarity issue in multi-agent interactions and enhancing decision-making processes.\"], \"weaknesses\": [\"Lack of novelty. Modeling others and world dynamics for multi-agent reinforcement learning has been widely explored in previous works [1,2,3]. It is necessary to justify the novelty of the proposed hierarchical framework. Although previous works do not apply to the multi-UAV game, I cannot see any additional challenge introduced in the game.\", \"Most of the baselines are out-of-date, e.g., MADDPG, MAPPO. It is necessary to compare against stronger baselines, i.e., the SOTA opponent modeling methods that were introduced for general multi-agent games.\", \"Generalization. The generalization of the learned model to different numbers of agents/opponents and unseen behaviors during test time is not evaluated. The paper primarily focuses on air-combat scenarios. It is not clear how well the proposed methods would generalize to other types of multi-agent environments with different dynamics and objectives.\", \"The hierarchical model proposed is complex, which could limit its scalability and applicability.\"], \"references\": \"[1] Proactive Multi-Camera Collaboration for 3D Human Pose Estimation, ICLR 2023\\n\\n[2] Fast Peer Adaptation with Context-aware Exploration, ICML 2024\\n\\n[3] Greedy when sure and conservative when uncertain about the opponents, ICML 2022\", \"questions\": [\"How can the framework be extended to handle visual observations for real-world applications?\", \"Can you show some videos of the simulation and the learned policy?\", \"Is the model robust to different scales of the agent population, e.g. 10 vs. 
10?\"], \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"Military Applications and Escalation: The model could be used to enhance military drone technologies, potentially leading to more efficient and lethal autonomous weapon systems. This raises concerns about the escalation of armed conflict and the dehumanization of warfare.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a multi-agent model-based reinforcement learning framework for close-range air combat. Compared to prior work, this paper employs a more realistic observation model in which agents cannot observe private state information from other agents. The main contribution of the paper is a data-driven two-level latent variable model. The high-level model learns a latent space for \\\"intentions,\\\" and the low-level model learns another for \\\"strategies.\\\" The forward world model consists of models for intentions and strategies and how they affect future states/observations. These models are parameterized by Transformers, similar to prior work on TSSM.\\n\\nThe authors employ a self-play setting to evaluate the RL agent performance in a simulated air combat environment. The main results demonstrate that the proposed method achieves higher rewards than relevant model-free and model-based RL baselines. 
The results suggest that the novel hierarchical modeling approach helps more accurately predict the interleaving dynamics in a multi-agent environment.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The main algorithm is a sophisticated approach to solving a challenging, practical multi-agent control problem.\", \"The authors include in-depth derivation and detailed algorithms in the appendix.\", \"The algorithm outperforms various model-free and model-based RL baselines.\"], \"weaknesses\": [\"The presentation of the paper needs improvement:\", \"The paper's key contribution is modeling the dynamics of \\\"intentions\\\" and \\\"strategies,\\\" but I don't see a clear definition. Are they just two generic latent spaces the authors have assigned names to? What is a mental state?\", \"The paper is swamped with rather random-looking abbreviations. These don't flow well in sentences, making the method section difficult to follow.\", \"The experiment settings are not communicated clearly (see questions).\", \"The result figures are noisy. The authors should consider running multiple seeds to visualize the average trend. Also, the text in the figures is too small, and the captions are not informative at times.\", \"Equation 1 is outside the paper margin.\", \"Overall, I often find it hard to distinguish if a statement is a motivation, a hypothesis, or some standard definition from prior work (e.g., section 3.1).\"], \"questions\": [\"How are the plotted rewards computed? My understanding is that the authors use self-play during training. Then, how is the policy performance evaluated? Are the baseline methods compared on the same opponent team?\", \"Does each agent make decisions independently? I don't think the MDP formulation is appropriate here because not everything is observable. 
Also, the MDP seems to describe the entire simulation state but not from individual agents' point of view.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"Respond to reviewer 1\\n1. The definitions of intentions and latent strategies are presented in lines 207-209: \\\"the opponent's evolving intention directly reflects the changes in the opponents' trajectories, while their evolving strategy influences the trajectories of the alliance agents\\\". The definition of mental state is detailed in line 43, and we suggest that the reviewer carefully read the paragraphs in lines 42-63, 98-116 and 191-215. The reviewer seems to be unfamiliar with the relevant concepts and this field.\\n2. As the sentences in lines 263 and 356-357 say, alliances and opponents are equipped with the same H2IL-MBOM, and they learn independently **without adopting our old strategy through self-play**. The pseudocode in line 707 can also verify this. And in the experiment, we also carefully described our experimental setup, as lines 407-411 state, \\\"It is notifying that we use two,,,,,,, one is where, ,,,,,,, and the other is,,,,,,\\\". Moreover, the self-play method was compared in Figure 2 (d).\\n3. All methods are evaluated against the same group of opponents, and the reward is the average episode return of all agents on the same team.\\n4. Our MSOAR-PPO is based on the IPPO method, which is different from MAPPO and IPPO. Because the observations of opponents cannot be accessed in reality, in the critic network we use neither global observations nor only each agent's own local observations, but instead use observable teammates' observations. 
Additionally, in the actor network, we combine local observations with the inferred intentions and strategies to make decisions. **And our input is not the state, but rather local observations relative to the state of our teammates and opponents within a limited observation range.** In other words, the assumption of global observations for the critic network in MAPPO does not match the actual situation. It seems that the reviewers are not very familiar with IPPO, MDP, and CTDE, especially the relationship between IPPO and MDP. \\n\\nRespond to reviewer 2\\n1. The references 1-3 cited first are not quite the same as what we are concerned about, especially the mutual inference of multiple agents between two teams.\\n2. The challenge of this article has been elaborated in detail in lines 42-63 of the introduction.\\n3. We validate the effectiveness of opponent modeling methods such as Rommeo, PR2, TDOM-AC, and AORPO in Figure 2, and there are few opponent modeling methods for multi-agent intent inference. In some articles, although it is a multi-agent setting, other agents are mainly treated as opponents, and the interested agent is used to reason about these opponents, rather than interactive reasoning between teammates for all agents. And our method takes into account that the opponent has the same reasoning ability, but it is not simply a self-play approach. In addition, as lines 46-58 say, other methods either use the opponent's private information as label data or do not consider the continuous mutual influence of the intentions, strategies, and trajectories of both parties.\\n4. Real-world applications are also discussed in the limitations section of the appendices, and the visualized trajectories can be found in Section A13 of the appendices.\\n5. The generalization experiments for 10 vs. 10 are presented in the \\\"testing results\\\" section of the appendices. \\n\\nRespond to reviewer 3\\n\\n1. 
As the title suggests, we are concerned with multi-drone game tasks, and articles on intent inference in the field of autonomous driving would likewise not be validated in football experiments. This additional requirement for irrelevant experiments seems unfair to us. Moreover, our experiment was conducted in the mixed cooperative-competitive scenario, and both parties had the ability to reason about their opponents. The biggest difference between this and the SMAC and football environments is that their opponents use built-in AI, i.e., purely collaborative scenarios, while our method, as line 408 states, does not use built-in AI. These environments are not aligned with our mission objectives.\\n\\nNevertheless, we believe our H2IL-MBOM is equally effective in other multi-agent tasks and environments.\\n\\n2. The meaning of \\u201cusing these these predictions along with observations to inform decision-makings.\\u201d is to make decisions based on the observation and the inferred intentions and strategies.\\n\\n\\n3. We validate the effectiveness of opponent modeling methods such as Rommeo, PR2, TDOM-AC, and AORPO in Figure 2. And the effectiveness of our method for opponent intention and strategy inference was also verified and analyzed through the t-SNE distributions in Figure 3 (c)-(d).\\n\\n4. This environment [1] is open source and has become a widely used benchmark for drone gaming, with its observation space, action space, and rewards widely recognized.\\n\\n[1] Qihan Liu, Xiaoteng Ma, et al., \\u201cLight Aircraft Game: A lightweight, scalable, gym-wrapped aircraft competitive environment with baseline reinforcement learning algorithms.\\u201d https://github.com/liuqh16/CloseAirCombat, 2022.\"}", "{\"summary\": \"This paper presents H2IL-MBOM, a hierarchical model for opponent modeling in multi-agent reinforcement learning, particularly in air combat scenarios. 
H2IL-MBOM combines high-level intention inference with low-level strategy prediction to address non-stationary dynamics in mixed cooperative-competitive settings. Integrated into the PPO framework, this model achieves enhanced accuracy and interpretability, showing improved performance over baseline methods in simulations.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe approach of modeling opponents through world models in air combat scenarios is innovative.\\n\\n2.\\tH2IL-MBOM models opponents based on observational data, providing a useful approach for scenarios such as air combat, where direct access to the opponent's precise actions and states is unavailable.\\n\\n3.\\tComprehensive experiments in Gym-Jsbsim demonstrate the method\\u2019s significant performance advantages over model-free MARL and other opponent modeling methods, including ablation studies that confirm module effectiveness. Sufficient details of the experimental implementation are also provided.\\n\\n4.\\tIn the experimental section, Figure 3 and Appendix A.13 effectively demonstrate and validate that the proposed method can capture changes in opponent intentions in the air combat environment.\", \"weaknesses\": \"1.\\tThe paper\\u2019s expression and presentation lack clarity; the authors provide numerous equations for various modules, making it difficult to smoothly understand the intent and overall functionality of the H2IL-MBOM framework. The extensive use of abbreviations also confuses readers. Figure 1, intended as an overview of H2IL-MBOM, includes excessive module details, which makes it challenging for readers to grasp the authors' main ideas. 
Consider breaking it down or omitting unnecessary details.\\n\\n2.\\tAlthough applying the method of modeling opponents through world models in the air combat environment is innovative, I still hope the authors can conduct comparative experiments in more multi-agent adversarial environments, such as Google Football, and compare against additional baselines to demonstrate the advantages of the proposed method, especially since there is relatively little existing work in MARL for air combat environments.\\n\\n3.\\tThere are still some typographical errors, such as in the first paragraph of Section 3, where it says, \\\"and using these these predictions along with observations to inform decision-makings.\\\"\\n\\n4.\\tThe authors do not provide a detailed analysis or validation of the effectiveness of the opponent model in the methods and experimental sections.\", \"questions\": \"1.\\tIs H2IL-MBOM equally effective in other multi-agent tasks and environments?\\n\\n2.\\tIs there a rationale for the design of the action space, state space, and reward function used in the reinforcement learning (RL) framework?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9TL99KnTv5
Align Your Intents: Offline Imitation Learning via Optimal Transport
[ "Maksim Bobrin", "Nazar Buzun", "Dmitrii Krylov", "Dmitry V. Dylov" ]
Offline reinforcement learning (RL) addresses the problem of sequential decision-making by learning an optimal policy through pre-collected data, without interacting with the environment. As yet, it has remained somewhat impractical, because one rarely knows the reward explicitly and it is hard to distill it retrospectively. Here, we show that an imitating agent can still learn the desired behavior merely from observing the expert, despite the absence of explicit rewards or action labels. In our method, AILOT (Aligned Imitation Learning via Optimal Transport), we employ a special representation of states in the form of intents that incorporate pairwise spatial distances within the data. Given such representations, we define an intrinsic reward function via the optimal transport distance between the expert's and the agent's trajectories. We report that AILOT outperforms state-of-the-art offline imitation learning algorithms on D4RL benchmarks and improves the performance of other offline RL algorithms by dense reward relabelling in sparse-reward tasks.
[ "Optimal Transport", "Reinforcement Learning", "Offline RL", "Intention learning" ]
Reject
https://openreview.net/pdf?id=9TL99KnTv5
https://openreview.net/forum?id=9TL99KnTv5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xP6n93R1kQ", "vvYOrPNv7f", "rKsp0IrsND", "qVqEi4owmI", "lfJ9rRurBJ", "kmnwRZdYRf", "h8Etovn8Wk", "eh5fHqRx2T", "dS3VhOBSgs", "UBONfmOOsQ", "SUyBXxPtku", "IB0FzbeRY1", "GctKB2BTiU", "F003D0eYJ8", "ATs6NX06O6", "96rOwN9y5G", "7YtXLEH8Lp" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1734743408208, 1730439622194, 1732792808147, 1730684577877, 1732792474871, 1732631411235, 1730646451253, 1731583641523, 1732792249588, 1730641671984, 1732534209067, 1732549755513, 1732618618952, 1733153826843, 1733021684343, 1737523872376, 1732716339764 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7886/Area_Chair_Cm6B" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_dczu" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_2N2x" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_ANYp" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_LCQn" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_ANYp" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_LCQn" ], [ "ICLR.cc/2025/Conference/Submission7886/Reviewer_dczu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7886/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a method for the offline RL setting where rewards are difficult to specify but 
one (or multiple) expert trajectories demonstrating the behavior may be found. The proposed method computes rewards by comparing the optimal transport distance (computed via ICVF) between a new trajectory and the expert trajectory. The proposed method outperforms state-of-the-art offline imitation learning methods and improves other offline reinforcement learning methods on the D4RL benchmarks, especially in sparse-reward environments.\\n\\nReviewers appreciated the strong empirical performance, the strong motivation (esp. Fig 1) and the discussion of prior work. They also appreciated the thorough experimental details.\\n\\nOne repeated concern was the relationship with Luo et al (OTR); reviewers urged the authors to revise the paper to include a more detailed discussion of the relationship with this prior work to avoid misleading readers into misattributing some of the ideas discussed in this paper. Given the similarity with Luo et al, the reviewers were looking for more thorough experiments (e.g., when and why does using the learned metric help). Reviewers also requested an additional baseline (O-DICE) and more details about the training setup.\\n\\nOverall, I feel like this has the makings of a strong paper, but the paper needs additional revisions/analysis to address the concerns about similarity with Luo et al.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, the authors ran several new experiments (e.g., adding the requested O-DICE baseline), clarified some of the differences from prior work, the computational cost of the proposed method, and the construction of the cost matrix.\\n\\nThese revisions and experiments address some of the reviewer concerns, but don't seem to get at the most prevalent concern (relationship with Luo et al). 
Indeed, when I read the introduction of the revised paper, the similarity/difference from Luo et al is not immediately apparent.\"}", "{\"summary\": \"This paper introduces Aligned Imitation Learning via Optimal Transport (AILOT), a method for offline reinforcement learning that uses optimal transport to align an agent's behavior with an expert's in an \\\"intent\\\" space. The intent space is learned with some previously suggested method. AILOT outperforms existing methods on benchmark tasks, especially in sparse-reward environments.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Strong empirical performance\"], \"weaknesses\": [\"This paper is basically a souped-up version of (Luo et al., 2023), where the optimal transport between states is replaced with the optimal transport between intentions. Since the intention space is also learned with the previously suggested method, the contributions of the paper are 1) the idea of using intentions instead of states themselves, and 2) the design of the cost matrix in Eq. (10). However, the design of the cost matrix in Eq. (10) is not well analyzed in the paper, neither theoretically nor empirically. In my opinion, the idea of using intentions instead of states is straightforward, and a paper for ICLR should contain more messages than that.\", \"Since the usage of intentions in OTR is the key contribution of the paper, I would expect the paper to analyze in what aspects state-space OTR methods need to be improved. However, the paper relies on a single intention learning method and does not discuss why the used intention learning method improves performance.\"], \"questions\": [\"The paper argues that learning expert states alone (without actions nor rewards) is one strong contribution of the paper. 
Are previous works incapable of doing that (e.g., (Luo et al., 2023))?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer, please kindly let us know if we have clarified our contributions. The discussion period is about to end and we have not had a chance to elaborate anything further during the entire discussion period. Please let us know if any other items need our clarification. Thank you.\"}", "{\"summary\": \"This paper considers learning from offline data in settings where reward may be difficult to specify, but one (or multiple) expert trajectories demonstrating the behavior may be found. The general idea is to assign rewards within a trajectory based on an optimal transport distance between trajectories in the offline data, and this optimal trajectory. The primary innovation is to use a dynamical distance (ICVF) to parameterize a more semantically meaningful cost function for the optimal transport problem. The evaluation demonstrates improvement over prior approaches in this problem setting on all state-based D4RL tasks (including locomotion, antmaze, and adroit).\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The problem setting is topical, and the method is simple and well-motivated -- Figure 1 in particular illustrates clearly the benefit of parameterizing distances in a latent space instead of the raw state space (or equivalent).\\n\\nThe experiments clearly demonstrate performance improvement over prior approaches (both based on optimal transport, or other imitation learning) in the D4RL suite. Admittedly, these tasks are relatively toy and now saturated, but even so, the results seem convincing. 
\\n\\nThe related work throughout the paper (in intro, related work, and method section) contextualizes the contributions of this paper well.\\n\\nThe appendix (and experimental section) thoroughly describes the comparisons and the benchmark setting.\", \"weaknesses\": \"I found the writing in the paper to be difficult to comprehend at many parts, making it difficult to understand the method exactly and what the exact contributions are relative to prior work in this space (e.g. Luo et al).\\n\\nFor instance, the introduction barely touches on the method being proposed, instead discussing in great detail the motivation for IL methods, for optimal transport, etc. This makes it difficult to understand and contextualize the specific contributions of the method being proposed in the paper.\\n\\nThe paper is most closely related to Luo et al, 2023 (OTR), but within the method section, does not distinguish between what ideas come from Luo et al, and which are newly introduced in this paper. For readers who may not be familiar with this prior work, this can lead to misattribution of ideas. It would be useful (whether in the related work, background, or method) to more clearly lay out what is done in Luo et al, and what new ideas are being considered. \\n\\nThe novelty of the idea (to my understanding) over Luo et al is relatively low -- this, in itself, is not a bad thing. However, given the simplicity of the idea, it would have been nice to see more thorough ablations and analyses to understand how e.g different dynamical distances perform, what types of data this is most helpful with, the importance of both components of the cost function. Another axis that could improve the thoroughness of the paper is to evaluate on more challenging domains beyond where standard cost metrics succeed (for example, in image-based domains). 
One other possible avenue of improvement here may be to thoroughly investigate what the actual computed rewards look like between this method and prior work. \\n\\nAs it stands right now, while the method demonstrates mild improvements on D4RL, the paper could be much improved by expanding the analysis along the axes of why the learned representation is more useful, or by testing on a more difficult suite of tasks.\", \"questions\": \"1. Why is there minimal benefit to scaling the number of expert trajectories? How well would this method handle using expert trajectories that take different behaviors to solve the same problem (for example, the Push-T task from Diffusion Policy)?\\n\\n2. Could you explain better what the two different components of the cost function are doing? The text didn't clearly motivate why these were chosen in this way.\\n\\n3. How sensitive is the method to `k`?\\n\\n4. Would be nice to expand the discussion about how this method handles sub-optimal / orthogonal data compared to traditional offline algorithms -- Can this method \\\"stitch\\\" trajectories together?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Given the discussion period is about to end, please kindly let us know if you are satisfied with the revised paper and our answers. If our answers and the extra experiments have met your expectations, we would greatly appreciate it if you could revise the score. Thank you.\"}", "{\"comment\": \"Thank you very much for your time and detailed consideration of our paper!\\n\\n***Overlap with existing research:***\\n\\nOur original hypothesis was to improve the distance function between states so that it estimates the minimum number of steps between states instead of the conventional Euclidean distance. In addition to evaluating the algorithm as a whole on D4RL, we also investigated the properties of the intent distance metric itself. 
We proved theoretically (Proposition 1) and experimentally (Figure 3) that the chosen metric corresponds to the stated hypothesis. Such results are not presented in previous works of OTR (Luo et al., 2023) and ICVF (D. Ghosh et al., 2023). This is a new property and a new application of intents, which we investigated in this paper. \\n \\n***AILOT\u2019s performance may be compromised if the expert\u2019s behavior is ambiguous or multi-modal, making alignment challenging:***\\n\\nAccording to Algorithm 1 (step 15), we take the minimum over expert trajectories. That is, in particular, it searches for the closest expert trajectory to the agent's trajectory. If there are expert trajectories that take different behaviors to solve the multi-modal problem, then AILOT will imitate all these different behaviors.\\n\\n***Clarification on Dataset Size and Training Configuration:*** \\n\\nThe paper uses D4RL datasets for evaluation, which include: Gym-MuJoCo locomotion tasks, Adroit manipulation tasks and AntMaze navigation tasks. Each dataset includes approximately 10^6 transitions (we took the datasets provided in D4RL). We run the ICVF pre-training procedure (D. Ghosh et al., 2023) for around 250k steps. To report final numbers, we use the original IQL hyperparameters (in particular, 10^6 train steps with batch size 256) to be consistent with the original IQL paper.\\n\\n***Comparison with Modern State-of-the-Art Methods (O-DICE):*** \\n\\nOur method is not tied to IQL and can be used with any other offline RL method, for example with O-DICE. We tested AILOT + O-DICE (instead of IQL), and the results show that the proposed approach also outperforms O-DICE on long-horizon planning tasks. 
O-DICE itself shows better scoring than IQL, and the corresponding replacement of the RL algorithm gives improvements in almost all the tasks below.\\n\\n\\n| Dataset | O-DICE | AILOT + IQL | AILOT + O-DICE |\\n| --------------------- | ---------- | ---------- | -------------- |\\n| halfcheetah-medium-v2 | 47.4 &pm; 0.2 | 47.7 &pm; 0.2 | **49.5 &pm; 0.4** |\\n| halfcheetah-medium-replay-v2 | 44.0 &pm; 0.3 | 42.4 &pm; 0.2 | **46.2 &pm; 0.6** |\\n| halfcheetah-medium-expert-v2 | 93.2 &pm; 0.6 | 92.4 &pm; 1.5 | **93.6 &pm; 1.8** |\\n| hopper-medium-v2 | **86.1 &pm; 4.0** | 82.2 &pm; 5.6 | 85.5 &pm; 3.7 |\\n| hopper-medium-replay-v2 | **99.9 &pm; 2.7** | 98.7 &pm; 0.4 | 99.1 &pm; 0.2 |\\n| hopper-medium-expert-v2 | **110.8 &pm; 0.6** | 103.4 &pm; 5.3 | 106.9 &pm; 2.1 |\\n| antmaze-large-play-v2 | 55.9 &pm; 3.9 | 57.6 &pm; 6.6 | **58.2 &pm; 4.3** |\\n| antmaze-large-diverse-v2 | 54.0 &pm; 4.8 | 66.6 &pm; 3.1 |**68.3 &pm; 3.1** |\\n\\n\\nThe comparison between O-DICE and AILOT+O-DICE is not quite correct as they solve different problems (offline RL and imitation learning, respectively). But it should be noted that results on hard antmaze-large-play and antmaze-large-diverse tasks outperform those of O-DICE. This is of no surprise since temporal grounding is lacking in the O-DICE method, where only distribution matching with behavior dataset is performed. \\n\\n***Comprehensive Sensitivity Analysis for Cost Function and Hyperparameters:*** \\n\\nHyper-parameters for the main algorithm in squashing function for rewards were chosen to be consistent with those in OTR for ease of comparison. In our experiments we found that values of a < 1, a > 5 drop performance significantly and this is due to an inappropriate range of resulting rewards. We conducted experiments with varying parameters on hard antmaze tasks. 
Overall, across different environments the drop is small.\\n\\n\\n| Dataset | AILOT (a=2, tau=1) | AILOT (a=3, tau=1) | AILOT (a=3, tau=2) |\\n| --------------------- | -------------------- | ----------------- | ----------------- |\\n| antmaze-large-diverse | 63.4 &pm; 2.1 | 63.1 &pm; 2.0 | 64.6 &pm; 3.4 |\\n| antmaze-large-play | 56.3 &pm; 4.6 | 56.1 &pm; 3.7 | 57.3 &pm; 4.1 |\"}", "{\"summary\": \"This paper focuses on practical offline reinforcement learning tasks with only expert observations, avoiding the requirements for expert actions and reward labels. Specifically, this paper proposes AILOT (Aligned Imitation Learning via Optimal Transport), which defines the intrinsic rewards using optimal transport distance between the intention representations of the expert\u2019s and agent\u2019s trajectories. Through dense reward relabeling, AILOT outperforms state-of-the-art offline imitation learning methods and improves other offline reinforcement learning methods on the D4RL benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe proposed AILOT method eliminates the requirements of expert rewards and actions. Instead of performing Optimal Transport matching, AILOT maps the initial state space to the space of intentions and aligns the intents of the agent with those of the expert via Optimal Transport. This approach involves several steps: 1) training general-purpose value functions from the expert dataset to learn the metric-aware representations; 2) solving the Optimal Transport alignment to obtain the coupling matrix; 3) reward labeling for the expert observations using the coupling matrix; and 4) training RL using the expert dataset with labeled rewards to obtain the final policy.\\n2.\\tThe intent differences between the k-step state representations have a linear dependence on the step count. This near-monotone function reflects the global geometric dependencies between states in the expert dataset. 
This good property is important for defining the cost function of Optimal Transport alignment learning. \\n3.\\tThe dense reward from AILOT can also boost the performance of other offline reinforcement learning methods. The performances of offline imitation learning and offline reinforcement learning have been demonstrated in the extensive experiments on D4RL benchmarks.\", \"weaknesses\": \"1.\\tAILOT is built on top of OTR, following the idea of performing reward relabeling through optimal transport. The most interesting part of AILOT is to perform Optimal Transport alignment in the space of intention instead of the original state space. However, the intention learning method is an existing work called ICVF, which limits the novelty.\\n2.\\tOptimal Transport introduces additional runtime overhead compared to the offline RL algorithms, with the benefits of reward labeling.\", \"questions\": \"1.\\tIn the experiments, AILOT is applied with Implicit Q-Learning (IQL) because it is a simple and robust offline RL algorithm. Is there any special reason or motivation for using IQL here, and will AILOT also perform well with any other offline RL algorithm?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your thorough and insightful review. Your questions help to clarify important aspects of our work. I'll address each point carefully below.\\n\\n***Differences between this Approach and OTR***\", \"you_are_correct_that_the_main_differences_are\": \"Our focus on intent-based representations rather than raw state-action pairs.\\nThe modified structure of matrix C (eq. 10) with k=2 instead of k=1.\", \"supporting_evidence\": \"See ablation studies in Tables 1,2,3,6,7\\n\\nWe also investigated the properties of the intent distance metric itself. 
We proved theoretically (Proposition 1) and experimentally (Figure 3) that the chosen metric corresponds to the stated hypothesis. Such results are not presented in previous works of OTR (Luo et al., 2023) and ICVF (D. Ghosh et al., 2023). This is a new property and a new application of intents, which we proposed in this paper.\\n\\n***Minimal Benefit from Scaling Expert Trajectories***\\nAccording to Algorithm 1 (step 15), we take the minimum over expert trajectories. That is, in particular, it searches for the closest expert trajectory to the agent's trajectory. Additional similar expert trajectories may provide redundant information without adding new insights. If there are expert trajectories that take different behaviors to solve the same problem and the policy is stochastic (like in IQL), then AILOT will imitate all these different behaviors.\\n\\n***Cost Function Components***\\n\", \"first_term\": \"Aligns current states in intent space.\", \"second_term\": \"Ensures temporal consistency via future state alignment.\\nTogether they enforce both spatial and temporal alignment.\", \"reference\": \"Lines 262-269 in paper for detailed motivation\\n\\n***Sensitivity to Parameter k***\\nNeither OTR nor AILOT is sensitive to k. But setting k=2 gives a small improvement. Smaller k=1 may not capture enough temporal context, while larger values k>2 make alignment harder.\", \"evidence\": \"See Table 7 for experimental results.\\n\\n***Trajectory Stitching Capability***\\nYes. According to Algorithm 1 (step 15), it can stitch rewards from different expert trajectories.\"}", "{\"comment\": \"Given the discussion period is about to end, we wanted to inquire whether we have addressed the concerns. Please kindly let us know if we can do anything else that would merit an increase in our score. 
Thank you.\"}", "{\"summary\": \"This paper addresses how to effectively imitate expert behavior in **offline reinforcement learning** with **sparse rewards** and without action labels or ground truth rewards. Previous methods used optimal transport to measure similarity between agent and expert trajectories as a reward signal, but relied on raw state distances. This paper introduces **AILOT**, which uses \\\"intent alignment\\\" and \\\"optimal transport\\\" in an intent space to calculate intrinsic rewards, enabling the agent to learn expert behavior more effectively and improve performance in sparse reward tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\t**Novel approach within the scope of existing methods:** While optimal transport has been used in similar imitation learning research, AILOT applies it uniquely by focusing on \\u201cintent alignment\\u201d in a metric-aware latent space, which differentiates it from other OT-based approaches.\\n2.\\t**Demonstrated performance improvement:** AILOT outperforms several baseline models, including those using OT, in various benchmark tasks, indicating it successfully optimizes OT alignment in a manner that strengthens offline RL without explicit reward signals.\\n3.\\t**Robust integration with other RL algorithms:** The method is designed to enhance the performance of other offline RL algorithms, making it versatile for broader applications.\", \"weaknesses\": \"1.\\t**Overlap with existing research:** The approach shares similarities with prior work that applies optimal transport to offline imitation learning, such as Optimal Transport for Offline Imitation Learning (arXiv:2303.13971) and Combining Expert Demonstrations in Imitation Learning via Optimal Transport (arXiv:2307.10810). These papers also use OT to create reward signals from expert trajectories, raising concerns about the novelty of AILOT\\u2019s contribution. 
Although AILOT introduces intent alignment as a distinct feature, further justification of how this approach advances beyond these prior works would strengthen the contribution.\\n2.\\t**Limited comparison with recent state-of-the-art methods:** The paper does not include comparisons with more recent imitation learning algorithms, such as O-DICE (ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update, arXiv:2402.00348), which have demonstrated strong performance in offline imitation learning tasks. Including such comparisons would provide a clearer understanding of AILOT\\u2019s relative performance and contributions.\\n3.\\t**Dependency on well-defined intents:** AILOT\\u2019s performance may be compromised if the expert\\u2019s behavior is ambiguous or multi-modal, making alignment challenging. This is especially relevant when handling multi-intent expert demonstrations, an area where existing OT-based methods may also encounter limitations.\\n4.\\t**Lack of clarity in training configuration:** While the paper provides an estimated runtime on an NVIDIA RTX 3090 GPU (10-25 minutes), it lacks specific details on training configurations, such as the number of samples or epochs used. Including this information would improve reproducibility and allow readers to better assess the computational efficiency of AILOT.\", \"questions\": \"1.\\t**Clarification on Dataset Size and Training Configuration:** Could the authors provide specific details on the number of samples and epochs used during training? This information would help clarify the computational efficiency of the method, beyond the hardware and runtime specifics provided.\\n2.\\t**Comparison with Modern State-of-the-Art Methods:** Have the authors considered including comparisons with more recent state-of-the-art methods in imitation learning, such as O-DICE or other recent 2024 approaches? 
This would offer a more comprehensive view of AILOT\\u2019s performance relative to current advancements in the field.\\n3. **Comprehensive Sensitivity Analysis for Cost Function and Hyperparameters:** While the paper includes a limited ablation study with only two configurations (\\u03b1=5, \\u03c4=0.5 and \\u03b1=1, \\u03c4=1), a broader exploration of these hyperparameters would provide a clearer picture of AILOT\\u2019s robustness. Could the authors expand the sensitivity analysis with more variations in these parameters or offer additional insights into how these choices affect the model\\u2019s performance across different tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your time and detailed consideration of our work and for highlighting the key strengths of the paper!\\n\\n***Optimal Transport introduces additional runtime overhead compared to the offline RL:***\\n\\nThe Runtime Section 5.2 demonstrates that the optimal transport incurs minimal computational overhead - no more than ~10 minutes beyond the standard offline RL processing time. Importantly, because these calculations take place in a fixed latent space of intents, the added computation time remains stable across different tasks, *independent of their state dimensionality*.\\n\\n***The choice of IQL as the base offline RL algorithm:***\\n\\nFirst of all as you also noted IQL is known for its simplicity and robust performance across different tasks. But the specific reason for this choice is that OTR (the main baseline) also includes IQL and it makes the comparison with OTR more meaningful because we can take the results from their article. Also by construction AILOT's reward relabeling approach is RL-algorithm-agnostic. 
So the performance improvements over baselines suggest the benefits come from better reward signals rather than the choice of base RL algorithm. \\nAILOT can be combined with other RL methods like Diffusion-QL (see Table 8 in Appendix) or the more recent O-DICE (L. Mao et al. ICLR'2024).\\n\\nTo substantiate this, we have combined AILOT with the recent state-of-the-art RL method O-DICE (L. Mao et al. ICLR'2024) and performed the corresponding experiments. The results are summarized in the table below. \\n\\n| Dataset | O-DICE | AILOT + IQL | AILOT + O-DICE |\\n| --------------------- | ---------- | ---------- | -------------- |\\n| halfcheetah-medium-v2 | 47.4 &pm; 0.2 | 47.7 &pm; 0.2 | **49.5 &pm; 0.4** |\\n| halfcheetah-medium-replay-v2 | 44.0 &pm; 0.3 | 42.4 &pm; 0.2 | **46.2 &pm; 0.6** |\\n| halfcheetah-medium-expert-v2 | 93.2 &pm; 0.6 | 92.4 &pm; 1.5 | **93.6 &pm; 1.8** |\\n| hopper-medium-v2 | **86.1 &pm; 4.0** | 82.2 &pm; 5.6 | 85.5 &pm; 3.7 |\\n| hopper-medium-replay-v2 | **99.9 &pm; 2.7** | 98.7 &pm; 0.4 | 99.1 &pm; 0.2 |\\n| hopper-medium-expert-v2 | **110.8 &pm; 0.6** | 103.4 &pm; 5.3 | 106.9 &pm; 2.1 |\\n| antmaze-large-play-v2 | 55.9 &pm; 3.9 | 57.6 &pm; 6.6 | **58.2 &pm; 4.3** |\\n| antmaze-large-diverse-v2 | 54.0 &pm; 4.8 | 66.6 &pm; 3.1 | **68.3 &pm; 3.1** |\"}", "{\"comment\": \"Thank you very much for your time and thorough review. I'll address each point carefully below.\\n\\n***This paper is basically a souped-up version of (Luo et al., 2023), where the optimal transport between states is replaced with the optimal transport between intentions:***\\n\\nWe respectfully disagree with this assessment. Our original hypothesis was to improve the distance function between states so that it estimates the minimum number of steps between states instead of the conventional Euclidean distance. In addition to evaluating the algorithm as a whole on D4RL, we also investigated the properties of the intent distance metric itself. 
We proved theoretically (Proposition 1) and experimentally (Figure 3) that the chosen metric corresponds to the stated hypothesis. Such results are not presented in previous works of OTR (Luo et al., 2023) and ICVF (D. Ghosh et al., 2023). This is a new property and a new application of intents, reported for the first time in our paper. \\n\\n***The design of the cost matrix Eq. (10) is not well analyzed in the paper:***\\n\\nThe design change in matrix C (eq. 10) merely consists of setting k=2 instead of k=1. We have studied the effect of this modification empirically (please refer to Table 7). As demonstrated, setting k=2 gives a small improvement. Smaller k=1 may not capture enough temporal context, while larger values k>2 make the alignment harder. \\n\\n***The paper argues that learning expert states alone (without actions nor rewards) is one strong contribution of the paper:***\\n\\nRespectfully, this is not quite true; in lines 70-76 we list three contributions of the paper, of which the main one is a new way of specifying the intrinsic reward function. Learning without the rewards and the expert actions is a common practice in imitation learning, and we have shown how this can be done more efficiently. As a result, we report better metric scores (Table 1) than those algorithms that use traditional real rewards extracted from the environment.\"}", "{\"comment\": \"Thank you for your responses. I will keep my previous score.\"}", "{\"comment\": \"Thank you for your responses. I will keep my previous score.\"}", "{\"comment\": \"After reading the authors' response, I am still not convinced whether the contribution of the paper is sufficient. 
I keep my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewers,\\n\\nThank you for your valuable feedback that has greatly improved our manuscript.\\n\\nWe have carefully considered and addressed all of your comments, making the necessary revisions and running **14 additional experiments** during the rebuttal. In particular, we conducted experiments in combination with another recent RL method O-DICE (L. Mao et al. ICLR 2024), confirming that AILOT is not tied to a specific RL algorithm and works well with any of them. We hope our responses meet your expectations. \\n\\nIt is also worth emphasizing that we not only proposed a new distance metric between states, which performed well in experiments, but also studied its properties, giving an answer to why it is better than the conventional Euclidean metric. This drastically distinguishes our method from the OTR (Luo et al, 2023) baseline.\\n\\nTo summarize, we showcased that AILOT is the state-of-the-art Imitation Learning algorithm. Specifically, **we outperformed 8 powerful SOTA methods in 32 benchmarked dataset tasks**. Thanks to OT, our agent managed to do it merely by observing and imitating the expert (no labels, no ground truth rewards). Given the long history of RL and the elegant proposal to align intentions, we long for an opportunity to present these results and ideas to the ICLR community.\\n\\nIf there are any remaining concerns or suggestions, we are available for prompt responses to any queries.\\n\\nThank you once again for your time and all the insightful feedback!\\n\\nBest regards,\\n\\nAuthors of Submission 7886\"}" ] }
9TClCDZXeh
Differentiable and Learnable Wireless Simulation with Geometric Transformers
[ "Thomas Hehn", "Markus Peschl", "Tribhuvanesh Orekondy", "Arash Behboodi", "Johann Brehmer" ]
Modelling the propagation of electromagnetic wireless signals is critical for designing modern communication systems. Wireless ray tracing simulators model signal propagation based on the 3D geometry and other scene parameters, but their accuracy is fundamentally limited by underlying modelling assumptions and correctness of parameters. In this work, we introduce Wi-GATr, a fully-learnable neural simulation surrogate designed to predict the channel observations based on scene primitives (e. g., surface mesh, antenna position and orientation). Recognizing the inherently geometric nature of these primitives, Wi-GATr leverages an equivariant Geometric Algebra Transformer that operates on a tokenizer specifically tailored for wireless simulation. We evaluate our approach on a range of tasks (i. e., signal strength and delay spread prediction, receiver localization, and geometry reconstruction) and find that Wi-GATr is accurate, fast, sample-efficient, and robust to symmetry-induced transformations. Remarkably, we find our results also translate well to the real world: Wi-GATr demonstrates more than 35% lower error than hybrid techniques, and 70% lower error than a calibrated wireless tracer.
[ "inverse problems", "learning to simulate", "wireless channel modeling", "geometric deep learning", "equivariance", "inverse problems", "electromagnetic signals", "diffusion models" ]
Accept (Poster)
https://openreview.net/pdf?id=9TClCDZXeh
https://openreview.net/forum?id=9TClCDZXeh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xx42U6IxuH", "x0Ff9ARFvn", "wnhTdrFxZl", "vitgNpmytG", "u1oc1vSgZ6", "twBC0ET8Ev", "tDtLt11Y5I", "smRFxHp22T", "rGC4bBEBFU", "piYZY87TGl", "lKlXdjb6cE", "daf6lGzhXE", "ZjSUdj9HQ8", "Ww5TndMQ8Q", "Vp2PCQE5b1", "PSb21XKkAA", "Jgzsl0h2ha", "IGf5EyIWna", "I5B4FY0bLM", "0rWZu1Iv9e" ], "note_type": [ "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1729388795759, 1731938253024, 1737523827008, 1731937469659, 1730614760361, 1732812786213, 1732630672286, 1734608686176, 1730639937291, 1733018690067, 1731937735120, 1740748960014, 1732631431811, 1731937936632, 1730389163555, 1732631339116, 1732511941118, 1732624131133, 1732921287605, 1731938090548 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_HmDB" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_z8wX" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_HmDB" ], [ "ICLR.cc/2025/Conference/Submission7255/Area_Chair_vsq6" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_ToCc" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_HmDB" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "~Thomas_Hehn1" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_gkSN" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_gkSN" ], [ 
"ICLR.cc/2025/Conference/Submission7255/Reviewer_ToCc" ], [ "ICLR.cc/2025/Conference/Submission7255/Reviewer_z8wX" ], [ "ICLR.cc/2025/Conference/Submission7255/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Wi-GATr, a novel learnable neural surrogate for wireless channel simulation that leverages geometric primitives such as 3D surfaces, antenna positions, and orientations. The primary focus is on addressing limitations in wireless signal propagation modeling by integrating geometric algebra transformers (GATr), which enhance efficiency and accuracy. Wi-GATr is shown to outperform existing models in tasks like signal strength prediction and receiver localization, achieving significant error reductions compared to existing methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper presents a new approach, Wi-GATr, which is a neural surrogate for wireless channel modeling using Geometric Algebra Transformers, a technique not widely applied in this field. This originality sets it apart from traditional methods by addressing key limitations in differentiability and scalability. The research is supported by thorough empirical evaluations across both simulated and real-world datasets, showing substantial improvements.\\nThe introduction of two new datasets, Wi3R and WiPTR, further enhances the credibility and reproducibility of the results. The methodology and results are clearly presented. In terms of significance, this work makes contributions to both wireless communication and machine learning.\", \"weaknesses\": \"1. Adam is commonly used in deep learning applications, particularly image-processing tasks. However, wireless signal modeling involves different characteristics and challenges than image data. 
The authors would benefit from a more detailed discussion on why Adam was chosen, especially considering the fundamental differences between wireless signal modeling and typical image tasks.\\n\\n2. Wi-GATr\\u2019s generalization capabilities come from the E(3)-equivariant design of the Geometric Algebra Transformer (GATr). It would be valuable for the authors to provide more justification or discussion regarding their contribution and novelty in implementing or improving such a design.\\n\\n3. While the authors introduced their own datasets (Wi3R and WiPTR), the authors should provide a clearer justification for their choice of benchmarks, and using more widely recognized simulators such as WinProp or Wireless InSite could strengthen their work.\", \"questions\": \"Listed above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all reviewers for their insightful comments and suggestions.\\nWe are glad to read that the reviewers appreciate the relevance and introduction of the problem (z8wX,ToCc), the originality of our approach (HmDB), the versatility and performance of our results (z8wX, ToCc, HmDB, gkSN), and the value of our datasets for research community (z8wx, gkSN).\\nBased on their feedback, we were able to improve our paper as discussed in the individual responses.\\nOur changes in the revision are highlighted in red.\\n\\n\\nWe are looking forward to an interesting and constructive discussion!\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to initial reviewer comments\", \"comment\": \"Thank you for your review and comments! In the following we discuss them in more detail and refer to relevant revised sections of our paper.\\n\\n\\n> The definitions of inverse problems lack clarity. 
For instance, the authors should provide a more detailed discussion and formulation for receiver localization. Additionally, the explanations of probabilistic inference with diffusion models on Page 5 are inconsistent with the discussion of diffusion models on Page 13...\\n\\nThank you for bringing this to our attention. In our revised paper, we have incorporated additional details on conditional sampling and receiver localization. Furthermore, we have extended the appendix with detailed descriptions of our masked training procedure, including pseudocode algorithms for training and sampling from our diffusion model. Overall, we use the DDPM formulation for the noise scheduler during training and we have adapted the notation in the appendix to clarify this.\\n\\n\\n> This paper lacks comparisons with public datasets for channel prediction and other state-of-the-art channel prediction models with NeRF and diffusion models.\\n\\nWe would like to highlight that the DICHASUS dataset is publicly available and contains real-world measurements.\\nCould you please provide more concrete pointers to which aspects of other public datasets you find missing?\\n\\n\\n> The novelty in the machine learning aspect of this paper is unclear. 
It appears that the work mainly leverages the equivariant Geometric Algebra Transformer for channel prediction.\\n\\nYou're right: the Geometric Algebra Transformer architecture is indeed a crucial component of our work.\\nHowever, the architecture alone is not enough to solve practical wireless problems.\\nWe add both a novel tokenization scheme that allows us to represent 3D wireless problems in geometric algebra representations and inference algorithms, including a new approach to wireless inverse problems based on diffusion models.\\nFor practitioners, our release of two new datasets could be equally important.\\nBeyond the technical contributions, we view our treatment of wireless channel modelling as a geometric deep learning problem as our novel key insight.\\n\\n\\n> Regarding generalization, the authors primarily validate their approach on two different datasets. It would be helpful to consider cross-dataset scenarios to assess performance in unseen conditions. Additionally, the authors should discuss the impact of parameters, such as varying simulated frequencies and the number of paths, on prediction performance.\\n\\nBy pretraining on simulated data and fine-tuning on real-world data, we have shown cross-dataset benefits of our model and our generated dataset.\\nGeneralization with respect to other frequencies represents an interesting, but separate, orthogonal problem to the one that we tackled, namely generalization across geometries.\\n\\n\\n> ... correct typographical errors, such as \u201cThe The Tx and Rx locations are sampled uniformly within the bounds of the floor layouts.\u201d\\n\\nThank you for pointing out this typo in our appendix. We have addressed it in our revision.\"}", "{\"summary\": \"In this paper, the authors introduce Wi-GATr, a fully learnable neural simulation surrogate designed to predict wireless channels based on indoor scene elements, including surface mesh, antenna position, and orientation. 
They employ an equivariant Geometric Algebra Transformer with a tokenizer for wireless simulation. The proposed method is validated using two distinct simulated datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"(1) Wireless channel prediction is essential in wireless systems, and developing a fully learnable neural simulation surrogate for predicting wireless channels is an emerging topic.\\n\\n(2) The techniques and experimental results in this paper are solid. The authors apply their proposed model not only to channel prediction but also to two inverse problems: receiver localization and scenario generation. Various simulation results further validate the effectiveness of the proposed method.\\n\\n(3) The authors have developed two new 3D wireless datasets to validate their model, which would be valuable resources for the wireless research community if published.\", \"weaknesses\": \"(1) The novelty of this paper in the machine learning component is unclear.\\n\\n(2) The definitions of inverse problems lack clarity.\\n\\n(3) This paper lacks comparisons with public datasets for channel prediction and other state-of-the-art channel prediction models with NeRF and diffusion models.\", \"questions\": \"(1) The novelty in the machine learning aspect of this paper is unclear. It appears that the work mainly leverages the equivariant Geometric Algebra Transformer for channel prediction. The authors should clarify which components in the machine learning section present new contributions.\\n\\n(2) The definitions of inverse problems lack clarity. For instance, the authors should provide a more detailed discussion and formulation for receiver localization. Additionally, the explanations of probabilistic inference with diffusion models on Page 5 are inconsistent with the discussion of diffusion models on Page 13, as the diffusion models used do not seem to follow the standard DDPM framework. 
The authors should include the training and sampling algorithms for the diffusion model utilized, as well as discuss the model's input.\\n\\n(3) It would be beneficial to compare the proposed method on public datasets and with other models related to channel prediction, such as NeRF and diffusion models.\\n\\n(4) Regarding generalization, the authors primarily validate their approach on two different datasets. It would be helpful to consider cross-dataset scenarios to assess performance in unseen conditions. Additionally, the authors should discuss the impact of parameters, such as varying simulated frequencies and the number of paths, on prediction performance.\\n\\n(5) The authors should proofread the paper to correct typographical errors, such as \\u201cThe The Tx and Rx locations are sampled uniformly within the bounds of the floor layouts.\\u201d\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Besides, the fundamental differences between wireless signals and images or texts remain unexplained.\\n\\nWe extended our existing discussion in Section 1 (text highlighted in red). In short, we highlight that unlike typically studied modalities (e.g., images, text), our task of predicting wireless signals is an inherently geometric problem that requires keeping track of many interactions of the signal with the environment.\\n\\n&nbsp;\\n\\n> However, before some experiment results are provided, I am not very convinced by the explanation for the Adam optimizer\\n\\nTo provide empirical evidence, we trained GATr and Transformer with different optimizers on the DICHASUS data for 5k Tx positions (Section 5.4). After 50k steps, we evaluate the test performance. 
We obtain the following results:\\n| | Adam | RMSProp | SGD | SGD + Momentum |\\n|:------------|-------:|----------:|------:|-----------------:|\\n| GATr | 0.68 | 0.67 | 0.75 | 0.71 |\\n| Transformer | 0.69 | 1.4 | 0.92 | 0.74 |\\n\\nYou can see here that Adam performs more robustly compared to the other optimizers for both models. Please note that we are not claiming that Adam is the best optimizer for this problem.\\n\\nAdam was also used in prior works relevant to the wireless domain (Geometric DL [1, 2, 3] and Wireless surrogates [4, 5, 6]).\\n \\n&nbsp;\\n\\n[1] Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, and Max Welling. Geometric and physical quantities improve E(3) equivariant message passing. In ICLR, 2022.\\n\\n[2] Johann Brehmer, Pim de Haan, S\\u00f6nke Behrends, and Taco Cohen. Geometric Algebra Transformer. In NeurIPS, 2023.\\n\\n[3] Julian Suk, Pim de Haan, Baris Imre, and Jelmer M. Wolterink. Geometric algebra transformers for large 3d meshes via cross-attention. In ICML 2024 Workshop on Geometry-grounded Representation Learning and Generative Modeling, 2024.\\n\\n[4] Jakob Hoydis, Fay\\u00e7al A\\u00eft Aoudia, Sebastian Cammerer, Florian Euchner, Merlin Nimier-David, Stephan ten Brink, and Alexander Keller. Learning radio environments by differentiable ray tracing. arXiv:2311.18558v1, 2023.\\n\\n[5] Tribhuvanesh Orekondy, Pratik Kumar, Shreya Kadambi, Hao Ye, Joseph Soriaga, and Arash Behboodi. WiNeRT: Towards neural ray tracing for wireless channel modelling and differentiable simulations. In ICLR, 2023.\\n\\n[6] Thomas M. Hehn, Tribhuvanesh Orekondy, Ori Shental, Arash Behboodi, Juan Bucheli, Akash Doshi, June Namgoong, Taesang Yoo, Ashwin Sampath, and Joseph B. Soriaga. Transformer-based neural surrogate for link-level path loss prediction from variable-sized maps. In IEEE Globecom, 2023.\"}", "{\"comment\": \"The author answered my question to some extent. 
However, before some experiment results are provided, I am not very convinced by the explanation for the Adam optimizer. Besides, the fundamental differences between wireless signals and images or texts remain unexplained. I will keep the current rating now.\"}", "{\"metareview\": \"In this paper, the authors present a novel geometric transformer model for wireless simulation. The reviewers are generally positive and recognize the valuable contribution of the proposed transformer model, particularly its application to wireless indoor modeling. This work has the potential to advance the deployment of indoor communication nodes, such as pico-cells in 5G networks.\\n\\nThe authors focus on indoor systems, but it would be valuable to include a discussion of outdoor systems in highly dense urban environments, where propagation simulators may encounter similar limitations. However, the proposed transformer model might also face challenges related to scalability in 3D spaces and over larger distances. Additionally, the paper would greatly benefit from providing links to the code and data, as replicating such a tool without these resources would be challenging.\", \"additional_comments_on_reviewer_discussion\": \"The authors and reviewers engaged constructively during the discussion phase, and the authors successfully convinced the reviewers of the merits of their work.\\n\\nWhile the paper is strong, it would be significantly improved by releasing the code and data, as this would enhance its reproducibility and impact. However, I am uncertain whether acceptance can be made conditional on providing these resources.\\n\\nIn my opinion, this paper is not as strong as the reviewers perceive it to be. I believe most communication engineers would likely prefer ray tracing algorithms, especially when the computational complexity of the proposed method does not offer a significant improvement (e.g., a 5x speed-up). 
Additionally, the proposed model may perform poorly in out-of-distribution (OOD) cases. However, since the reviewers were very positive about the work, I chose not to intervene in their evaluation process. While this is not a bad paper, I believe its applicability may be limited. Releasing the code and data would help enhance its practical utility and impact.\"}", "{\"summary\": \"The paper presents a learnable approach to tackle the problem of indoor wireless simulation. The proposed architecture is based on a Geometric Algebra Transformer, and a new tokenizer is introduced, allowing the model to leverage a 3D representation of the scene by taking 3D primitives as input. The model can also be integrated into an inverse problem framework based on diffusion, allowing it to retrieve the position of the transmitter, the receiver, or the geometry of the scene. Two new datasets for wireless simulation are also presented. Experiments are conducted in synthetic and real settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The presentation is clear and the paper is well-written. The problem at hand and corresponding challenges are well introduced to the reader.\", \"The quantitative and qualitative results show the superiority of the method in multiple settings, and with regards to multiple variables (number of training samples, number of training rooms / transmitters, etc.).\", \"The versatility of the model is underlined by its adaptation to the inverse problem setting.\"], \"weaknesses\": [\"Although the results on synthetic data are convincing regarding the contribution of the proposed architecture, the impact of the proposed architecture w.r.t. 
the transformer is not so clear on real data, although the authors explain this by the simplicity of the scene.\", \"The most competitive baseline (SEGNN) is not evaluated on the WiPTR dataset.\"], \"questions\": [\"Why does data augmentation lead to poorer results in some cases in table 2 for the transformer baseline ?\", \"Are the input coordinates of the transmitter/receiver 2D or 3D for the proposed model ?\", \"For Rx interpolation in in-distributions experiments (l. ~329) in table 1, have the floor layouts been seen by the model during training ? If so, this should be explained more clearly, and why this setting is relevant.\"], \"minor_remarks\": [\"l. 297: |ap|^2 -> ap^2 ?\", \"l. 315: while -> While\", \"l. 789: The The -> The\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the clarification. I am updating the score to 8.\"}", "{\"title\": \"Response to initial reviewer comments\", \"comment\": \"Thank you for your review and comments! 
In the following we discuss them in more detail and refer to relevant revised sections of our paper.\\n\\n\\n> The most competitive baseline (SEGNN) is not evaluated on the WiPTR dataset.\\n\\nWe have implicitly mentioned in line 308 that training SEGNN runs out of memory on WiPTR.\\nWe have made this more explicit in our revision.\\nThank you for pointing this out!\\nIt is one of the advantages of our Wi-GATr approach that thanks to the transformer architecture, it scales much better to larger scenes than SEGNN.\\n\\n\\n> Are the input coordinates of the transmitter/receiver 2D or 3D for the proposed model ?\\n\\nWe consider everything in 3D, including the input coordinates of the transmitter and receiver.\\nIn the synthetic datasets, the receiver and transmitter positions are randomly sampled in 3D, making it essential to take the height into account.\\nThis is one of the novelties of our approach.\\n\\n\\n> Although the results on synthetic data are convincing regarding the contribution of the proposed architecture, the impact of the proposed architecture w.r.t. 
the transformer is not so clear on real data, although the authors explain this by the simplicity of the scene.\\n\\nWe argue that the main strength of our architecture is the generalization capabilities to new geometric layouts.\\nCurrently, real-world datasets are lacking variety in geometric layouts, collecting data on multiple scenes is challenging and time-consuming.\\nTherefore, we show the generalization capabilities on simulated data.\\nOn the single real-world scene, where the geometry remains fixed throughout, the transformer performs competitively.\\nHowever, as we evaluate on unseen scenarios, such as rotated or translated scenes (Table 1), Wi-GATr remains competitive and robust, while a baseline transformer fails.\\n\\n\\n> Why does data augmentation lead to poorer results in some cases in table 2 for the transformer baseline ?\\n\\nThis is indeed surprising.\\nWe explain these results with the fact that the samples in the dataset have a lot of accidental shared structure, for instance that floor and ceiling are parallel to the x-y plane. \\nE(3) augmentations remove this structure, so the network has to learn more from data.\\nGiven enough capacity and training steps, this should not be a problem, but within the fixed settings of our study this turned out to be more difficult to pick up for the transformer.\\n\\n\\n> For Rx interpolation in in-distributions experiments (l. ~329) in table 1, have the floor layouts been seen by the model during training ? 
If so, this should be explained more clearly, and why this setting is relevant.\\n\\nYes, in the Rx interpolation setting, the floor layouts have been seen during training.\\nHowever, receiver locations we evaluate on were not seen during training.\\nThis evaluation tests the generalization to unseen Rx positions, testing the capabilities as a wireless simulator.\\nThe real-world relevance is highlighted in Section 5.4, where the simulations tuned on sparse data are used to predict missing measurements.\\n\\nWe have rephrased the relevant parts in our revised paper to improve clarity.\\nThank you!\\n\\n\\n> Minor remarks:\\n>\\n> l. 297: |ap|^2 -> ap^2 ?\\n\\nThank you for pointing out the two typos. We have fixed them in the revision.\\nNote that |a_p|^2 is more precise as a_p denotes the complex path gain.\\nWe have added a paragraph with background on channel modelling in Section 2 to clarify the notation.\"}", "{\"comment\": \"We thank the AC and the reviewers for the fruitful discussion and their decision.\\n\\nWe want to highlight that it is not our intention to generally replace ray tracing algorithms. We showed our method can solve problems that state-of-the-art ray tracing cannot solve. We invite interested researchers to try this out themselves using our open-source code. The links to the repositories can be found in the camera-ready version of the paper and the code will be published there by the time of the conference.\"}", "{\"comment\": \"We are happy that our response has clarified your open questions. Thank you for your feedback and for raising your score.\"}", "{\"title\": \"Response to initial reviewer comments\", \"comment\": \"Thank you for your review and comments! In the following we discuss them in more detail and refer to relevant revised sections of our paper.\\n\\n> Why did we choose Adam?\\n\\nAdam is indeed a robust and efficient optimizer for training deep learning models. 
As Wi-GATr is a transformer, thus a deep learning model as well, the benefits of Adam during training apply here as well regardless of the exact domain. Other optimizers might also apply and lead to good Wi-GATr performance.\\n\\n\\n> Wi-GATr\\u2019s generalization capabilities come from the E(3)-equivariant design of the Geometric Algebra Transformer (GATr). It would be valuable for the authors to provide more justification or discussion regarding their contribution and novelty in implementing or improving such a design.\\n\\nThank you for this suggestion. We agree that GATr is a crucial component to this work.\\nHowever, the architecture alone is not enough to solve practical wireless problems.\\nWe add both a novel tokenization scheme that allows us to represent 3D wireless problems in geometric algebra representations as well as inference algorithms, including a new approach to wireless inverse problems based on diffusion models.\\nFor practitioners, our release of two new datasets could be equally important.\\nBeyond the technical contributions, we view our treatment of wireless channel modelling as a geometric deep learning problem as our novel key insight.\\n\\n\\n> While the authors introduced their own datasets (Wi3R and WiPTR), the authors should provide a clearer justification for their choice of benchmarks, and using more widely recognized simulators such as WinProp or Wireless InSite could strengthen their work.\\n\\nWe did use Wireless InSite to obtain simulation measurements in the datasets we propose.\\nWe have made that reference more explicit in the manuscript.\\nFor clear justification and detailed simulation parameters, we would like to refer to Appendix D.\\nPlease let us know if you are missing specific details or explanations.\"}", "{\"summary\": \"The paper introduces Geometric Algebra Transformer (GATr) into wireless channel observation problem and builds a learnable neural simulation surrogate Wi-GATr to predict channel states based on 
scene primitives. The authors design a Wi-GATr Backbone to exploit the inherent geometric nature of the propagation of wireless signals. Further, they apply this model to probabilistic inference and receiver localization problems. Experimental results show that Wi-GATr outperforms other methods on the two datasets they constructed.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. They design a new Wireless Geometric Algebra Transformer (Wi-GATr) backbone, which embeds the information of the wireless scene into geometric algebra while the network learns to model the channel.\\n2. They develop a learnable forward-model for channel simulation and an inverse-model for receiver localization based on the differentiable properties.\\n3. They build two new datasets with diverse scene geometry.\", \"weaknesses\": \"1. The problem of this work is not well identified. The authors only give the formulation of geometric algebra, but do not give any introduction of the wireless channel model. Wireless channels are complex and consist of many parameters. Authors need to specify what information about the channel they want to simulate and predict.\\n2. The challenges that need to be addressed are not clearly stated. The authors introduce GATr into this work and build a backbone to make it fit for the wireless channel prediction problem. However, the difficulties and challenges of model transfer are not fully introduced. \\n3. The innovation is somewhat limited. In addition to the designed backbone, the rest of the work consists only of two application experiments using the properties of the existing model. \\n4. The explanation for some of the pictures is inadequate. For example, Figure 1 shows the geometric surrogates for modeling wireless signal propagation. However, there is not enough explanation of this figure in the paper. It's hard to get the main point of it.\", \"questions\": \"1. How were the two datasets generated? 
Were they extracted from other datasets, or were they simulated using other tools? Is this sufficient as one of the contributions of the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for raising your score! We have added more precise references in line 191 and line 480 to provide more details on what we are predicting in our experiments. Furthermore, we have rewritten and restructured the introduction of the challenges of 3D surrogate modeling (highlighted in red). Your feedback is immensely valuable. Thus, please let us know if you are still missing essential details in the description of the wireless channel.\", \"one_additional_clarification\": \"In the background paragraph that we added in our first revision, we aimed to provide a high-level introduction of wireless channel modelling. We introduced the wireless channel from a geometric optics perspective. We believe this perspective enables the machine learning community to quickly and intuitively understand the parameters involved (power, phase, delay, polarization, material-dependence). In the paragraph, we refer to Tse & Viswanath for a more detailed introduction to wireless channel modeling.\"}", "{\"comment\": \"The author answered my question to some extent. I would change the rating to 6, marginally above the acceptance threshold.\\nHowever, the following questions remain:\\n1. The added description of the wireless signal propagation part is not highly related to the latter wireless simulation model setup.\\n2. The challenges that need to be addressed should be reflected clearly in the paper, not only in the rebuttal. 
It's better to use a small paragraph to state this problem.\\nIt's suggested that the manuscript should be polished further.\"}", "{\"comment\": \"I thank the authors for clarifying the points mentioned; I updated my score to 8.\"}", "{\"comment\": \"The author partially addressed my questions. I'll maintain my current rating. The rating of 6 is marginally above the acceptance threshold.\"}", "{\"title\": \"Response to initial reviewer comments\", \"comment\": \"Thank you for your review and comments! In the following we discuss them in more detail and refer to relevant revised sections of our paper.\\n\\n\\n> The problem of this work is not well identified. The authors [...] do not give any introduction of the wireless channel model.\\n\\nThank you for pointing this out.\\nWe have added a background paragraph on channel models and signal propagation to our revised paper.\\nIn our experiments, we show examples of predicting non-coherent received power, band-limited received power, and delay spread.\\n\\n\\n> The challenges [of model transfer] that need to be addressed are not clearly stated.\", \"applying_transformers_to_modelling_wireless_signal_propagation_in_3d_poses_two_challenges\": \"1. What is an adequate representation of the input data?\\n2. How can a transformer generalize to novel coordinate systems?\\n\\nTo address the first challenge, we show a comparison of a naive mesh representation to our tokenization scheme in Figure 3.\\nTo address the second challenge, we identified the symmetries of the problem and an adequate architecture, i.e. GATr.\\nIn order to apply GATr, one has to properly embed the mesh representation into the correct Geometric Algebra types (see Section 3.2).\\nIn our experiments, we show the significant impact using an equivariant architecture has on the problem, outperforming several baselines.\\n\\n\\n> ... 
the work consists only of two application experiments using the properties of the existing model.\\n\\nIt is true that we do not propose an entirely new architecture or algorithm in this work.\\nHowever, we do show an entirely novel geometric deep learning approach to wireless simulation neural surrogates.\\nWe do think that our performant backbone with its new geometric tokenization scheme, a new approach to wireless inverse problems based on diffusion models and conditional sampling, the demonstration on several experiments, and the release of two new datasets to the community present enough useful innovation and insights at the intersection of wireless applications and geometric deep learning research.\\nBeyond the technical contributions, we aim to impact both fields by highlighting the geometric nature of wireless channel modelling which represents a novel angle to this machine learning problem.\\n\\n\\n> ... Figure 1 shows the geometric surrogates for modeling wireless signal propagation. However, there is not enough explanation of this figure in the paper...\\n\\nThank you for bringing this to our attention! We have revised the caption of Figure 1 to improve clarity.\\n\\n\\n> How were the two datasets generated? Were they extracted from other datasets or were they simulated themselves using other tools. Is this sufficient as one of the contributions of the paper?\\n\\nThe datasets were simulated using state-of-the-art wireless ray tracing software (Remcom Wireless InSite as noted by other reviewer).\\nThe diverse scenes and detailed wireless simulation makes the dataset compelling for data-driven channel modeling and the geometric deep learning community.\", \"edit\": \"Fixed typo.\"}" ] }
9SvRqu21m7
Multi-Student Diffusion Distillation for Better One-Step Generators
[ "Yanke Song", "Jonathan Lorraine", "Weili Nie", "Karsten Kreis", "James Lucas" ]
Diffusion models achieve high-quality sample generation at the cost of a lengthy multistep inference procedure. To overcome this, diffusion distillation techniques produce student generators capable of matching or surpassing the teacher in a single step. However, the student model’s inference speed is limited by the size of the teacher architecture, preventing real-time generation for computationally heavy applications. In this work, we introduce Multi-Student Distillation (MSD), a framework to distill a conditional teacher diffusion model into multiple single-step generators. Each student generator is responsible for a subset of possible conditioning data, thereby obtaining higher generation quality for the same capacity. MSD trains multiple distilled students allowing smaller sizes and, therefore, faster inference. Also, MSD offers a lightweight quality boost over single-student distillation with the same architecture. We demonstrate MSD is effective by training multiple same-sized or smaller students on single-step distillation using distribution matching and adversarial distillation techniques. With smaller students, MSD obtains competitive results with a faster inference time for single-step generation. Using same-sized students, MSD with 4 students sets new state-of-the-art results for one-step image generation: FID 1.20 on ImageNet-64×64 and 8.20 on zero-shot COCO2014.
[ "Diffusion distillation", "One-step generative models", "Mixture of experts" ]
Reject
https://openreview.net/pdf?id=9SvRqu21m7
https://openreview.net/forum?id=9SvRqu21m7
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wPsNgEgaib", "uw1pM6qFzu", "s9Vl669WXZ", "s4D6Ch2pGU", "qIVlvx4hL2", "oPfthbq27d", "mexCQFOxP1", "m3DXFEZwZw", "lYsIbyJeUE", "lKzl34apPr", "hHvzqEoYTJ", "gwwnUudKCE", "eMIXiLXWvl", "bdahUvO51J", "YbZTg40HJD", "Xy0Ot3FGw8", "W6hMLlbfKU", "MgCfalr6hr", "HnHMedNP5g", "Aix3EStrQj" ], "note_type": [ "official_review", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731385310841, 1737523887980, 1732134022418, 1732168988418, 1732133662204, 1734762836459, 1732134208367, 1732134295972, 1732253494479, 1732133949048, 1730624840688, 1732133797287, 1732512189454, 1732509159428, 1730528929345, 1732568289896, 1730913166527, 1732133405191, 1732568316374, 1732223016086 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_7iFF" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_igyA" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Area_Chair_nsZF" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_igyA" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_D8t3" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_D8t3" ], [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_WJfY" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Reviewer_igyA" ], [ 
"ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ], [ "ICLR.cc/2025/Conference/Submission8104/Authors" ] ], "structured_content_str": [ "{\"summary\": \"In this work the authors propose a way to distill a pre-trained diffusion model into multiple students, where each student is specialized for a sub-domain or specific partition of the data. To perform distillation the authors propose different objectives and also consider smaller architectures for students, guided by targets from the pre-trained diffusion model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Paper is easy to read and understand the setting, focusing on domain-specific students (partitions of the dataset).\\nDifferent objectives and better initialization to perform distillation make sense, and the resultant effectiveness is demonstrated empirically. \\n\\nDemonstrates that finetuning with adversarial training further improves the quality of the distilled model, which makes sense.\", \"weaknesses\": \"Currently this work lacks strong motivation or useful analysis.\\nThere are previous works like eDiff which specialize different diffusion models per timestep, as well as works exploring MoE for efficient inference; with efficiency as motivation, more effective pruning, efficient architectures, caching across timesteps, etc. have been proposed to achieve smaller models and/or lower latency. \\n\\nThis work explores splitting the student into multiple models w.r.t. the dataset; while that is practical, this work does not provide any novel insights nor a significant performance boost. In the case of text-to-image with SD1.5, the FID boost is only marginal (0.15), combining all 3 objectives and 4 sets of parameters instead of one, which asks for more memory, more complicated orchestration, etc. 
\\n\\nWhile FID is evaluated, it is unclear how well MSD recovers the marginal data distribution, i.e., diversity of generation and the resultant sampled/recovered distributions (posterior) w.r.t. the conditional, via something like LPIPS_Diversity and aggregated distribution Precision-Recall or other metrics. This helps understand if there is any feature collapse, mode collapse, etc.\", \"questions\": \"Why is at least CLIP score not reported on either COCO-2017 or 2014, which could be informative as FID has its deficiencies; one could also consider HPSv2 or other metrics for completeness.\\n\\nWhat is the total training compute required for the proposed method? How does it compare to previous methods which do not specialize to sub-sets of data?\\n\\nTo better justify and understand the motivation of this work, it might be useful to consider pruning or a smaller architecture of an already distilled one-step model as a baseline or initialization. How much of the training compute can be exploited with better initialization compared to distilling from scratch? Such analysis would better benefit the community, as the work currently lacks novel insights to adopt broadly for practical applications too. \\n\\nThe authors cite EM Distillation as justification to emphasize the difficulty of training a one-step model from scratch. While it is known from consistency distillation, rectified flow, and other works that training one-step models is hard, it is not clear why a distillation method is cited to justify training from scratch, as this is also not the focus of this work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer igyA, part 2\", \"comment\": \"The authors outline several desired properties for the partitioning function in Section 4.1, yet the implemented solution simply uses consecutive classes as partitions (validated in Section 5.4). 
Could you compare a random partitioning strategy with your current approach? This would be valuable to determine whether the specific partitioning method offers advantages over any balanced data division.\\n\\nFollowing your suggestion, we conducted additional experiments on ImageNet64 with random partitioning and added the results in the revised version (see line 493 and table 3). Random partitioning indeed shows worse performance. We think sequential partitioning works well on ImageNet because the classes are already ordered in a semantically meaningful manner.\\n\\n The central contribution of the paper is that 'it offers a flexible framework to increase generation speed by reducing student size, and increasing generation quality by training more students. This is seen in Table 3 as well. In fact, in Table 1, the authors show that the Students outperform the Teacher. Does this observation also hold for text-to-image SD models?\\n\\nIn Table 2, we do show that the students outperform the teacher (for 50 steps ODE, not for 200 steps SDE though). Moreover, in Figure 3 we also show decent performance for a smaller SD student. That smaller student is trained only on dog-related text prompts, and we would need to train many students to cover the whole prompt set and report a valid FID number. Due to limited computational resources and the complete coverage of the prompt set by the 4-student model, we did not train the full set of students at this size. We felt that the qualitative results were sufficient to validate the method given the other quantitative evidence in the paper.\\n\\n The partition function notation F(\\u22c5)=(\\u22c5,\\u22c5|\\u22c5) needs proper definition as it resembles conditional probability notation.\\n\\nThanks for the helpful suggestion! In the revised version, we have changed the notation to a subscript to avoid confusion.\\n\\n The MSD results appear to use a Student of equal size to the Teacher. 
Please include results for smaller-sized Students (as used in Fig. 5c) or explain their omission.\\n\\nWe included these results in the original version, where Table 1 shows smaller student results for ImageNet (with an additional ablation study on how pruning is performed). For SD results, see our response to the point above for an explanation of the omission.\", \"just_for_clarity\": \"In Table 1, a single Student is used for generation (the Student responsible for a particular prompt), that is why the NFE is 1, right?\\n\\nThat\\u2019s correct. For better clarity, we now additionally noted this in the caption of Figure 1.\\n\\nWe greatly appreciate your efforts and hope our response adequately addresses your concerns. Please let us know if you have any unresolved issues, and thanks for your help in this process.\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"> We believe the MSD framework is conceptually compatible with any distillation method and should always boost performance. We have included consistency distillation results on the 2D mixture-of-Gaussian setting in Appendix A.2, which confirms the generality.\\n\\nThanks, intuitively, even I feel that such a technique should boost performance. But it is always a good rule to verify these things experimentally. Thanks for this result nonetheless, it is helpful.\\n\\n> Our original submission states in line 428: \\u201cwe again employed a minimalist design: pass the prompts through the pre-trained SD v1.5 text encoder, pool the embeddings over the temporal dimension, and divide into 4 subsets along 4 quadrants\\u201d, and also in line 1134: \\u201cwe partition the prompts and corresponding images by the 4 quadrants formed by the first 2 entries of the embeddings, where the embeddings are pooled from the outputs of the SD v1.5 text embedding layers.\\u201d The 4 resulting partitions are disjoint; therefore, a single student can be selected without ambiguity during inference. 
We use the same mechanism during training and inference.\\n\\nI now understand how this makes the text conditions disjoint. However, I would like to see a code snippet/some reference code just to be sure. Further, I request the authors to include the 'the 4 resulting partitions are disjoint' point explicitly in the paper.\\n\\n> Following your suggestion, we conducted additional experiments on ImageNet64 with random partitioning and added the results in the revised version (see line 493 and table 3). Random partitioning indeed shows worse performance. We think sequential partitioning works well on ImageNet because the classes are already ordered in a semantically meaningful manner.\\n\\nThanks for these experiments, it is helpful. Can you further verify the random partitioning with 1 student? Just to be sure that random partitioning always performs worse?\\n\\n> In Table 2, we do show that the students outperform the teacher (for 50 steps ODE, not for 200 steps SDE though). Moreover, in Figure 3 we also show decent performance for a smaller SD student. That smaller student is trained only on dog-related text prompts, and we would need to train many students to cover the whole prompt set and report a valid FID number. Due to limited computational resources and the complete coverage of the prompt set by the 4-student model, we did not train the full set of students at this size. We felt that the qualitative results were sufficient to validate the method given the other quantitative evidence in the paper.\\n\\nThanks for this clarification. I request you to mention these points explicitly in the paper.\\n\\n> In the revised version, we have changed the notation to a subscript to avoid confusion.\\n\\n> We included these results in the original version, where Table 1 shows smaller student results for ImageNet (with an additional ablation study on how pruning is performed). 
For SD results, see our response to the point above for an explanation of the omission.\\n\\n> For better clarity, we now additionally noted this in the caption of Figure 1.\\n\\nThanks for these changes!\\n\\nThanks for detailed response, I will change my score after these minor points are addressed.\"}", "{\"title\": \"Response to Reviewer 7iFF, part 1\", \"comment\": \"Thank you for the thoughtful and helpful feedback.\\n\\n Currently this work lacks strong motivation or useful analysis. There are previous works like eDiff which specialize different diffusion models per timestep and also works exploring MoE for efficient inferen w.r.t efficiency as motivation more effective pruning, efficient architectures, caching across timesteps etc. have been proposed to achieve smaller models and/or lower latency.\\nThere are indeed many works exploring more efficient inference for diffusion models. We discuss these in Section 3, including eDiff-I. However, we would like to emphasize the following two points: 1) This work considers **single-step** distillation. This is a challenging task that significantly reduces the latency of diffusion models. Therefore, techniques such as MoE from eDiff-I, caching across timesteps, etc, don\\u2019t apply here. 2) Other techniques, such as effective pruning and efficient architectures, can yield smaller diffusion models. However, this is the first work that **combines** single-step distillation with smaller models in a non-trivial way. To be more specific, compared to previous works like SnapFusion[1] and MobileDiffusion[2] that separately performed pruning and step-distillation (which uses the pruned model as the teacher), our work uses the larger pretrained model as the teacher, which provides stronger guidance in all three stages. The alternative idea of pruning an already distilled teacher yields significantly worse performances (see details in the later part of this response).\\n\\n[1] Li, Yanyu, et al. 
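For illustration, the two class-partitioning strategies compared in this exchange (sequential vs. random splitting, cf. Table 3 of the paper) can be sketched as follows. This is a minimal reconstruction assuming 1000 ImageNet classes and 4 students; the function names are ours, not the authors' code.

```python
import numpy as np

def sequential_partition(num_classes: int, num_students: int) -> np.ndarray:
    # Contiguous blocks of class ids: e.g. classes 0..249 -> student 0, etc.
    # Works well when class ids are already semantically ordered.
    block = num_classes // num_students
    return np.minimum(np.arange(num_classes) // block, num_students - 1)

def random_partition(num_classes: int, num_students: int, seed: int = 0) -> np.ndarray:
    # Same balanced partition sizes, but the class-to-student assignment is
    # shuffled, destroying any semantic ordering of the class ids.
    rng = np.random.default_rng(seed)
    return rng.permutation(sequential_partition(num_classes, num_students))
```

Both functions return a length-`num_classes` array of student indices, so the two strategies differ only in which classes land together, not in partition size.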
\\\"Snapfusion: Text-to-image diffusion model on mobile devices within two seconds.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[2] Zhao, Yang, et al. \\\"Mobilediffusion: Subsecond text-to-image generation on mobile devices.\\\" arXiv preprint arXiv:2311.16567 (2023).\\n\\n This work explores splitting student into multiple models w.r.t dataset, while that is practical this work does not provide any novel insights nor significant performance boost.In case of text to image with SD1.5 FID boost only marginal by 0.15 combining all 3 objectives and 4 sets of parameters instead of one, which asks for more memory, more complicated orchestration etc.\\nWe argue that the boost to FID is not insignificant. We successfully provide improved performance over state-of-the-art approaches for multiple teacher models on several datasets. We do so with the same set of hyperparameters for all students. Also, 4 students can be trained separately and used separately at inference. This does not require more memory nor complicated orchestration \\u2014 we simply choose the student at inference time depending on the prompt. There **is** an increase in storage, but this is vanishingly cheap compared to the compute requirements.\\n\\nFurther, for our best-performing result, which uses the same student architecture as the teacher, only stages 1 & 2 are needed and these are both derived directly from the state-of-the-art approach we build on (DMD2). The stage 0 objective is used for achieving good performance with small models \\u2014 without stage 0 this fails. In summary, we believe the induced complexity in implementation and cost is minimal.\\n\\n While FID is evaluated, it is unclear how well MSD recovers marginal data distribution i.e., diversity of generation and resultant sampled/recovered distributions (posterior) w.r.t conditional i.e., something like LPIPS_Diversity and aggregated distribution Precision-Recall or other metrics. 
This helps understand if there is any feature collapse, mode collapse etc?\\n Why is atleast CLIP score not reported on either COCO-2017 or 2014 which could be informative as FID has its deficiencies, could consider HPSv2 or other metrics too for completeness.\\n\\nThank you for the recommendation. We have added CLIP score results in (the new) Appendix A. CLIP score again suggests that multiple students perform better than one student. Our number is slightly lower than SOTA, possibly because we trained on the COYO dataset instead of the LAION dataset, on which the OpenCLIP-G model is trained on. However, we think the 4-students vs 1-student result conveys the message well enough. As for other metrics like LPIPS_Diversity and Precision-Recall, most previous works did not report them, so we omitted them for now. However, we are happy to include them in our final version.\"}", "{\"metareview\": \"This paper presents a method for single step diffusion model distillation by turning the student to a MoE style conditioned on the data partition. It has received mixed reviews -- reviewer igyA is overall positive about the contributions, however other three reviewers challenged its novelty, generality and effectiveness. I like the the simplicity aspect of the work, and I don't particularly agree that it should be heavily criticized for lack of novelty due to the simplicity, however I do agree that certain critiques are valid. Especially, the question regarding the generality of the method wrt distillation baselines and introduced complexities and parameter overhead are valid concerns. Based on these considerations, I think this work is not ready but I encourage the authors to keep improving it.\", \"additional_comments_on_reviewer_discussion\": \"The authors tried to address some of the concerns in the rebuttal, including the novelty aspect, clarifications on experimental settings and more results. 
Although the rebuttal helped, the AC believes that the work still needs substantial improvements.\"}", "{\"title\": \"Response to Reviewer D8t3\", \"comment\": \"Thank you for the thoughtful and helpful feedback.\\n\\n The authors state in Line 257 that 'Conditions within each partition should be more semantically similar than those in other partitions, so networks require less capacity to achieve a set quality on their partition.' However, there are no experiments presented to support this claim. I believe that implementing this idea is challenging and will demand additional computational resources.\\n\\nThis line refers to a general guideline for partitioning the input data. We achieve this by sequentially partitioning semantically ordered ImageNet classes (Section 5.2), label clustering in our ablations (Table 3), and clustering text prompts for SD experiments (Section 5.3). We disagree that implementing this is challenging as this structure exists in many datasets of interest, and we have demonstrated several ways to achieve this.\\n\\n The statement in Line 15 that 'the student model\u2019s inference speed is limited by the size of the teacher architecture' is misleading, as the inference speed of the student model is independent of the teacher model; the student only depends on the teacher during the distillation training phase. I recommend proofreading the entire paper to ensure clarity and professionalism.\\n\\nIn the vast majority of diffusion distillation works, the student architecture matches the teacher architecture [1,2,3,4]. In fact, without this assumption, diffusion distillation is significantly harder to achieve. We were referring to this when we said the teacher architecture constrains the inference speed.\\n\\n[1] Song, Yang, et al. \\\"Consistency Models.\\\" International Conference on Machine Learning. PMLR, 2023.\\n\\n[2] Yin, Tianwei, et al. 
\\\"One-step diffusion with distribution matching distillation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[3] Xie, Sirui, et al. \\\"EM Distillation for One-step Diffusion Models.\\\" arXiv preprint arXiv:2405.16852 (2024).\\n\\n[4] Liu, Xingchao, Chengyue Gong, and Qiang Liu. \\\"Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow.\\\" The Eleventh International Conference on Learning Representations (ICLR). 2023.\\n\\n The proposed method introduces multiple student models; therefore, comparisons and analyses of the model parameters should be a focal point of the paper.\\n\\nPlease clarify what you mean by this, as we are unclear on what type of parameters or comparisons/analyses you intend.\\n\\nFor our method, there is an increase in storage of model parameters, but this is vanishingly cheap compared to the compute requirements. Additional students can be trained separately and used separately at inference. This does not require more memory nor complicated orchestration \\u2014 we simply choose the student at inference time depending on the prompt. \\n\\nWe have also added additional details to the paper clarifying our method\\u2019s storage usage.\\n\\nWe intentionally kept the same set of hyperparameters to illustrate the simplicity of our framework.\\n\\nIf you provide specific examples of comparisons or analyses you\\u2019d like to see, we can try including them.\\n\\n The proposed method leverages adversarial distillation with the expectation of enhancing the distillation effect. However, there is no comparison with standard distillation methods or other variants to validate the adversarial distillation\\u2019s anticipated advantages.\\n\\nWe include results with and without adversarial objectives. See Tables 1 and 2 and Figure 8 for more detailed comparisons.\\n\\n In the ablation studies section, only a quantitative analysis of the generation effect is presented. 
I believe that a qualitative analysis should also be included, as the paper aims to enhance generation quality.\\n\\nWe include several figures showing samples generated from our proposed method and those of the teacher and baseline methods (e.g., Figures 4 & 5). If you feel something specific is missing, please advise us, and we would happily include additional qualitative analysis.\\n\\nWe greatly appreciate your attention and hope our response adequately addresses your concerns. Please let us know if you have any unresolved issues, and thanks for your help in this process.\"}", "{\"title\": \"Response to Reviewer WJfY\", \"comment\": \"Thank you for the thoughtful and helpful feedback.\\n\\n The technical novelty is limited, as the approach simply extends the distillation of a teacher diffusion model to multiple student diffusion models. The distillation methods used, DM and ASD, are existing techniques, so the improvement offered by this approach is marginal.\\n\\nWe note that several technical challenges must be addressed to achieve multi-student distillation. Particularly when using smaller students initialized from scratch. Specifically, we introduced a teacher score-matching stage to provide a good initialization for the student. We performed a thorough ablation study to demonstrate its necessity and efficiency (see Table 1 and lines 418-421). Moreover, our method can be applied relatively seamlessly on top of existing distillation techniques. This means that we can benefit from additional advances in the distillation literature and provide a boost on top of any approach + methods to reduce student size without compromising capacity as much. Given that MSD strictly improves the performance of the baseline methods we implemented on top of (DMD & DMD2), we argue that these innovations will interest the research community and practitioners. We clarify these details further in the paper now.\\n\\n The performance improvement is also marginal. 
As seen in Tables 1 and 2, the improvement of MSD over DMD2 is small, yet it requires more training and model resources. Although the paper explores smaller student models, Table 1 shows a significant drop in performance, which reduces the practical contribution of this work in terms of lowering inference costs.\\n\\nWe argue that the boost to FID is not insignificant. We successfully provide improved performance over state-of-the-art approaches for multiple teacher models on several datasets. Furthermore, since submission, we have improved the quality of the small students (see the updated Figure 5). Note that the students are significantly smaller (<20% of the teacher size) \\u2014 competing pruning methods typically provide a much smaller reduction in model size. While better pruning techniques are not the focus of this work, we believe they can further improve the performance of the smaller students.\\n\\n Does this method require more model storage for practical deployment? Do the authors have any solutions to address this issue?\\n\\nThank you for raising this question. This point is of general interest to practitioners and we have thus included a discussion in Appendix C that reads as follows.\\n\\nA naive option for deployment is to use increased GPU memory to host all models simultaneously. However, this is impractical and wasteful as only a single student model needs to be used for each user request. In settings with less GPU memory than all students\\u2019 sum memory requirement, we must swap student models on and off GPUs. This incurs extra latency, however, in the few-GPU many-users setting, there are already prominent latency issues, such as users needing to queue for usage. In few-user settings, resources are likely being taken offline to save cost and thus there is start-up latency for fresh requests too. 
Therefore, we argue that the more interesting setting is in large distributed deployment.\\n\\nFor settings with more GPU memory than all students\\u2019 sum memory requirements, we can distribute the student models among a cluster of GPUs (as one would the teacher) and route each generation request to the appropriate student node. The routing layer is lightweight compared to the inference cost, so we pay little for it.\\n\\nIf the data has been partitioned uniformly according to user demand then the incoming requests are distributed uniformly among the student nodes. Therefore, we achieve equal throughput compared to the teacher without more overall model storage. However, finding such a partition is challenging, and user demand may change over time. This leaves finding the optimal allocation of resources to the student nodes an open problem. In practice, we expect that a reduced student model size would lead to an overall reduction in storage requirements compared to the teacher alone.\"}", "{\"comment\": \"> We have added the word \\\"disjoint\\\" in line 429. As for the code, unfortunately we are unable to share our original code. However we provide a pseudo code for the corresponding partition mechanism here:\\n\\n> We now explicitly mention this on line 473 in the revised version.\\n\\nThanks for these changes and code-snippet. If possible, it would be nice to have this snippet in supplementary somewhere, for reproducibility in future.\\n\\n> For one student, no partition is needed as it handles all input classes right? Please clarify if we misunderstood what you meant.\\n\\n\\nRight, my bad! \\n\\nThanks for the quick response and changes. I am satisfied with the response and changes. 
Hence, I am raising my score.\"}", "{\"title\": \"Response to Reviewer igyA, part 1\", \"comment\": \"Thank you for the thoughtful and helpful feedback.\\n\\n The paper focuses exclusively on DMD (Distribution Matching Distillation) and its extension ADA, which limits the demonstration of the method's generality. While the authors acknowledge this limitation, can the authors demonstrate preliminary results with other distillation approaches, particularly Consistency Distillation [1-3], on simple datasets like Mixture-of-Gaussian. Such experiments would better establish MSD's generality beyond DMD/ADA.\\n\\nThank you for raising this point. We believe the MSD framework is conceptually compatible with any distillation method and should always boost performance. We have included consistency distillation results on the 2D mixture-of-Gaussian setting in Appendix A.2, which confirms the generality.\\n\\n There is insufficient clarity regarding the text condition partitioning process in the latent space of the text-encoder during inference. As I understand, the authors partition the text conditions in latent space of text-encoder. In that case during inference, how is the appropriate Student model selected during inference? Specifically, given that text conditions are not naturally disjoint (unlike ImageNet-style datasets), could the authors provide details on how they determine which Student to use during inference for text-to-image generation? 
Do they use same text-encoder partitioning technique as in training, or is there a different mechanism?\", \"our_original_submission_states_in_line_428\": \"\\u201cwe again employed a minimalist design: pass the prompts through the pre-trained SD v1.5 text encoder, pool the embeddings over the temporal dimension, and divide into 4 subsets along 4 quadrants\\u201d, and also in line 1134: \\u201cwe partition the prompts and corresponding images by the 4 quadrants formed by the first 2 entries of the embeddings, where the embeddings are pooled from the outputs of the SD v1.5 text embedding layers.\\u201d The 4 resulting partitions are disjoint; therefore, a single student can be selected without ambiguity during inference. We use the same mechanism during training and inference. Please let us know if this remains unclear; we\\u2019d happily revise the text.\"}", "{\"summary\": \"This paper addresses the high computational cost associated with multistep inference in diffusion models by focusing on the speed-quality tradeoff in distillation. The authors propose a Multi-Student Distillation (MSD) framework to enhance both generation speed and output quality. In this framework, a teacher model is distilled into several single-step student models, each specialized for generating data under specific input conditions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Overall, this paper is well-written and easy to follow, with relatively new comparison methods.\", \"weaknesses\": \"1. The authors state in Line 257 that 'Conditions within each partition should be more semantically similar than those in other partitions, so networks require less capacity to achieve a set quality on their partition.' However, there are no experiments presented to support this claim. I believe that implementing this idea is challenging and will demand additional computational resources. 
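The quadrant-based routing described in this response can be sketched roughly as below. This is a hypothetical reconstruction: the real pipeline pools SD v1.5 text-encoder outputs, whereas here the token embeddings are plain NumPy arrays and all names are illustrative.

```python
import numpy as np

def pool_embedding(token_embeddings: np.ndarray) -> np.ndarray:
    # (seq_len, dim) token embeddings -> (dim,) pooled vector,
    # i.e. a mean over the temporal/token dimension.
    return token_embeddings.mean(axis=0)

def select_student(pooled: np.ndarray) -> int:
    # The signs of the first two entries pick one of 4 disjoint quadrants,
    # so exactly one of the 4 students is responsible for any given prompt.
    return int(pooled[0] >= 0) * 2 + int(pooled[1] >= 0)
```

Because the quadrants tile the plane disjointly, applying the same function at training and inference time routes each prompt to a unique student without ambiguity.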
I recommend including relevant experiments and source code to facilitate a comprehensive review.\\n\\n2. The statement in Line 15 that 'the student model\\u2019s inference speed is limited by the size of the teacher architecture' is misleading, as the inference speed of the student model is independent of the teacher model; the student only depends on the teacher during the distillation training phase. I recommend proofreading the entire paper to ensure clarity and professionalism.\\n\\n3. The proposed method introduces multiple student models; therefore, comparisons and analyses of the model parameters should be a focal point of the paper.\\n\\n4. The proposed method leverages adversarial distillation with the expectation of enhancing the distillation effect. However, there is no comparison with standard distillation methods or other variants to validate the adversarial distillation\\u2019s anticipated advantages.\\n\\n5. In the ablation studies section, only a quantitative analysis of the generation effect is presented. I believe that a qualitative analysis should also be included, as the paper aims to enhance generation quality.\\n\\n6. I can not conduct a comprehensive review of the technological accuracy of this paper, as it is empirical rather than theoretical, and the implementation code is not provided.\", \"questions\": \"Please refer to the Weaknesses and Questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7iFF, part 2\", \"comment\": \"What is total training compute required for proposed method? How does it compare to previous methods which do not specialize to sub-sets of data?\\n\\nWe have already included the training compute details in Appendix D, and now we added reference to them in the main text. 
Although the total compute is higher, we used significantly less compute per student than previous methods (ImageNet: 33% for stage 1, 57% for stage 2; SD1.5: 50% for both stages).\\n\\n To better justify and understand motivation of this work, it might be useful to consider pruning or smaller architecture of already distilled one-step model as a baseline or initialization in their work? How much of training compute can be exploited with better initialization compared to distilling from scratch, such analysis would better benefit community as it currently lacks novel insights to adopt broadly for practical applications too.\\n\\nThanks for raising this point. We mention in line 418 that models distilled from scratch \\u201cfail to reach competitive performance\\u201d, meaning that the model doesn\\u2019t even converge properly. Therefore, better initialization is a **necessity** rather than an improvement. Regarding pruning an already-distilled one-step model, we conducted additional experiments using the same amount of training compute and added the results in Table 1 and line 419. The results indicate **pre-pruning works better than post-pruning (2.88 FID vs 11.67 FID)**.\\n\\n Authors cite EM Distillation as justification to emphasize difficulty of training one-step model from scratch? While it is known from consistency distillation, rectified flow and other works too that training a one-step models is hard not sure why cite distillation method to justify training from scratch as this is also not focus of this work.\\n\\nThank you for pointing this out. We meant \\u201cdistilling one-step models from scratch\\u201d, meaning without initializing the student from the teacher\\u2019s weights. Since we are training students with smaller architectures than the teacher, we can not initialize them with the teacher's weights and consider training the students from scratch (with a trained teacher). 
This has been corrected in the revised version.\\n\\nWe greatly appreciate your attention and hope our response adequately addresses your concerns. Please let us know if you have any unresolved issues, and thanks for your help in this process.\"}", "{\"title\": \"Response to Reviewer D8t3\", \"comment\": \"Thanks for your response.\", \"we_detailed_in_table_3_as_well_as_line_492_505_that_we_performed_an_ablation_study_on_clustering_for_imagenet64\": \"we compared three different approaches: 1) random splitting 2) sequential splitting 3) splitting by **K-means clustering**. The numbers in Table 3 suggest that sequential splitting and splitting by clustering have similar performance, both better than random splitting. This indicates that sequential splitting does ensure semantic similarity (this can be confirmed by looking at the numeric order of ImageNet classes). For SD1.5, we also explicitly mentioned how to perform **clustering** in line 427 and line 1167. Please let us know if you still find it confusing.\\n\\nWe would also like to point out that a **significant innovation** of our approach is that we are the first to distill into a single-step student with **a smaller architecture** (i.e. the teacher score matching stage). We demonstrated that this stage is both **necessary** and **efficient** by conducting relevant ablation studies (see Table 1 and line 418-421).\\n\\nWe greatly appreciate your attention and hope our response adequately addresses your concerns. Please let us know if you have any unresolved issues, and thanks for your help in this process.\"}", "{\"title\": \"Confusion about the Response\", \"comment\": \"> This line refers to a general guideline for partitioning the input data. We achieve this by sequentially partitioning semantically ordered ImageNet classes (Section 5.2), label clustering in our ablations (Table 3), and clustering text prompts for SD experiments (Section 5.3). 
We disagree that implementing this is challenging as this structure exists in many datasets of interest, and we have demonstrated several ways to achieve this.\\n\\nThank you for your response. However, I feel that it did not fully address my question clearly and constructively. To confirm my understanding, Table 3 presents the ablation study on the number of students, and the ablation experiment on batch size is correct, correct? Additionally, Tables 1 and 2 focus on ablation studies for the student parameter count as well as teacher score matching, distribution matching, and adversarial distribution matching. Is that right?\\nI still do not see an ablation experiment that directly addresses my original question\\u2014whether clustering methods were used to classify different categories. Could you clarify how your response relates to this specific aspect? I would appreciate further clarification to better understand your approach to my concern. Thank you!\\n\\n\\n\\nI believe your article as a whole lacks significant innovation, but there is one innovative idea you mentioned that caught my attention, which is on line 257 of the paper. You stated that the division of **y labels** determines the input condition group that each student model is responsible for. You also mentioned that the conditions within each partition should be semantically more similar than those in other partitions, and clustering methods were used to ensure that the content generated by each student model is semantically more coherent. However, your subsequent experiments did not provide further confirmation of the effectiveness of this approach.\\n\\nWhat I suggest is that you could directly compare the method of **not using clustering** (and not ensuring that the student model generates semantically similar content) with the method of **using clustering** (and ensuring semantic similarity in the content generated by the student model). 
Specifically, you could compare the FID (Fr\\u00e9chet Inception Distance) values of the images generated by the model under these two different training settings. By demonstrating and analyzing the generated results, you would be able to verify the effectiveness of the clustering methods. Additionally, this would quantitatively validate how much improvement the clustering approach brings to the model's performance. This comparison could significantly strengthen your argument and substantiate the innovative value of the clustering approach.\\n\\nBased on the provided response, I do not feel that my concerns were fully addressed. As such, I prefer to keep my score unchanged. Thank you for your efforts.\"}", "{\"summary\": \"The paper introduces Multi-Student Distillation (MSD), an approach for diffusion model distillation, which improves existing single-step methods by increasing effective model capacity without added inference latency. MSD uses multiple student models, each optimized for a subset of conditioning inputs, to generate samples in a single step. This framework enhances flexibility by supporting multiple smaller student models to reduce generation time and enables initialization without requiring teacher weights. The authors validate MSD through experiments, achieving improved FID scores on various benchmarks with reduced parameters and comparable generation quality.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The organization and writing of the paper are clear, making it easy to understand and follow, and the review of related work is thorough.\", \"The discussion on data partitioning is valuable and aligns well with the positioning of the proposed method. 
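The clustering-based splitting referenced in this thread (K-means over class embeddings, cf. Table 3 of the paper) could look roughly like the toy sketch below. The deterministic initialization and function name are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def kmeans_partition(class_embeddings: np.ndarray, num_students: int,
                     iters: int = 20) -> np.ndarray:
    # Toy Lloyd's algorithm: assign each class embedding to its nearest
    # centroid, so semantically similar classes end up sharing one student.
    # Deterministic init: evenly spaced embeddings as starting centroids.
    idx = np.linspace(0, len(class_embeddings) - 1, num_students).astype(int)
    centers = class_embeddings[idx]  # fancy indexing -> writable copy
    for _ in range(iters):
        dists = np.linalg.norm(class_embeddings[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(num_students):
            if (labels == k).any():
                centers[k] = class_embeddings[labels == k].mean(axis=0)
    return labels
```

Comparing FID under this assignment against a random or sequential assignment, as the reviewer requests, would isolate how much the semantic-similarity property itself contributes.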
I also suggest the authors conduct more comprehensive experimental validation on this aspect.\"], \"weaknesses\": [\"The technical novelty is limited, as the approach simply extends the distillation of a teacher diffusion model to multiple student diffusion models. The distillation methods used, DM and ASD, are existing techniques, so the improvement offered by this approach is marginal.\", \"The performance improvement is also marginal. As seen in Tables 1 and 2, the improvement of MSD over DMD2 is small, yet it requires more training and model resources. Although the paper explores smaller student models, Table 1 shows a significant drop in performance, which reduces the practical contribution of this work in terms of lowering inference costs.\"], \"questions\": \"Does this method require more model storage for practical deployment? Do the authors have any solutions to address this issue?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Request for further discussion\", \"comment\": \"Hello,\\n\\nThank you for taking the time to provide a careful review of our submission. As the discussion period is nearing an end, we hope that you will evaluate our revisions and response. We believe that we have addressed all points of concern and clarification in your original review and would greatly appreciate the opportunity to discuss these with you further.\\n\\nThank you.\"}", "{\"summary\": \"This paper introduces a 'Multi-Student Diffusion Distillation' framework. The core idea behind the proposed method stems from Mixture-of-Experts. Particularly, the paper proposes to distill a pre-trained Diffusion model (Teacher model) into multiple Student model, where each Student is responsible for learning of a subset of conditions. 
This effectively increases the model capacity by amortizing the set of conditions into smaller subsets, where a smaller Student model is responsible for the corresponding subset. There are a few key points pertaining to the proposed method: (a) a partitioning/filtering function to partition the set of conditions into subsets, some of the desired features of such a function are described in Section 4.1; (b) distillation into multiple Student models, where each Student is responsible for a subset of conditioning variables; (c) support for smaller-sized Student models, unlike previous methods which employ same-sized Student models; (d) a teacher score matching phase for smaller-sized Student networks for initialization and better training. The paper primarily deals with Distribution Matching Distillation (DMD) and its extension Adversarial Distribution Matching (ADM). The proposed method achieves SoTA FID on ImageNet 64x64 for one-step generation. The paper is well written and presented. The idea is very intuitive; however, it is interesting to see it working in practice on models like StableDiffusion.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well written and presented. I enjoyed reading the paper. Though MoE is not a new idea, using it for Distillation is new; further, using it to accelerate inference is commendable.\\n2. The idea of using Multiple-Students for distillation for an inference time-quality tradeoff is quite intuitive. Moreover, assigning a student to a subset of conditions is a smart choice to increase the capacity of the overall model.\\n3. Authors solve the obvious problem with the above choice - initialization from scratch - by introducing an additional TSM stage which gives a good initialization, allowing for the further distillation stage.\\n4. The empirical results are quite strong and encouraging. The proposed method achieves SoTA FID on ImageNet 64x64. 
Further, it shows encouraging results on distilling StableDiffusion, performing better than several one-step generation methods.\", \"weaknesses\": \"1. The paper focuses exclusively on DMD (Distribution Matching Distillation) and its extension ADM, which limits the demonstration of the method's generality. While the authors acknowledge this limitation, can the authors demonstrate preliminary results with other distillation approaches, particularly Consistency Distillation [1-3], on simple datasets like a Mixture of Gaussians? Such experiments would better establish MSD's generality beyond DMD/ADM.\\n2. There is insufficient clarity regarding the text condition partitioning process in the latent space of the text encoder during inference. As I understand, the authors partition the text conditions in the latent space of the text encoder. In that case, how is the appropriate Student model selected during inference? Specifically, given that text conditions are not naturally disjoint (unlike ImageNet-style datasets), could the authors provide details on how they determine which Student to use during inference for text-to-image generation? Do they use the same text-encoder partitioning technique as in training, or is there a different mechanism?\\n3. The authors outline several desired properties for the partitioning function in Section 4.1, yet the implemented solution simply uses consecutive classes as partitions (validated in Section 5.4). Could you compare a random partitioning strategy with your current approach? This would be valuable to determine whether the specific partitioning method offers advantages over any balanced data division.\\n4. The central contribution of the paper is that 'it offers a flexible framework to increase generation speed by reducing student size, and increasing generation quality by training more students.' This is seen in Table 3 as well. In fact, in Table 1, the authors show that the Students outperform the Teacher. 
Does this observation also hold for text-to-image SD models? \\n5. Minor:\\n\\t1. The partition function notation $F(\\cdot) = (\\cdot, \\cdot | \\cdot)$ needs proper definition as it resembles conditional probability notation.\\n\\t2. The MSD results appear to use a Student of equal size to the Teacher. Please include results for smaller-sized Students (as used in Fig. 5c) or explain their omission.\\n\\t3. Just for clarity: In Table 1, a single Student is used for generation (the Student responsible for a particular prompt), that is why the NFE is 1, right? \\n\\n\\nOverall, the paper has merit. Albeit a simple idea, using MoE for distillation and accelerating inference is commendable. However, I am worried about the scope of the paper (see Weaknesses). I would like the authors to address the questions/doubts listed above. I am settling on a score of 6, and I am open to increasing it once these comments are addressed.\\n\\n[1] Song, Yang, et al. \\\"Consistency models.\\\" arXiv preprint arXiv:2303.01469 (2023).\\n\\n[2] Zheng, Jianbin, et al. \\\"Trajectory consistency distillation.\\\" arXiv preprint arXiv:2402.19159 (2024).\\n\\n[3] Luo, Simian, et al. \\\"Latent consistency models: Synthesizing high-resolution images with few-step inference.\\\" arXiv preprint arXiv:2310.04378 (2023).\\n\\n\\n----------------------------------------\\n\\n**Post Rebuttal**\\n\\nI am satisfied with the authors\\u2019 response and rebuttal, as they address my concerns. While the method may initially appear straightforward, the paper tackles subtle yet significant challenges, which leads me to support its acceptance.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Following your advice and questions, we have made several improvements to our submission.\\n\\nWe added some additional results, including improved small student models on SD1.5 (Figure 5), random ImageNet class partitioning (Table 3), post-distillation on single-step students (Table 1), a brief exploration of consistency distillation (Appendix A.2), and CLIP scores (Appendix A.1), to provide a more complete picture.\\n\\nWe appreciate your continued engagement and hope our responses to each of you adequately address any remaining concerns.\"}", "{\"title\": \"Request for further discussion\", \"comment\": \"Hello,\\n\\nThank you for taking the time to provide a careful review of our submission. As the discussion period is nearing an end, we hope that you will evaluate our revisions and response. We believe that we have addressed all points of concern and clarification in your original review and would greatly appreciate the opportunity to discuss these with you further.\\n\\nThank you.\"}", "{\"title\": \"Response to Reviewer igyA\", \"comment\": \"Thanks for the kind response!\\n\\n> I now understand how this makes the text conditions disjoint. However, I would like to see a code snippet/some reference code just to be sure. Further, I request the authors to include the 'the 4 resulting partitions are disjoint' point explicitly in the paper.\\n\\nWe have added the word \\\"disjoint\\\" in line 429. As for the code, unfortunately we are unable to share our original code. However, we provide pseudocode for the corresponding partition mechanism here:\\n```\\nstudent_id = 0 # 0-3, as an input argument\\n\\ndef filter_fn(item): # item is a datapoint, where item['embedding'] is the text embedding from SDv1.5 text encoder. 
In training these are pre-computed to save resources, in inference these are computed on-the-fly.\\n centroids = torch.zeros(4, dim) # dim is the embedding dimension, 784 in the case of SDv1.5\\n centroids[:,0:2] = torch.tensor([[1,1], [1,-1], [-1,1], [-1,-1]]) # centroids along the 4 quadrants in the first two entries; doing nearest-neighbor partition on these centroids is equivalent to dividing along the 4 quadrants\\n return torch.argmin(torch.norm(centroids - item['embedding'], dim=1)).item() == int(student_id) # filter out relevant datapoints\\n\\ndataset = dataset.filter(filter_fn) # Then proceed with dataloader construction, etc.\\n\\n# During inference, we do something like\\nstudent_id = torch.argmin(torch.norm(centroids - item['embedding'], dim=1)).item()\\nmodel.load_state_dict(state_dicts[student_id]) # Then proceed with inference\\n```\\n\\n> Can you further verify the random partitioning with 1 student? Just to be sure that random partitioning always performs worse?\\n\\nFor one student, no partition is needed as it handles all input classes, right? Please clarify if we misunderstood what you meant.\\n\\n> Thanks for this clarification. I request you to mention these points explicitly in the paper.\\n\\nWe now explicitly mention this on line 473 in the revised version.\\n\\nThanks again for these suggestions!\"}
9SmukfhJoF
3DGS-Det: Empower 3D Gaussian Splatting with Boundary Guidance and Box-Focused Sampling for 3D Object Detection
[ "Yang Cao", "Yuanliang Ju", "Dan Xu" ]
Neural Radiance Fields (NeRF) is a widely adopted class of methods for novel view synthesis. Some works have introduced it into the 3D Object Detection (3DOD) task, paving the way for promising exploration of 3D object detection based on view synthesis representation. However, NeRF has inherent limitations: (1) limited representational capacity for 3DOD as an implicit representation, and (2) slow rendering speed. Recently, 3D Gaussian Splatting (3DGS) emerged as an explicit 3D representation with faster rendering, overcoming these limitations. This paper is the first to introduce 3DGS into 3DOD and identifies two primary challenges: (a) 3DGS mainly focuses on 2D pixel-level parsing instead of 3D geometry, leading to unclear 3D spatial distribution and indistinct differentiation between objects and background, which hinders 3DOD; (b) 2D images often contain many background pixels, resulting in densely reconstructed 3DGS with noisy points representing the background, impacting detection. To address (a), we consider that 3DGS reconstruction originates from 2D images and design an elegant and efficient solution by incorporating **2D Boundary Guidance** to enhance the spatial distribution of 3DGS. Specifically, we perform boundary detection on posed images, overlay the boundaries on the images, and then train 3DGS. Interestingly, as shown in figure 1, this precise strategy significantly improves the spatial distribution of Gaussians and brings clearer differentiation between objects and background. For (b), we propose a **Box-Focused Sampling** strategy using 2D boxes to establish object probability spaces, allowing probabilistic sampling of Gaussians to retain more object points and reduce background noise. 
Benefiting from 2D Boundary Guidance and Box-Focused Sampling, our final method, **3DGS-DET**, achieves significant improvements (**5.6 points** on mAP0.25, **3.7 points** on mAP0.5) over the baseline version without the proposed two strategies, while introducing **zero** additional learnable parameters. Furthermore, 3DGS-DET significantly outperforms the state-of-the-art NeRF-based method, NeRF-Det, on both ScanNet and ARKitScenes. We commit to releasing all code and data within one month of paper acceptance.
[ "3D Gaussian Splatting", "3D Object Detection", "Neural Radiance Fields" ]
https://openreview.net/pdf?id=9SmukfhJoF
https://openreview.net/forum?id=9SmukfhJoF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "sALyYLvKsF", "pE551tRQXA", "g29qYUSYk1", "cEnHfSVLKB", "QPa55Lyamp" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730678301120, 1730701791082, 1730462993954, 1730431013381, 1731654412256 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3361/Reviewer_VGvD" ], [ "ICLR.cc/2025/Conference/Submission3361/Reviewer_fDRh" ], [ "ICLR.cc/2025/Conference/Submission3361/Reviewer_drQV" ], [ "ICLR.cc/2025/Conference/Submission3361/Reviewer_XFhh" ], [ "ICLR.cc/2025/Conference/Submission3361/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The manuscript introduces an approach to 3D object detection based on Gaussian Splats (GS) reconstructions. The method uses two mechanisms to guide the 3D GS reconstruction towards reconstructing the objects in the scene with high boundary fidelity before using a standard sparse 3D convolution-based 3D object detector on the GS parameters. The GS reconstructions are guided by (1) coloring object boundaries in the images and (2) resampling Gaussians with higher probability if they fall within an object frustum. The object boundary coloring leads to strong edges in the images which via the GS optimization translate into more Gaussians on object boundaries. The frustum resampling down-weighs background surfaces and focuses Gaussians to reconstruct primarily the objects in the scene. As a result the 3D detector has an easier job and performs well.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The guidance of the GS reconstruction via boundary coloring and object frustum weighted resampling is clever and effective as can be observed qualitatively in the figures in the paper. These also quantitatively lead to improved 3D object detection as shown via ablation studies. 
These guidances in effect condition the GS reconstruction on available 2D object evidence, which is a very interesting paradigm. In the extreme, one would aim to only reconstruct the objects?\", \"The improvement of 3D detection performance on ARKitScenes relative to NeRF-Det is impressive. I am quite curious to see some qualitative examples showing the improvements. For example, the chair class jumped from 4% to 70.3%. I would love to see a qualitative comparison on a scene with many chairs.\", \"The paper is well written and has good figures to support the claims (effects of the two guidances on GS centers), as well as good qualitative figures showing the 3D bounding box detections (although I would really like to see some on ARKitScenes - see weaknesses).\"], \"weaknesses\": [\"The ablation studies are effective but could be improved by adding lower bounds (no guidance mAP to Table 3) and upper bounds (sampling GS according to GT OBBs to Table 4). The center-point guidance seems unnecessary - clearly a pixel-level optimization algorithm like GS will not be able to leverage the guidance since it won't be multi-view consistent.\", \"That the 2D boundary guidance works is to some degree surprising, since those 2D boundaries often stem from occlusion boundaries where multiview consistency across larger baselines is not given. I would love to see some examples where the 2D boundary is an occlusion boundary and the camera observes it from different angles - does this guidance still work?\", \"It is unclear why the box-focused sampling cannot re-use the segmentation masks. All that is needed is to assign object probabilities to Gaussian splats, which does not need frusta to be unprojected. The GS centers can simply be projected into all images to assign mask confidence values from Grounded SAM. This would make for a simpler story and system.\", \"The main results Table 1 incorrectly identifies only NeRF-Det as a baseline for the proposed approach. 
Other methods like ImGeoNet, CN-RMA, and ImVoxelNet also rely only on posed images and are valid baselines for the proposed method. At a minimum, the input modalities of each related method need to be called out. As is, the table is not useful in providing comparisons to the right kind of related work that also uses multiview posed images.\", \"Feature-metric GS/NeRF reconstructions such as in LangSplat, EgoLifter, or LERF could in principle be prompted for 3D segmentations, which of course can be used to extract 3D bounding boxes. Any one of those would be a great additional point of comparison.\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to use 3D Gaussian Splatting (3DGS) as the representation for 3D object detection. To make 3DGS work well for the 3D object detection task, the authors tackle two obstacles: (1) due to the nature of Gaussians, the object boundaries are ambiguous and hard to distinguish, and (2) an excessive amount of Gaussian blobs in the background. To solve (1), the authors rely on 2D image boundary guidance, and to resolve (2), the authors propose \\\"box-focused\\\" sampling. Experimental results show that the proposed method outperforms the current SotA (NeRF-Det) by a reasonable margin on ScanNet and ARKit datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"To my best knowledge, this is the first paper that got 3D object detection working using the 3D Gaussian Splatting representation\", \"The performance of the method is good. The proposed method outperforms SotA methods such as NeRF-Det by a reasonable margin\", \"The presentation of the paper is good. 
The paper is relatively easy to follow.\"], \"weaknesses\": [\"The proposed method seems to rely on some heuristics, and some hyper-parameters seem to be set to work well for the particular test sets. For example, \\\"Gaussian blobs not belonging to any frustum are assigned a small probability pbg, set to 0.01 in practice.\\\" lines (315-316)\", \"The paper does not discuss anything related to latency. The proposed \\\"boundary guidance\\\" and \\\"box-focused sampling\\\" could take more computational time than the baseline NeRF-Det\", \"The upper-bound performance of the proposed method is bounded by the various methods used to provide \\\"boundary guidance\\\" and \\\"box-focused sampling\\\", such as Grounded SAM, the Suzuki-Abe algorithm, etc. If NeRF-Det were aided by these segmentation methods, it could potentially perform better as well\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a guidance approach for 3DGS that facilitates the separation of 3D Gaussians representing distinct 3D object instances. Additionally, it introduces guided downsampling of the generated Gaussians, ensuring that most are associated with object instances rather than the background. The method shows improvements on two frequently used datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper proposes the first method that directly considers 3D Gaussians for 3D object detection.\", \"The method achieves good accuracy on two public datasets.\"], \"weaknesses\": [\"I have mixed feelings about the paper. On the one hand, the contributions appear questionable or minor. On the other, the method achieves competitive accuracy compared to baselines, nearly matching methods that do not rely on Gaussians. 
My primary concerns are as follows:\", \"The boundary guidance lacks justification. In Eq. 9, the authors incorporate a region constraint by linearly mixing the RGB color and a region-specific color at each pixel. However, they do not clarify how this region color is chosen \\u2014 a critical detail, as the choice of color heavily impacts the optimization. For example, if the chosen region color is similar to nearby content (e.g., a white region color on a brown table near a white wall), the boundary constraint would fail. Additionally, the region's color likely interferes with both the object and surrounding colors, ultimately altering the Gaussians' colors, which may no longer represent the original scene. Maybe I misunderstood, but this choice seems odd; an alternative could have been to increase the number of channels per pixel or use the approach of LangSplat [a] to get the guidance.\", \"Section 3.4 describes Gaussian subsampling to retain the most representative objects. However, it is unclear how this impacts final accuracy. This approach seemingly does not enhance object detection but merely reduces the Gaussian cloud density. While this could affect the metrics, the practical value remains uncertain. Retaining a single well-localized Gaussian per object might optimize segmentation accuracy but would likely impair novel view synthesis and geometric accuracy. This raises the following point:\", \"It would be essential to assess novel view synthesis and geometric accuracy. Does this approach compromise other objectives of 3DGS, or do they remain intact?\"], \"minor_points_and_typos\": [\"Section 3.4: The process here is somewhat unclear. To my understanding, Gaussian splatting with boundary guidance runs first, followed by Gaussian subsampling to retain those likely corresponding to objects. This appears to involve guided sampling based on unspecified probabilities. The precise calculation of these \\\"probability\\\" scores is not explained. 
Regardless of my interpretation, the authors should clarify this part, as much of it is speculative.\", \"L080/L097: \\\"empower\\\" \\u2192 \\\"improve\\\"\", \"L085: \\\"distribution that is more differentiable\\\" > I don't understand what the authors want to say here.\", \"L087: \\\"to establish 3D object probability spaces\\\" > \\\"object probability spaces\\\" is misleading. This term refers to a 3D subspace where objects are located, and it is unrelated to probabilities. The authors use \\\"probability\\\" inconsistently, such as in Eq. 14, to justify heuristics. Heuristics are acceptable, but mislabeling them as probabilities is not.\", \"L155: \\\"significantly enhances the spatial distribution\\\"\\u2014the opposite is true; it restricts distribution to independent object representations, losing background information.\", \"L316: What is \\\"independent probabilistic sampling\\\"?\", \"Fig. 2 does not effectively illustrate the pipeline. It should be improved.\", \"[a] Qin, M., Li, W., Zhou, J., Wang, H. and Pfister, H., 2024. Langsplat: 3d language gaussian splatting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20051-20060).\"], \"questions\": \"The authors can find the most important questions under the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper pioneers using 3DGS for 3DOD, tackling two main challenges: (i) unclear spatial distribution of Gaussians, which affects object-background separation, and (ii) excessive background noise. They propose the 2D boundary guidance to improve spatial clarity and a Box-Focused sampling strategy for efficient object-focused sampling. The experiments show the improvement of object detection compared to their baseline and nerf-based method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
This paper is the first to solve the 3DOD problem on 3DGS.\\n2. The improvement over NeRF-based detection is significant.\", \"weaknesses\": \"1. Combining neural rendering and 3D object detection is a relatively new direction. In the introduction, I think the authors should discuss in more depth the significance or application of performing 3D object detection with novel view synthesis, especially for the first paper that uses 3D Gaussians for object detection. In NeRF-based methods (NeRF-Det), neural rendering and object detection mutually enhance each other; accurate geometry improves object detection, while the object detection task promotes geometric learning, ultimately enhancing the quality of both object detection and neural rendering. However, in this paper, the experiments only analyze 3D object detection, without evaluating the quality of rendering. Therefore, it is not possible to thoroughly investigate the interplay between the rendering of 3D Gaussians and the object detection task under this method. In conclusion, why not independently perform the tasks of rendering and 3D object detection, if both yield better results when performed independently? Besides, for NeRF-based 3DOD, they train a feed-forward network for generalizable neural rendering, which could be beneficial for perception. But this paper still needs per-scene optimization (to my understanding).\\n\\n2. The paper mentions that performing 3D object detection on 3DGS has higher rendering speed, but it does not compare the differences with NeRF-based methods in terms of training time, rendering time, and detection time.\", \"questions\": \"1. The paper mentions that background Gaussians affect detection, but there are backgrounds and target objects in point clouds as well. Do point cloud-based 3DOD methods face the same issue? If not, why not directly perform point cloud object detection and rendering tasks separately?\\n\\n2. 
From my understanding, this paper enhances the accuracy of 3D object detection by unprojecting 2D priors, i.e., edges and 2D detection information, into 3D space based on 3DGS. This enhancement seems reasonable. However, if this process only improves 3D object detection while not enhancing or even degrading neural rendering, it can be seen as a strategy solely for improving 3D object detection. The use of 3D Gaussians might not be necessary. For instance, could converting 3D Gaussians into point clouds for 3D object detection after training achieve similar improvements?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We are grateful to the reviewers, ACs, and PCs for your time, comments, and interest in our work. After discussion, we've decided to withdraw the submission this time. We will incorporate valuable suggestions and address misunderstandings in the next version. Thank you once again.\"}" ] }
9SYczU3Qgm
Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold
[ "Lazar Atanackovic", "Xi Zhang", "Brandon Amos", "Mathieu Blanchette", "Leo J Lee", "Yoshua Bengio", "Alexander Tong", "Kirill Neklyudov" ]
Numerous biological and physical processes can be modeled as systems of interacting entities evolving continuously over time, e.g. the dynamics of communicating cells or physical particles. Learning the dynamics of such systems is essential for predicting the temporal evolution of populations across novel samples and unseen environments. Flow-based models allow for learning these dynamics at the population level - they model the evolution of the entire distribution of samples. However, current flow-based models are limited to a single initial population and a set of predefined conditions which describe different dynamics. We argue that multiple processes in natural sciences have to be represented as vector fields on the Wasserstein manifold of probability densities. That is, the change of the population at any moment in time depends on the population itself due to the interactions between samples. In particular, this is crucial for personalized medicine where the development of diseases and their respective treatment response depend on the microenvironment of cells specific to each patient. We propose *Meta Flow Matching* (MFM), a practical approach to integrate along these vector fields on the Wasserstein manifold by amortizing the flow model over the initial populations. Namely, we embed the population of samples using a Graph Neural Network (GNN) and use these embeddings to train a Flow Matching model. This gives MFM the ability to generalize over the initial distributions, unlike previously proposed methods. We demonstrate the ability of MFM to improve the prediction of individual treatment responses on a large-scale multi-patient single-cell drug screen dataset.
[ "Flow matching", "Dynamics", "Cell dynamics" ]
Accept (Poster)
https://openreview.net/pdf?id=9SYczU3Qgm
https://openreview.net/forum?id=9SYczU3Qgm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ywcTlc2M0h", "yqvpp3eWwU", "x5sYU7Tp0s", "wiN8CTC0GP", "wBgwC5NvFi", "vnH8n2rJMB", "uAXM9hE5lJ", "tv8xz9dI0f", "szmCwbrVtu", "rlYJXDSGwj", "pgGElcjWln", "nRhQ5RJvJO", "m2YDvHTcax", "fayDd1KWgl", "cRhPU4lWzR", "c0jEwhA2O5", "XolH9747AL", "PQzjLxPyxe", "IlPW77XpQD", "GkFSDyZg4C", "GVyHS4yCWl", "FSrcDn46ER", "F0TJ2s0X1c", "EgQlHDJxR7", "DkY6oJUEcw", "CNp3FWMq3B", "A9tJAJq8sA", "8syZjQC5CA", "7iiVIZt8gc", "7KoBWSXZ50", "6DiA7JVnpM", "41LAAjb6A9", "3PbcpZOBWc", "39OQTPMb3F", "19SF1aFYS2", "0YqB4B5kIf" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732231210460, 1732230912461, 1732231122611, 1733165973496, 1732477864222, 1732629400912, 1732477094716, 1730659060722, 1732771840981, 1730063064713, 1732633769819, 1732681643750, 1732477747299, 1733226283682, 1730794756784, 1732231554627, 1732771394577, 1734893612798, 1732771676575, 1732231746226, 1732230808482, 1737523829433, 1732673333793, 1733259356066, 1733078690191, 1732476973387, 1732231337517, 1732231597805, 1732824583412, 1732231705818, 1733094418489, 1732773889453, 1732772157951, 1730697585018, 1733078766041, 1732231446511 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_jcAc" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_Y1RT" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_kKjh" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_kKjh" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_Y1RT" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_jcAc" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_jcAc" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Area_Chair_hFaM" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_Y1RT" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_Y1RT" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_Kj6P" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Reviewer_Kj6P" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ], [ "ICLR.cc/2025/Conference/Submission7287/Authors" ] ], "structured_content_str": [ "{\"title\": \"(2/3)\", \"comment\": \">Further, are there ways to better understand the overall quality of the 
predictions? Are there downstream conclusions of the dataset paper that can or cannot be reproduced by the various predictive algorithms? Could you plot some of the predicted distributions just as they did in Figure 3?\\n\\nThis is an excellent question. To draw insights towards downstream conclusions on the organoid drug-screen dataset, we compute and plot 2-D projections of embeddings outputted from the population embedding model $\\varphi(p_0; \\theta)$ (see Figure 7 in Appendix F.1). We observe that we recover grouping of patient populations which are generally consistent with the findings of (Zapatero et al. 2023). For instance, populations from patients 99 and 109 (both chemorefractory) are grouped together, and populations from patients 23 and 27 (both chemosensitive) show lower pairwise distances. We discuss these results on lines 1200-1205 in Appendix F.1.\\n\\n>The generalizability of the results might be compromised by the authors' picking specific patients for the patient holdout setup (see Section C.2). Can the experiment be performed over several splits?\\n\\nWe thank the reviewer for bringing up this point. We agree and have updated the results for the patient split to include evaluation across left-out populations from 3 different patients. We report the mean and standard deviation of performance metrics across these 3 splits (please see updated Table 3). We observe that MFM outperforms baselines in this setting while also exhibiting robustness across splits.\\n\\n>In the same vein, I do not understand how the authors arrive at the error estimates for their experimental results if only one split is considered. I am afraid these estimates could be vastly underestimating the actual uncertainty. This is important since MFM sometimes only shows a small edge compared to FM.\\n\\nFor the replica split (Table 2), we report uncertainties over 3 independent model seeds (please refer to the last sentence of the paragraph on lines 365-373). 
Specifically, we use this to show that models are robust to changes in model initialization. Following the helpful suggestion of the reviewer, for the patients split (Table 3) we report mean and standard deviation over 3 different left-out patient splits to show performance robustness across different patients.\\n\\n>**Independent matching may be suboptimal for all considered methods:** Tong et al (2023) propose to train flow matching by coupling base and target distribution via Optimal Transport instead of an independent coupling. This is a simple tweak that could be applied to all considered FM methods, i.e., FM, CFM, and MFM. I am curious if this would improve results across the board or specifically help FM and CFM because those methods currently have no way of accounting for different source distributions. Potentially, this could be a much simpler fix than MFM.\\n\\nWe thank the reviewer for their valuable point. We have added experiments that incorporate optimal transport (OT) couplings between source and target distributions for the synthetic letters dataset (Table 4) and for the organoid drug-screen dataset (Table 2). \\n\\nWe observe that using OT couplings generally improves performance across all methods, with MFM or MFM-OT still yielding the best performance on the left-out test data. We note that adding minibatch OT couplings alone does not provide a way of accounting for generalizing across source distributions.\\n\\nWe thank the reviewer again for this comment. We believe these additional experiments help clarify that it is not the use of optimal transport that allows generalization across initial populations, but the population embedding in MFM.\\n\\n>How is CFM set up on the patient data set? I assume you condition on the drug, while ignoring the base distribution/patient identity. Is this correct, or are you also feeding the patient identity as condition? If the latter, I would recommend the former.\\n\\nIn short, we report results for both. 
Namely, all the methods are conditioned on the treatment, then FM ignores the base distribution/patient identity (the former in your question), but CFM (as we indicate via CGFM) additionally conditions on the population identity conditions (the latter in your question). Each patient has numerous $(p_0, p_1)$ population pairs. We have updated lines 483-485 and Appendix D.2 to clarify this. For CGFM, we condition on these population identities as known conditions. Population identity conditions for CGFM are represented as one-hot vectors, hence the model cannot generalize to unseen population identities. We state this on lines 426-427.\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback and constructive comments, which gave us an opportunity to improve the clarity of our work significantly. We are pleased to see that the reviewer found our work intriguing and intuitively meaningful. We now address key clarification questions raised by the reviewer.\\n\\n>First and foremost, the article is written in a very confusing way.\\n\\nThank you for bringing this up, we incorporated the changes you suggested into the manuscript.\\n\\n>It is unclear what a Wasserstein manifold is to start with.\\n\\nThank you for the suggestion, we updated the manuscript correspondingly (see Appendix A, lines 157-161). Note that the Wasserstein manifold is \\\"the Riemannian interpretation of the Wasserstein distance developed by Otto\\\" (Ambrosio et al., page 168) and is a well-defined and a well-studied abstract formalism. Indeed, the metric of the Wasserstein manifold is the Wasserstein metric, and the tangent space is defined by the gradient flows as we discuss in lines 157-161. Figure 2 is just an illustration rather than a precise depiction of the Wasserstein manifold. Note that we clearly state in the caption and describe in the text that this is the space of distributions $\\\\mathcal{P}_2(X)$. 
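For readers of this thread, the standard objects involved can be recapped as follows (textbook definitions in our own notation, following Ambrosio et al.):

```latex
% The 2-Wasserstein space and its formal Riemannian (Otto) structure.
\[
  \mathcal{P}_2(X) = \Big\{ p \,:\, \int_X \|x\|^2 \, p(\mathrm{d}x) < \infty \Big\},
  \qquad
  W_2^2(p, q) = \min_{\pi \in \Pi(p, q)} \int \|x - y\|^2 \, \pi(\mathrm{d}x, \mathrm{d}y),
\]
% where \Pi(p, q) denotes the couplings with marginals p and q. Tangent
% vectors at p are (gradient-field) velocities v_t acting through the
% continuity equation:
\[
  \partial_t p_t + \nabla \cdot ( p_t \, v_t ) = 0 .
\]
```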
We are confident that no reader would take the sphere to be a precise depiction of the infinite-dimensional space of distributions.\\n\\n>At least an appendix with all definitions should be added.\\n\\nThank you for your suggestion, we added Appendix A including the definitions of the concepts we are using in the paper. We believe this increases the clarity of our method.\\n\\n>The table included in the text seems to show good results, but those in the appendix seem to show that the model is not particularly outperforming.\\n\\nThank you for raising this concern. We respectfully disagree that the appendix shows that the model is not particularly outperforming. We believe this misunderstanding may be due to the inclusion of evaluation on the train data for reference in the appendix. We note that we primarily care about model performance on the test data and not the train data in this work. To clarify this, we have added bolding to the best performing method in the appendix tables to further illustrate that our model outperforms baseline methods. We hope this clarifies that MFM outperforms baselines in all settings on the test data.\\n\\nWe thank the reviewer for their valuable feedback and great questions. We believe that, through incorporating the feedback provided by the reviewer, we have improved the clarity and quality of our manuscript. We hope that our rebuttal fully addresses all the salient points raised by the reviewer and we kindly ask the reviewer to potentially upgrade their score if the reviewer is satisfied with our responses and updated manuscript. We are also more than happy to answer any further questions that arise.\"}", "{\"title\": \"(1/3)\", \"comment\": \"We thank the reviewer for their time and effort in constructing this in-depth review of our paper, offering meaningful suggestions, and insightful questions. 
We are happy to see that the reviewer found our work well motivated, that our \\\"idea to solve this problem is simple and elegant\\\", and found our manuscript \\\"overall well-written and organized\\\". Below we address the clarifying questions and suggestions brought up by the reviewer.\\n\\n>First of all, the authors consider ICNN/CellOT as a baseline that can accommodate varying baseline distributions and conditioning. I think the introduction to ICNN presented in Section 4 is slightly misleading: in lines 348-350, it is sometimes not clear what \\\"the method\\\" refers to. ICNN can generalize to new distributions, but the novelty in MFM is to take additional interactions into account.\\n\\nThank you for pointing this out. What we refer to as the \\\"method\\\" here is \\\"CellOT\\\", while the base architecture of CellOT is an ICNN. Throughout the paper and results, we use ICNN to denote the CellOT baseline (see lines 480-483). We also clarify that the CellOT/ICNN model can generalize to new cells, but is not designed to generalize to unseen distributions. We have updated lines 348-350 to clarify this in the text. Further, we reinforce this through our empirical results, where, akin to FM, CellOT (ICNN) struggles to generalize across unseen distributions/populations relative to MFM. We wish to clarify that the novelty of MFM is 2-fold: (1) we can train a generative model to condition on entire distributions through the use of a population embedding model which learns embeddings of entire distributions/populations, and (2) MFM can take into account additional interactions of cells/particles. This differs from CellOT, which does not condition on entire distributions, does not learn embeddings for entire populations, and does not take into account interactions of cells. We also refer to the paragraph starting on line 57 where we discuss the difference between MFM and existing methods (including CellOT). 
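The distinction drawn here between identity-based conditioning and conditioning on the population itself can be made concrete with a toy sketch (our own illustrative code; the mean/std feature is a hypothetical stand-in for the learned GNN population embedding, not the paper's implementation):

```python
import numpy as np

def one_hot_condition(pop_id: int, n_train_populations: int) -> np.ndarray:
    """Fixed per-population code (CGFM-style): it is undefined for any
    population that was not assigned an id during training."""
    c = np.zeros(n_train_populations)
    c[pop_id] = 1.0
    return c

def sample_based_condition(population: np.ndarray) -> np.ndarray:
    """Condition computed from the samples themselves (toy stand-in for a
    learned population embedding): defined for seen and unseen populations."""
    return np.concatenate([population.mean(axis=0), population.std(axis=0)])

# A population from a never-before-seen patient still yields a condition:
unseen = np.random.default_rng(0).normal(loc=2.0, size=(128, 3))
print(sample_based_condition(unseen).shape)  # (6,)
```

The one-hot route can only memorize training populations, while the sample-based route is what permits amortizing over, and generalizing to, unseen source distributions.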
On lines 63-66, we state one of the limitations of these existing methods in that they are restricted to operating on a single measure.\\n\\n>Since ICNN is conceptually the closest competitor to MFM, why did the authors not benchmark their method on the dataset considered in that manuscript? The chosen dataset in Section 5.2 seems well suited for the task, but it leaves me wondering why they swapped out datasets. Did their method not give them the desired results on previously considered datasets?\\n\\nWe thank the reviewer for bringing up this important related work. The CellOT datasets are not suitable for the MFM task/objective as there is no opportunity to generalize between initial distributions. The datasets in CellOT have a single control (initial) distribution with multiple treatment conditions. In contrast, the organoid drug-screen dataset considered in our work contains many distribution/population pairs $(p_0, p_1)$ -- specifically, after pre-processing, we have 927 control and treated distribution pairs $(p_0, p_1)$. The problem addressed by MFM is analogous to distributional regression, where each data point is an entire distribution, with the task to generalize across unseen distributions. This problem is ill-posed if there is not a sufficient quantity of pairs $(p_0, p_1)$, hence rendering the datasets from CellOT unsuitable.\\n\\n>I suggest reporting $\\\\mathcal{W}_1$, $\\\\mathcal{W}_2$, and MMD for the following very simple uninformed baselines to account for trivial underlying biological phenomena:\\n - The base distribution itself, i.e., distance of the unperturbed patient cells to the perturbed cells without applying any model.\\n - The base distribution shifted by a constant vector that is the mean over all perturbed cells (e.g., those in the train dataset)\\n\\nThis is a great suggestion. We have added the two additional baselines suggested by the reviewer. 
Namely, we denote $d(p_0, p_1)$ for comparison of the unperturbed source distribution $p_0$ with the base target distribution $p_1$. We use $d(p_0 - \\\\mu_0 + \\\\tilde{\\\\mu}_1, p_1)$ to denote a comparison between the base target distribution with the source distribution shifted by the difference in means between the perturbed cells $\\\\tilde{\\\\mu_1}$ and the unperturbed cells $\\\\mu_0$. We compute $\\\\tilde{\\\\mu_1}$ by taking the average of all $p_1$ population means in the training set. In Table 2 and Table 3, we see that all models perform better than these simple baselines.\\n\\n>The authors often claim that one method, in particular FM, \\\"fails to fit the train data\\\" (e.g., lines 472-473). This is a strong statement given that FM often surprisingly scores second-best out of the baseline methods.\\n\\nWe thank the reviewer for pointing this. We agree this is perhaps too strong of a statement and have removed this statement from the text.\"}", "{\"comment\": \"Thank you for your constructive feedback!\\n\\n>If the text is correct, then I think the axis labels should say PCA instead of UMAP.\\n\\nThank you for pointing this out, this is correct, the axis labels should read PCA instead of UMAP. We have fixed this typo in the manuscript.\\n\\n>if it's a matrix you calculate in eq. (39), you would want to add a transpose somewhere. I also believe that the notation $X_i$ is overloaded, once referring to an observation and then to a gene.\\n\\nThank you for identifying this. We have adjusted this in the text.\\n\\nWe will keep improving the presentation for the next version of the manuscript. We thank the reviewer for their meaningful suggestions that have helped improve the quality of the paper and are glad to see that our rebuttal and updates have improved the reviewer's evaluation of our work. 
We are happy to answer any new questions that may arise.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very grateful for your time, effort, and constructive comments. As the end of the rebuttal period is quickly approaching, we would like to have the opportunity to answer any remaining questions or clarify any points. We would like to note that we have followed your suggestion to conduct a detailed analysis of the population embeddings, in both the synthetic letters and biological experiments, strengthening our empirical findings. We also tried to clarify the differences between MFM and existing methods, such as \\\"Learning single-cell perturbation responses using neural optimal transport\\\" and \\\"Deep Generalized Schr\\u00f6dinger Bridge\\\", and how MFM differs from the methods and problem settings considered in these works.\\n\\nWe would be happy to continue to engage on these points or any other additional points that may arise. We again thank the reviewer for their constructive review of our paper. If the reviewer finds that our rebuttal addresses their questions and concerns, we would be grateful if the reviewer would potentially consider a fresh assessment of our work and possibly consider increasing their score.\"}", "{\"title\": \"Comment by reviewer\", \"comment\": \"Thanks for taking the time to answer my questions.\\n\\nI usually try to build Figure 1 without any formulas to keep it easily understandable and ensure the reader is not wondering what $h,i,\\\\phi, ...$ actually mean, but I guess everyone has different preferences on that.\\n\\n> In Eq.17, shouldn't the condition be part of or as well?\\n\\nThanks for clarifying this. I had the code line corresponding to l.281 in mind when I was thinking about Eq.17. But I agree it is fine to introduce this step by step and not everything at once.\\n\\nI was also thinking a bit more about when and when not to use MFM. 
I guess extrapolation is also quite tricky since there isn't a strong bias (the network $v$ will probably show an arbitrary behaviour outside the training domain).\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very thankful for your time and insightful suggestions. As the end of the rebuttal period is fast approaching we would like to have the opportunity to answer any remaining questions. We would like to note that in our rebuttal we followed your great suggestions and added clarifications in the text to improve clarity and reduce confusion. To note, we added Appendix A, which includes definitions of the concepts we use in this work. We also clarified the presentation of our results and our depiction of the Wasserstein manifold in the rebuttal.\\n\\nWe would be happy to engage in any further discussion on these points or any additional points that the reviewer may find important, please let us know! We thank the reviewer again for their time and if the reviewer finds our rebuttal and additions to the manuscript satisfactory, we would also appreciate it if the reviewer could potentially consider revising their assessment of our paper and improving their score.\"}", "{\"summary\": \"The paper introduces an extension to Flow Matching (FM) called Meta Flow Matching (MFM) that efficiently accommodates generalizing to varying base distributions by featurizing the base distribution with a nearest-neighbor graph neural network (GNN).\\n\\nMore specifically, FM is a recent popular framework for generative modeling that works by first matching up a target distribution with a base distribution (e.g., a standard normal distribution, coupled to the target distribution via an independent coupling) and then training a vector field to match the resulting particle vector fields via minimizing the Mean Squared Error (MSE). 
A natural extension of this is Conditional Flow Matching (CFM) where the vector fields can be conditioned on some covariate, such as a generative prompt for image generation. The authors argue that there are compelling reasons to extend the problem setup to include varying base distributions. First, naturally occurring dynamics such as interacting particles or diffusion equations have dynamics that are explicitly distribution dependent and thus their particle versions cannot be modeled by vector fields that only depend on the particle location alone. Second, problems in molecular biology naturally lend themselves to considering different base distributions, such as predicting the drug response for different baseline cell distributions per patient.\\n\\nTherefore, the authors suggest expanding the FM formulation with a distributional embedding that captures the base distribution and that is used as input to the vector field neural network. To obtain the embedding, they first calculate a k-nearest-neighbor graph on the base distribution and then embed it with a GNN, leading to the proposed MFM formulation. They show that MFM can be trained in the same manner as FM and show that it outperforms baseline architectures in experimental results on synthetic data (denoising rotated letter profiles) and on real data (mass-cytometry cell profiles on patient-derived cultures under drug treatments).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The manuscript is overall well-written and organized.\", \"Flow matching is a popular generative modeling framework and is of significant interest to the ML community. I like the authors' motivation of the distribution dependence through the continuity equation and interacting particle systems. 
Their idea to solve this problem is simple and elegant and deserves exploration.\"], \"weaknesses\": [\"**Empirical results are not fully convincing and need strengthening:** The presentation of the empirical results leaves me slightly puzzled, as explained in the following.\", \"First of all, the authors consider [ICNN/CellOT](https://www.nature.com/articles/s41592-023-01969-x) as a baseline that can accommodate varying baseline distributions and conditioning. I think the introduction to ICNN presented in Section 4 is slightly misleading: in lines 348-350, it is sometimes not clear what \\\"the method\\\" refers to. ICNN *can* generalize to new distributions, but the novelty in MFM is to take additional interactions into account.\", \"Since ICNN is conceptually the closest competitor to MFM, why did the authors not benchmark their method on the [dataset considered in that manuscript](https://onlinelibrary.wiley.com/doi/epdf/10.1111/exd.12683)? The chosen dataset in Section 5.2 seems well suited for the task, but it leaves me wondering why they swapped out datasets. Did their method not give them the desired results on previously considered datasets?\", \"The authors often claim that one method, in particular FM, \\\"fails to fit the train data\\\" (e.g., lines 472-473). This is a strong statement given that FM often surprisingly scores second-best out of the baseline methods. This makes me wonder about the underlying dataset and the employed metrics. 
At a minimum, I suggest reporting $\\\\mathcal{W}_1$, $\\\\mathcal{W}_2$, and MMD for the following very simple uninformed baselines to account for trivial underlying biological phenomena:\", \"The base distribution itself, i.e., distance of the unperturbed patient cells to the perturbed cells without applying any model.\", \"The base distribution shifted by a constant vector that is the mean over all perturbed cells (e.g., those in the train dataset)\", \"Further, are there ways to better understand the overall quality of the predictions? Are there downstream conclusions of the dataset paper that can or cannot be reproduced by the various predictive algorithms? Can the authors plot some of the predicted distributions just as they did in Figure 3?\", \"The generalizability of the results might be compromised by the authors' picking specific patients for the patient holdout setup (see Section C.2). Can the experiment be performed over several splits?\", \"In the same vein, I do not understand how the authors arrive at the error estimates for their experimental results if only one split is considered. I am afraid these estimates could be vastly underestimating the actual uncertainty. This is important since MFM sometimes only shows a small edge compared to FM.\", \"**Independent matching may be suboptimal for all considered methods:** [Tong et al (2023)](https://arxiv.org/abs/2302.00482) propose to train flow matching by coupling base and target distribution via Optimal Transport instead of an independent coupling. This is a simple tweak that could be applied to all considered FM methods, i.e., FM, CFM, and MFM. I am curious if this would improve results across the board or specifically help FM and CFM because those methods currently have no way of accounting for different source distributions. Potentially, this could be a much simpler fix than MFM.\", \"**(minor)** The text contains quite a few typos. 
I suggest the authors do another round of proof-reading for the final version.\"], \"questions\": [\"I am recapitulating some of the problems outlined in \\\"Weaknesses\\\" above as questions to the authors here, as well as some other things that were not clear to me:\", \"How are uncertainties on your results calculated? If they are not derived from independent splits/seeds, I would suggest you do that.\", \"Why did you not benchmark MFM on the [dataset considered in the CellOT manuscript](https://onlinelibrary.wiley.com/doi/epdf/10.1111/exd.12683)?\", \"How is CFM set up on the patient data set? I assume you condition on the drug, while ignoring the base distribution/patient identity. Is this correct, or are you also feeding the patient identity as condition? If the latter, I would recommend the former.\", \"Can you add the following uninformed baselines to the experiments in Sections 5.1 and 5.2?\", \"The base distribution itself, i.e., distance of the unperturbed patient cells to the perturbed cells without applying any model.\", \"The base distribution shifted by a constant vector that is the mean over all perturbed cells (e.g., those in the train dataset)\", \"Are there ways to better understand the overall quality of the predictions? Are there downstream conclusions of the paper that introduced the dataset that can or cannot be reproduced by the various predictive algorithms? Could you plot some of the predicted distributions just as they did in Figure 3?\", \"I am surprised by the dynamic range of the Wasserstein distances vs MMD. For example, in Table 2, ICNN on the patient holdout achieves MMD of 74.00 and $\\\\mathcal{W}_2$ of 4.681 vs MFM (k=100) with 8.96 and 4.269, respectively. Is there any intuition about this discrepancy in dynamic range?\", \"How is the $r^2$ calculated here? 
The model does not actually predict an output for a single target cell, but a distribution instead, so I am confused as to how this is done.\", \"Would you consider running experiments with Optimal Transport matching instead of independent coupling as considered in [Tong et al (2023)](https://arxiv.org/abs/2302.00482)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply (2/2)\", \"comment\": \">I still don't fully understand how $r^2$ is calculated. Would it be possible to add to the description the specific handling of the dimensionality involved? I assume one correlation is applied on genes and the other one on cells, but I'm not sure which one is which.\\n\\nTo make this clearer, in Appendix D.3 lines 884-900 we have further elaborated on how the $r^2$ metric is computed. We follow the implementation done by Bunne et al. 2023. In Appendix D.3 we have added formal definitions of the quantities being computed and the handling of the dimensionality. We are happy to further elaborate and answer any remaining questions regarding the computation of the $r^2$ metric.\\n\\n>I'm surprised to see the shifted mean baseline perform much more poorly than the simple control population baseline on the biological dataset. This suggests to me that 0 is a better mean estimate than the mean of the training samples. Do you have an idea of what's going on here?\\n\\nThis behaviour is expected from this datatype since it is a mixture of two cell types (cancer cells and fibroblasts) where the proportion between the two populations changes drastically. This is because one of the major treatment effects is to kill the cancer cells (reducing their proportion relative to the fibroblasts). 
This means that the shift $\\\\tilde \\\\mu_1 - \\\\mu_0$ is a transformation towards a fibroblast state (because $p_1$ has relatively more fibroblasts as the cancer cells have been killed by the treatment). This creates very weird predictions where it looks like the cancer cells are turning into fibroblast cells (which is not possible biologically). This makes the mean shift estimate quite poor here as it creates cells that are very unlikely to appear in the dataset. We can see this effect is particularly pronounced in the relative change of the MMD metric between the null shift and the mean shift baselines, which measure local differences in densities.\\n\\n>Did you calculate $\\\\mu_1$ based on all populations, or only those that share the same source population? For the replicate split, the latter would be more appropriate I think.\\n\\nTo further clarify this, we have added an additional baseline $d(p_0 - \\\\mu_0 + \\\\mu_1, p_1)$ that uses the *individual* population means $\\\\mu_1$ of treated populations $p_1$ for each coupled pair $(p_0, p_1)$. We report this in updated Tables 5, 6, 7. Here, $d(p_0 - \\\\mu_0 + \\\\mu_1, p_1)$ differs from $d(p_0 - \\\\mu_0 + \\\\tilde{\\\\mu}_1, p_1)$ as $\\\\tilde{\\\\mu}_1$ is estimated as the average mean (or mean of means) of the treated populations present in the training data. Specifically, we compute $\\\\mu_1$ for each individual treated population $p_1$ in the train data. Then, $\\\\tilde{\\\\mu}_1$ is estimated as $\\\\tilde{\\\\mu}_1 = 1/N \\\\sum_i^N \\\\mu^{(i)}_1$, where $N$ is the number of $p_1$ populations in the train data. The $d(p_0 - \\\\mu_0 + \\\\tilde{\\\\mu}_1, p_1)$ baseline shows that one cannot trivially estimate some constant shift $\\\\tilde{\\\\mu}_1$ to predict $p_1$. 
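To make the two uninformed baselines fully concrete, here is a minimal self-contained sketch (our own code, with an RBF-kernel MMD as one choice of distance; `mu1_tilde` is computed from a toy treated population purely for illustration, whereas in the paper it is the average of training-set means). On this unimodal toy example the mean shift helps, whereas the cell-type mixture effect discussed above can reverse the ordering on the organoid data:

```python
import numpy as np

def mmd_rbf(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Squared maximum mean discrepancy between two samples, RBF kernel."""
    def k(a, b):
        return np.exp(-gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
p0 = rng.normal(0.0, 1.0, size=(200, 2))   # toy untreated (control) population
p1 = rng.normal(3.0, 1.0, size=(200, 2))   # toy treated population
mu1_tilde = p1.mean(axis=0)                # illustration only (see note above)

null_baseline = p0                                # d(p0, p1)
mean_shift = p0 - p0.mean(axis=0) + mu1_tilde     # d(p0 - mu0 + mu1_tilde, p1)

# Here the shift removes the mean discrepancy, so the MMD to p1 drops:
print(mmd_rbf(null_baseline, p1) > mmd_rbf(mean_shift, p1))  # True
```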
In regards to $d(p_0 - \\\\mu_0 + \\\\mu_1, p_1)$ (which uses the true $\\\\mu_1$'s for each $(p_0, p_1)$ pair), we observe that indeed this baseline does perform better than the trivial baseline $d(p_0, p_1)$, as one might expect. Note, we report $d(p_0 - \\\\mu_0 + \\\\mu_1, p_1)$ only on the train data, since in practice you do not have access to $p_1$ at test-time, so this is not a practical baseline to consider in the test condition. \\n\\nWe once again thank the reviewer for their valuable comments and questions. We hope this discussion addresses the reviewer's remaining points. We are more than happy to keep clarifying and addressing any salient points that may remain. We believe that through this discussion, we have improved the overall quality of our work and strengthened our empirical findings. We kindly ask that if the reviewer views our responses and additions to the manuscript as satisfactory, to consider increasing their rating of our paper.\"}", "{\"summary\": [\"The paper proposes a new diagram of conditional flow matching models named meta flow matching.\", \"The high-level understanding is: meta flow matching represents the conditional input in a new way -- by embedding the information of the whole population (or saying distributions).\", \"In math, this can be formulated as constructing the vector field in the Wasserstein manifold.\", \"The authors then provide the technical construction of the model, including objective functions and parametrization.\", \"Numerical experiments are conducted on synthetic and organoid drug-screen datasets.\", \"I am happy to see this conceptual novelty in the approach.\", \"I didn't thoroughly check the technical part in the paper while I didn't see anything odd.\", \"I am a little concerned on certain claims and the effectiveness of the method, both conceptually and numerically.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"I am listing both strengths (+), weakness 
(-) and questions (?) in the following to make the review easier to follow.\", \"(+) The meta flow matching diagram is clearly novel. The specific new part is the population/distribution embedding, so that the model can distinguish and evolve differently in different populations, for the same particle.\", \"(?) Question on ODE: Then what is the particle-level ODE corresponding to the PDE (12)? Is it similar to the Fokker-Planck equation, i.e., $dx/dt = v_t(x, p_t)$? Can the authors prove it or give a detailed reference?\", \"(?) Can your method be extended to the SDE case?\", \"(?) Analysis of population embeddings: To learn the population embeddings, I believe you should have enough examples of populations; otherwise, generalization is less assured. I would like to see a detailed analysis of population embeddings for a sanity check in both experiments.\", \"(-) Reduced number of data points: Following the last point, for the organoid drug-screen experiment, which only contains 10 patients, does it mean there are only 10 populations (data points for patient population embeddings)? It is unexpected that the model can capture the right population embeddings from 10 data points.\", \"(?) Why is the replicate split in the organoid drug-screen experiment meaningful? In my understanding, the conclusions drawn from different replicates would usually be consistent (that they have similar patterns).\", \"(?) I also do not quite understand why other baselines under-perform in the replicate split since, if I understand correctly, generalization would not be a big challenge here.\", \"(?) Can you provide a random baseline for your experiments? A random-guess baseline and a randomly initialized model baseline.\", \"(-) An extremely important reference, \\\"Deep Generalized Schr\u00f6dinger Bridge\\\", is missing here. Conceptually this paper should be something ahead of yours and needs to be discussed in detail. Population embedding is also adopted in that paper in a simpler way.\", \"(?) 
To further illustrate the effectiveness, I think a comparison with the numbers and experiments in \\\"Deep Generalized Schr\u00f6dinger Bridge\\\" is necessary.\", \"(?) Comparison with the numbers and experiments in \\\"Learning single-cell perturbation responses using neural optimal transport\\\" is also necessary in my view. This is a classical paper and one of your competitor methods.\", \"(?) Complexity: Can you provide a complexity analysis here? Your method needs an additional embedding of the whole population, and I think the scalability needs to be shown somehow (w.r.t. the size of the population), which would be very useful to the community for method selection.\", \"(?) How do you construct the graph in the organoid drug-screen experiment? How do different graph constructions impact your model results?\"], \"weaknesses\": \"See Strengths.\", \"questions\": \"See Strengths.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The rebuttal addresses many of my questions. I am adjusting my rating 5 --> 6.\"}", "{\"comment\": \"> $d(p_0, p_1)$ and $d(p_0 - \\mu_0 + \\tilde \\mu_1, p_1)$ baselines\\n\\nThanks for adding these! I'm surprised to see the shifted mean baseline perform much more poorly than the simple control population baseline on the biological dataset. This suggests to me that 0 is a better mean estimate than the mean of the training samples. Do you have an idea of what's going on here? Did you calculate $\\mu_1$ based on all populations, or only those that share the same source population? For the replicate split, the latter would be more appropriate I think.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very grateful for your time, constructive comments, and insightful questions. As the end of the rebuttal period is fast approaching, we would like to have the opportunity to answer any remaining questions or concerns. 
We would like to note that in our rebuttal we followed your great suggestions and included several new experiments to strengthen our empirical results. We also tried to highlight, in both our global response and the rebuttal response, the differences between the approach and problem setting in MFM compared to the method and problem setting considered in Bunne et al. 2023.\\n\\nWe would be happy to engage in any further discussion on these points or answer any additional questions that the reviewer finds important. We thank the reviewer again for their time and effort. If the reviewer finds our rebuttal and new experimental findings satisfactory, we would also appreciate it if the reviewer could consider a fresh evaluation of our paper, and we kindly ask the reviewer to possibly consider improving their score.\"}", "{\"summary\": \"This paper introduces Meta Flow Matching (MFM), a novel approach for modeling the evolution of systems consisting of interacting samples/populations. 
Unlike previous flow-based models, MFM can generalize to unseen initial populations by using a Graph Neural Network to embed the population and amortizing the flow matching model over these embeddings. The authors demonstrate MFM's effectiveness on an interesting synthetic letter denoising task and a large-scale single-cell organoid screening dataset, showing its ability to predict patient-specific responses to treatments.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"Novel approach: The paper proposes a new method for integrating vector fields using the full probability densities - allowing the modeling of the evolution of entire distributions of samples (which is important for many biological and physical processes). I also believe this amortization effect is especially helpful in small-sample regimes.\", \"Theoretical foundation: The paper provides a solid theoretical basis for the proposed method, including the connections to existing approaches like conditional generative flow matching.\", \"Readability: The paper is *very* well written and easy to read and understand (although it's covering a complex topic)\"], \"weaknesses\": [\"Honestly, no weaknesses come to my eye. I've been working on Flows for quite some time now, and have to say this is a good and well-executed idea.\", \"Figure 1 (c) is not that helpful in my opinion.\"], \"typos\": [\"\\\"a a standard\\\" (l420)\", \"\\\"of of model\\\" (l464)\"], \"questions\": [\"In Eq.17, shouldn't the condition `c` be part of `\\\\phi` or `v_t` as well?\", \"I am wondering in what cases I should *not* use MFM to model the change of a distribution using a flow? I can see that in Exp. 
5.1 and 5.2, the samples have strong dependencies (e.g., to create the silhouettes), but in case there are no dependencies, would MFM also model this well?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"(2/3)\", \"comment\": \">For the organoid drug-screen experiment, which only contains 10 patients, does it mean there are only 10 populations (data points for patient population embeddings)? It is beyond my expectation that the model can capture the right population embeddings from 10 data points.\\n\\nFor the organoid drug-screen dataset, there are 10 patients with many replicated conditions/experiments, hence leading to a large quantity of $(p_0, p_1)$ population pairs to use for training and evaluation. In total, we have 927 $(p_0, p_1)$ population pairs, which we divide into training and testing splits. For the patient split, we leave out all $(p_0, p_1)$ pairs fully from 1 patient to evaluate how methods perform when predicting population dynamics in an unseen patient. \\n\\n>Why is the replicate split in the organoid drug-screen experiment meaningful? In my understanding, the conclusions drawn from different replicates would usually be consistent (in that they have similar patterns).\\n\\nThe generalization task in the replicate split is indeed easier since, as the reviewer reasonably pointed out, there should arguably be less diversity between populations in the train and test sets. However, there is still sufficient diversity between populations due to the inherent biological heterogeneity that exists across cell populations. We can thus use this inherent biological heterogeneity across cells (which exists in the replica split) to pose a generalization problem across unseen populations and learn meaningful embeddings. \\n\\n>Can you provide a random baseline for your experiments? Random guess baseline and random initialized model baseline.\\n\\nThank you for the suggestion. 
We have included two additional baselines. We use $d(p_0, p_1)$ to denote comparison of the unperturbed source distribution $p_0$ with the base target distribution $p_1$. We also consider $d(p_0 - \\\\mu_0 + \\\\tilde{\\\\mu}_1, p_1)$ to denote the comparison of the base target distribution with the source distribution shifted by a constant vector $\\\\tilde{\\\\mu}_1$. We compute $\\\\tilde{\\\\mu}_1$ by taking the average of all $p_1$ population means in the training set. In Table 2 and Table 3, we see that all models perform better than these trivial baselines, with MFM consistently outperforming them. This further supports the fact that MFM is able to generalize across initial distributions relative to the FM, CGFM, and ICNN baselines.\\n\\n>I also do not quite understand why other baselines under-perform in the replicate-split since if I understand correctly, generalization would not be a big challenge here.\\n\\nReiterating our answer to the reviewer's previous question regarding the replica split and generalization: we agree that generalization is easier in the replica split relative to the patient split. However, it is still not trivial to generalize across populations in the left-out replicas. This split tests the model's ability to generalize across the underlying biological heterogeneity of different cell populations. For reference, please refer to the updated Table 2, where we include two additional baselines on the replica split. Through these results, we observe that there exists sufficient biological diversity between experimental conditions such that models learn something meaningful, and in particular that MFM can better generalize relative to the baselines.\\n\\n>An extremely important reference, \\\"Deep Generalized Schr\\u00f6dinger Bridge\\\", is missed here. Conceptually this paper should be something ahead of yours and needs to be discussed in detail. Population embedding is also adopted in that paper in a simpler way. 
To further illustrate the effectiveness, I think comparison with the numbers and experiments in \\\"Deep Generalized Schr\\u00f6dinger Bridge\\\" is necessary.\\n\\nWe thank the reviewer for bringing this important reference to our attention. The work \\\"Deep Generalized Schr\\u00f6dinger Bridge\\\" (DeepGSB) also considers interacting terms between particles, specifically entropy and congestion, but does not consider embeddings of populations. Furthermore, the focus of our work is generalization over initial distributions, which DeepGSB does not consider. For this reason, the experiments in DeepGSB are outside the scope of our work. We add a discussion of DeepGSB in our related work section.\"}", "{\"comment\": \"Thank you for taking the time to read our rebuttal and for your valuable comments.\\n\\nRegarding Figure 1, we have updated the figure to provide a better and clearer overview of MFM and how it differs from FM. We would be happy to incorporate any additional suggestions that the reviewer may have.\\n\\n>I was also thinking a bit more about when and when not to use MFM. I guess extrapolation is also quite tricky since there isn't a strong bias (the network $v$ will probably show an arbitrary behaviour outside the training domain).\\n\\nWe leave the out-of-distribution (OOD) generalization abilities of MFM for future studies. Namely, in the current paper, we show that MFM can learn representations that generalize within the training distribution, while OOD generalization should be approached in the broader context of representation learning, e.g. 
representation learning for single-cell data or, more generally, representation learning of distributions.\"}", "{\"metareview\": \"The paper proposes a 'meta flow matching approach' for modelling a family of distributions, conditioned on the population index i; in contrast to previous approaches (conditional generative flow matching), which embed the index i, the meta-flow matching approach embeds the population (i.e. the set of points) into the conditioning vector for a vector field. This is very neat and mimics, for example, mean field approaches. The paper is well-written. The applicability is shown for small-dimensional distributions, but overall, this is a good work. Some of the concerns are the actual graph neural network design for particular applications and correct comparison to the baselines, but it also has been A.\", \"additional_comments_on_reviewer_discussion\": \"Most of the reviewers agree this is a good work, except for reviewer Kj6P who found the work confusing, but some of his concerns were addressed by the appendix A.\"}", "{\"title\": \"Reply (1/2)\", \"comment\": \"Thank you for your time and effort in giving constructive and meaningful feedback for our work!\\n\\n>could you clarify whether there are actually multiple replicates for the same patient-drug pair, or do you use replicate interchangeably with drug treatments (per patient) here? \\n\\nThis is an excellent question. For each patient, **there are multiple replicates** for the same patient-drug pair -- i.e. multiple $(p_0, p_1)$ pairs. There are approximately three technical replicates for every experimental setting. There is slight variation due to experimental variability (particularly of antibody effectiveness leading to reduced yield in some experiments).\\n\\n>I'm getting a bit confused looking at the new Figure 7 on why there are multiple dots per patient. Are you also plotting the treated populations, or multiple replicates for the control population? 
\\n\\nIn Figure 7, we are only plotting the population embeddings for the replicated control populations $p_0$. Following from our clarification in the previous point, what you observe in Figure 7 as the multiple dots per patient is due to the numerous control populations for each patient. This is the central artifact of this dataset that lets us approach the problem of learning embeddings of entire populations and generalizing to unseen $(p_0, p_1)$ (given unseen $p_0$ predict treatment response $\\\\hat{p}_1$).\\n\\nWe note that this somewhat complicated setting of controls was done in the previous paper to control for batch effects. Specifically, each treatment has a matched control on the same 96-well plate to match the experimental conditions as closely as possible. Due to the size of the dataset, roughly 4 treatments can fit on a single plate. These will all use the same matched control. However, since the drug screen is so large, experiments have to be done on multiple plates. This means that there are a number of separate controls for each patient depending on the exact setup of plates and treatments, and reruns of failed data capture. We hope this clarifies the reasoning behind the control setup of the prior work. \\n\\n>If there are multiple ones for the control, how do you handle that in the prediction setup?\\n\\nDuring prediction, we observe a new (unseen) control population $p_0$, and we ask the question of how well we can predict $p_1$ (for treatments seen during training). Specifically, MFM learns to represent entire populations $p_0$. 
At test-time, the vector field model predicts conditional population dynamics, conditioned on population embeddings $\\\\varphi(p_0; \\\\theta)$ for an *unseen* test control population $p_0$ and additionally conditioned on a known treatment, to recover $p_1$.\\n\\nGiven that population pairs $(p_0, p_1)$ are coupled, we are able to validate and evaluate how well we predict $p_1$ given said treatment for an *unseen* $p_0$. Regarding Figure 7, we are not plotting embeddings of the predicted and treated $p_1$ populations, we just plot the learned population embeddings $p_0$ across the entire dataset to demonstrate our model can recover known biological artifacts which are also found by (Zapatero et al. 2023). \\n\\n>Is there a similar way to understand the quality of the predictions? I.e., are there simple binary conclusions one would draw from the data (i.e., a specific subset of drugs is effective for a specific subset of patients) that you would be able to answer from the predictions?\\n\\nYes, we similarly provide embedding plots of the ground truth target and predicted populations separately in Figure 8. We can see here that for all three test patients, the general structure is preserved, with the treatment, Oxaliplatin (Green), being the furthest away both for the target and ground truth datasets. This is because (as shown in the original paper by Zapatero et al. 2023) Oxaliplatin has a large effect on these cancer cells for this subset of patient-derived organoids (PDOs), as we can see from the plots this is more pronounced for PDO 21 and 27 than for PDO 75. This reflects the conclusions drawn from Figure 4 in the original dataset paper (Zapatero et al. 2023). In this way, we are able to draw similar conclusions from the predicted distributions as from the data. 
We add additional details on this experiment in Appendix F.2.\"}", "{\"comment\": \"In the remainder of this general response we address two points: (1) an overview of the new experiments we ran to address shared questions raised in the reviews; (2) an overview of changes to the manuscript to answer clarifying questions. Please refer to the updated manuscript for all changes mentioned in the general response and responses to individual reviewers. *Changes are presented as blue text in the updated manuscript*.\\n\\n1. **Experiments:** \\n - Reviewer Y1RT raised questions regarding the empirical experiments and provided meaningful suggestions to strengthen our results. We incorporated all suggestions from reviewer Y1RT, in turn improving the overall quality of our empirical results. Firstly, we added patient split experiments over 3 independent patient splits (Table 3 in the updated manuscript) to show MFM that is robust across different settings with differing left-out patient populations. Secondly, we added experiments with optimal transport (OT) couplings between samples of source and target distributions for a replica split (updated Table 2). This addition helped improve the overall performance of MFM while also increasing the strength of comparison to stronger baselines, i.e. FM-OT, CGFM-OT. Lastly, for experiments on the organoid drug-screen dataset, we added trivial baselines to demonstrate that the models learn meaningful population dynamics beyond trivial biological phenomena. \\n - Reviewer kKjh asked about analyzing the population embeddings to verify that our population embedding model learns valuable representations. We added a section to the appendix to confirm this (Appendix F.1, Figure 7). Through this, we also addressed reviewer Y1RT's question regarding using our method to draw meaningful insights in the biological dataset.\\n2. 
**Clarifications:** Reviewers had clarifying questions, which we addressed through the individual responses and, in some cases, by adding clarification to the manuscript. Below we outline and clarify some shared questions asked by reviewers. \\n - Reviewers Y1RT and kKjh asked clarifying questions regarding comparisons with Bunne et al. 2023 [1]. We note that in our work we indeed do compare with the method in [1]. In the individual responses, we clarified how MFM differs from the method in [1] and how the datasets in [1] are not suitable for the setting which MFM is addressing. We made brief changes in the text to further clarify these points. \\n - Reviewers Kj6P and jcAc asked clarifying questions pertaining to the theoretical ideas presented in our work. Kj6P asked about our use of the Wasserstein manifold and the respective theory surrounding it. To help with understanding and improve clarity, we provided a clarifying explanation in the individual response, briefly clarified some notation in section 2.3, and added formal definitions in Appendix A of the concepts we are using in the paper. We note that our depiction of the Wasserstein manifold in Figure 2 is only an illustration. We use this figure to illustrate the notion that MFM learns to integrate vector fields on $\\\\mathcal{P}_2(X)$, and because of this, can generalize to unseen populations.\\n \\nWe once again thank all the reviewers for their valuable time and insightful feedback. We believe that through the meaningful feedback and suggestions raised by the reviewers, we have improved the clarity, impact, and significance of our work. Through this, we believe that we have addressed all the concerns and questions posed by the reviewers. If the reviewers agree, we hope that they will consider increasing their scores.\\n\\n[1] Bunne, Charlotte, et al. 
\\\"Learning single-cell perturbation responses using neural optimal transport.\\\" Nature methods 20.11 (2023): 1759-1768.\"}", "{\"comment\": \"We thank the reviewer for their time and positive appraisal of our work. We are thrilled that the reviewer viewed our work to be \\\"very well written and easy to understand\\\" and \\\"provides a solid theoretical basis for the proposed method\\\". We now provide responses to the main questions raised by the reviewer.\\n\\n> Figure 1.c is not that helpful in my opinion.\\n\\nWe are happy to clarify and amend the visual illustrations in our manuscript. In regards to Figure 1, we have updated the caption to try and clarify the advantages of MFM and how it differs from current approaches. We are open to suggestions on how to improve the visual depiction in this figure to further reinforce the ideas in our work. In regards to Figure 2-c, in the general response, we provide a brief clarification regarding our illustration of the Wasserstein manifold and how it is considered in MFM. We are happy to incorporate any suggestions the reviewer may have to further improve and simplify visual depictions of our framework and method.\\n\\n> In Eq.17, shouldn't the condition $c$ be part of $\\\\phi$ or $v_t$ as well?\\n\\nWe thank the reviewer for pointing to this important detail. In Theorem 1, $c$ corresponds to the *ideal* condition on the population that we\\u2019re trying to learn. Eq. 17 defines an objective function used to optimize a $v(\\\\cdot; \\\\varphi(p_0; \\\\theta) \\\\omega)$, where $\\\\varphi(p_0; \\\\theta)$ is a population embedding model that learns to represent the entire initial population, approximating the *ideal* condition. Hence, Eq. 17 defines, in a sense, an *unconditional* MFM objective in so far that it does not contain any known conditions (such as treatments in our biological setting). From here it is trivial to extend Eq. 17 to the *conditional* setting with any amount of known conditions. 
For instance, we show this in Algorithm 1 and Algorithm 2, where we use $c^i$ to denote a known treatment condition for population $i$. The condition $c^i$ and the embedding of the population play different roles, e.g. the condition $c^i$ carries the information about the treatment, while the embedding carries the information about the patient through the population of cells. \\n\\n> I am wondering in what cases I should not use MFM to model the change of a distribution using a flow?\\n\\nExcellent question. MFM relies on there existing a learnable relationship between the source and target distributions. If there is no dependence between the initial and target distributions, we would not expect MFM to work well. For instance, we do not expect MFM to be helpful for the classical setting of generative modelling because there is no dependence between a given empirical distribution (e.g. natural images) and the Gaussian prior. \\n\\nWe thank the reviewer again for their valuable feedback and great questions. We have implemented the reviewer's suggestions and hope that our rebuttal addresses their questions. We are also more than happy to answer any further questions that arise.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you for your response!\", \"comment\": \"Thank you for addressing many of the points that I raised! I still have a few comments:\\n\\nRegarding the organoid dataset, could you clarify whether there are actually multiple replicates for the same patient-drug pair, or do you use replicate interchangeably with drug treatments (per patient) here? I'm getting a bit confused looking at the new Figure 7 on why there are multiple dots per patient. Are you also plotting the treated populations, or multiple replicates for the control population? 
If there are multiple ones for the control, how do you handle that in the prediction setup?\\n\\n\\n> Ask for downstream conclusions, addition of Figure 7\\n\\nThanks for adding this! I appreciate that this helps to understand the quality of the embedding. But is there a similar way to understand the quality of the predictions? I.e., are there simple binary conclusions one would draw from the data (i.e., a specific subset of drugs is effective for a specific subset of patients) that you would be able to answer from the predictions?\\n\\n\\n> $r^2$ explanation\\n\\nFrankly, I still don't fully understand how it is calculated. Would it be possible to add to the description the specific handling of the dimensionality involved? I assume one correlation is applied on genes and the other one on cells, but I'm not sure which one is which.\"}", "{\"comment\": \"We are excited to see that our work was so positively received and that the reviewer strongly believes that our paper provides an important contribution to learning dynamics! We again thank the reviewer for their positive appraisal of our work and for their constructive and insightful feedback on our manuscript.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe thank you again for your time and effort in providing a constructive and insightful review of our work. As the end of the discussion period is fast approaching, we hope we have addressed all your questions and concerns. If you have any more comments, we would be happy to engage in further discussion.\"}", "{\"comment\": \"Dear reviewer,\\n\\nWe are very appreciative of your time and positive comments. As the end of the rebuttal period is quickly approaching, we would be happy to answer any remaining or additional questions, please let us know! Again, we thank the reviewer for their time and effort in reviewing our paper.\"}", "{\"title\": \"(3/3)\", \"comment\": \">I am surprised by the dynamic range of the Wasserstein distances vs MMD. 
For example, in Table 2, ICNN on the patient holdout achieves MMD of 74.00 and $\\\\mathcal{W}_2$ of 4.681 vs MFM (k=100) with 8.96 and 4.269, respectively. Is there any intuition about this discrepancy in dynamic range?\\n\\nWe thank the reviewer for pointing this out. We note that MMD measures local discrepancies while the Wasserstein distance measures global distance variations, and that we report MMD measures times $10^{-3}$ (as specified in the table header). MMD is also more sensitive to small differences in density. It is therefore possible for the MMD to be large while the Wasserstein distance is only mildly affected when there are local density variations. It should be noted that while we used unweighted Euclidean distance as the cost for calculating the Wasserstein distance, for our MMD metric we followed past literature, setting the bandwidth hyperparameter $\\\\sigma$ used in MMD to cover different scales. To clarify this, we have added Appendix D.3, where we describe and explicitly define these metrics. \\n\\n>How is the $r^2$ calculated here? The model does not actually predict an output for a single target cell, but a distribution instead, so I am confused as to how this is done.\\n\\nThe $r^2$ is calculated as the correlation of the correlations of gene expression within the sample distribution, between the ground truth and the generated sample. For a full mathematical description see Appendix D.3, where we have added a further description of the metrics. It reflects whether genes that are expressed at similar levels across cells in the ground truth are also expressed at similar levels in the generated samples. This is a somewhat strange evaluation for a distribution, as it calculates the correlation of correlations, but it is a typical evaluation of single-cell prediction methods (Bunne et al. 2023) as it reflects downstream tasks such as gene set enrichment analysis. \\n\\n>(minor) The text contains quite a few typos. 
I suggest the authors do another round of proof-reading for the final version.\\n\\nWe thank the reviewer for alluding to this. We have gone through the text and fixed typos that were originally present in the manuscript.\\n\\nWe again thank the reviewer for their very valuable feedback, insightful questions, and time spent in reviewing our work. We believe that through incorporating all the meaningful suggestions brought up by the reviewer, we have improved the overall quality of our work and strengthened our empirical results. We hope that our rebuttal fully addresses the reviewer's concerns, and we kindly ask the reviewer to consider increasing their score if they are satisfied with our response. We are happy to answer any further questions that may arise.\"}", "{\"title\": \"(3/3)\", \"comment\": \">Comparison with the numbers and experiments in \\\"Learning single-cell perturbation responses using neural optimal transport\\\" is also necessary in my eyes. This is a classical paper and one of your competitor methods.\\n\\nThank you for this point. We would like to clarify that we in fact do compare with the method in \\\"Learning single-cell perturbation responses using neural optimal transport\\\" (CellOT, which we denote as the ICNN). Moreover, we point out that the experiments in CellOT consider a *single* control distribution. For this reason, the experiments in that work are not applicable to our setting, which considers generalization over *multiple* control distributions. We compare MFM to the CellOT model in Table 2 and Table 3 (updated), where it is referred to by its underlying architecture, an input convex neural network (ICNN). We have clarified this in the text.\\n\\n>Can you provide a complexity analysis here? Your method needs an additional embedding of the whole population and I think the scalability performance needs to be shown somehow (w.r.t. 
the size of population) which is very useful to the community for method selection.\\n\\nWe thank the reviewer for asking these important questions. Complexity depends on the model architecture. For MFM, we use a graph convolutional network (GCN) with $k$-nearest neighbors edge pooling to construct particle interaction graphs (lines 369-371). Here, as $k$ increases, training time and memory usage also increase. There are other approaches to do this, but the focus of this work is in the general idea, theory, and execution of learning population/distribution embeddings and generalizing across unseen populations. We leave exploring improvements in this regard for future work. \\n\\n>How do you construct the graph in the organoid drug-screen experiment? How different graph construction impact your model results?\\n\\nWe do not explicitly construct a particle interaction graph. Rather, we let the population embedding model handle interactions for us -- i.e., using knn edge pooling layers in GCN model. Explicit modeling of the particle interactions is in general a very challenging problem. We leave an investigation of explicit modeling of the particle interactions for future work. \\n\\nWe once again appreciate your time and effort in this rebuttal period. We believe we have addressed the concerns brought up by the reviewer, and through the reviewer's insightful suggestions, we have improved the overall quality of our work. If the reviewer deems our responses detailed enough and satisfactory we encourage the reviewer to potentially consider a fresher evaluation of our paper with these responses in context and potentially upgrade their score.\"}", "{\"comment\": \"Thank you for your time and effort in reading our rebuttal and for your helpful feedback. Please, note that we can no longer update the manuscript on Openreview. 
Hence, all the introduced changes will appear in the camera-ready version of the manuscript.\\n\\n>I suggest adding the definition clearly in the appendix, saying something like \\\"a Wasserstein space with the bundle of tangent spaces defined by ... is called Wasserstein manifold\\\"\\n\\nThank you for your suggestion, it significantly improves the clarity of our presentation! We have added the corresponding definition to the paper and stated that by the Wasserstein manifold, throughout the paper, we simply mean the Riemannian interpretation of the Wasserstein space.\\n\\n>Once the tangent space becomes an L2 space, some problems might arise. Is there a reason to assume that it does not give rise to issues?\\n\\nIndeed, the optimized objective is an average over the marginals in the state space, which is common in the literature (Ho et al, 2020, Lipman et al, 2023). Note that no \\u201cexploration\\u201d actually takes place since we are not optimizing over the marginals but rather learning the predefined dynamics. To our knowledge, this does not lead to errors blowing up, since training only requires simulation within the training distribution rather than out-of-distribution generalization.\\n\\n> Is there a reason why OT does not seem to help the other models? \\n\\nIn theory, the optimal transport coupling of individual marginals should not significantly affect the predictions because our goal is to generate the correct distribution, e.g. the correct population of cells. Also, note that the metrics we consider do not depend on the coupling between the initial distribution $p_0$ and the final $p_1$. In practice, we observe that it leads to marginal improvements, likely due to the simpler form of the vector fields that the model has to approximate. However, note that although the improvements are small, all methods, FM, CGFM, and MFM, do exhibit improvement when using OT relative to their non-OT counterparts (Table 2 in the main text). 
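As an aside for readers, the minibatch OT coupling used by the `-OT` variants discussed above can be sketched as follows; this is a hypothetical illustration built on SciPy's exact assignment solver, not the training code used in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_coupling(x0, x1):
    # Exact minibatch OT pairing: match each source sample to a target sample
    # so that the total squared Euclidean cost is minimized.
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return x0[rows], x1[cols]

rng = np.random.default_rng(0)
x0 = rng.normal(size=(32, 2))
x1 = rng.normal(size=(32, 2)) + 5.0

x0_ot, x1_ot = ot_coupling(x0, x1)

# Conditional velocities under the OT pairing vs. an independent pairing.
v_ot = x1_ot - x0_ot
v_ind = x1[rng.permutation(32)] - x0
```

Because the assignment minimizes the total squared cost, the OT-paired conditional velocities can never have a larger aggregate cost than those of an independent pairing, which is the sense in which OT couplings give the model simpler vector fields to fit.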
In the letters experiment (Table 4 in the Appendix), the lack of improvement when using OT is likely because the joint distribution $\\\\pi(x_0,x_1) = \\\\mathcal{N}(x_0|x_1,\\\\sigma_t)p_1(x_1)$ corresponds to the samples from the \\\"sharp\\\" silhouette $p_1(x_1)$ with added normal noise $\\\\mathcal{N}(x_0|x_1,\\\\sigma_t)$, which is much closer to OT than independent coupling $\\\\pi(x_0,x_1) = p_0(x_0)p_1(x_1)$.\\n\\nWe again thank the reviewer for their constructive comments and questions, and for engaging in fruitful discussion. We believe through implementing the reviewer's helpful suggestions and through this discussion, the quality of our manuscript has improved. We kindly ask, that if the reviewer deems our response satisfactory, to possibly consider improving their rating of our paper. We are more than happy to further engage in discussion and answer any salient points that the reviewer may have.\\n\\nHo, Jonathan, et al. \\\"Denoising diffusion probabilistic models.\\\" Advances in neural information processing systems, (2020).\\n\\nLipman, Yaron, et al. \\\"Flow matching for generative modeling.\\\" International conference on learning representations, (2023).\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for their thoughtful feedback, constructive questions, and valuable time spent on reviewing our work, which has helped improve our submission.\\n\\nIn this work, we introduced **Meta Flow Matching (MFM)**, a novel framework for learning the dynamic evolution of populations with the objective to learn population dynamics and generalize to unseen populations/distributions. By amortizing the flow model over initial populations -- using a GNN architecture to learn conditional embeddings of entire populations -- **on synthetic and real (biological) experiments we show that MFM can successfully generalize across previously unseen test populations**, relative to state-of-the-art baselines. 
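As a toy illustration of this amortization idea (a hypothetical stand-in for the GNN population embedding $\varphi(p_0; \theta)$; the real model is a trained GCN, not this two-line network), a permutation-invariant embedding of a cell population can be built from a k-nearest-neighbour graph followed by mean pooling:

```python
import numpy as np

def knn_graph(points, k):
    # Adjacency matrix of a k-nearest-neighbour graph over a point cloud.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]
    adj = np.zeros((len(points), len(points)))
    for i, row in enumerate(nbrs):
        adj[i, row] = 1.0
    return adj

def population_embedding(points, k, W):
    # One toy message-passing layer (neighbour averaging + linear map + tanh)
    # followed by mean pooling over the cells.
    adj = knn_graph(points, k)
    h = np.tanh(((adj @ points) / adj.sum(1, keepdims=True)) @ W)
    return h.mean(axis=0)

rng = np.random.default_rng(0)
pop = rng.normal(size=(50, 2))  # a "population" of 50 cells in 2-D
W = rng.normal(size=(2, 4))
emb = population_embedding(pop, k=5, W=W)

# Shuffling the cells leaves the embedding (numerically) unchanged.
emb_perm = population_embedding(pop[rng.permutation(len(pop))], k=5, W=W)
```

Mean pooling over neighbour-aggregated features makes the output depend only on the set of cells, not their ordering, which is what lets the vector field be conditioned on whole populations.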
\\n\\nWe are excited to see the positive feedback provided by the reviewers. To summarize, the reviewers found our work novel (jcAc, kKjh), well written (jcAc, Y1RT), and well motivated and meaningful (Kj6P, Y1RT). In particular, reviewer jcAc outlined that MFM is novel and that our work provides a strong theoretical foundation for our proposed framework; also stating that our method is a \\\"well-executed idea\\\" with \\\"no weaknesses\\\". Reviewer Kj6P found that MFM addresses a problem that \\\"is known to have plagued science for a long time\\\" and stated that our approach to modeling vector fields on the Wasserstein manifold is \\\"intuitive and meaningful\\\". In a similar vein, reviewer Y1RT liked our motivation for modeling dynamics of interacting particles through the use of distribution conditional continuity equations, describing our work as \\\"elegant, deserving of exploration\\\". Lastly, reviewer kKjh outlined that MFM is \\\"clearly novel\\\".\"}", "{\"comment\": \"Thanks again for the clarifications and additional analyses! I'm adjusting my rating from 5 to 6.\", \"i_would_still_recommend_a_few_edits_regarding_the_last_points_we_discussed\": \"> Figure 8\\n\\nIf the text is correct, then I think the axis labels should say PCA instead of UMAP.\\n\\n> $r^2$\\n\\nThank you for clarifying this a bit more in the paper. If I understand it correctly, you are calculating the correlation matrix over genes within each population and then calculating the overall correlation between the genes, comparing the two samples. I would take another look at the math: for example, if it's a matrix you calculate in eq. (39), you would want to add a transpose somewhere. I also believe that the notation $X_i$ is overloaded, once referring to an observation and then to a gene.\"}", "{\"comment\": \"Few more points.\\n\\nOverall, including the appendix seems to have helped, but still there are some vague things. 
\\n\\n-- At page 168, referenced in the answer by the authors, it does not seem that the phrase states that \\\"a Wasserstein manifold is...\\\". Actually, it seems that the word \\\"manifold\\\" appears only four times in the book (at least the version I have). So, the concept seems to have been introduced in this article mistakenly, as an equivalent definition to \\\"Wasserstein space\\\", which seems to appear several times in the book by Ambrosio et al. I suggest adding the definition clearly in the appendix, saying something like \\\"a Wasserstein space with the bundle of tangent spaces defined by ... is called Wasserstein manifold\\\". It seems absurd that the concept that is at the base of the whole work is not defined anywhere explicitly. \\n\\n-- Once the tangent space becomes an L2 space, some problems might arise. For instance, the matching now is a sort of average. Errors might be small \\\"on average\\\", but might blow up in certain regions that have not been explored. This is not considered in the article. Is there a reason to assume that it does not give rise to issues?\\n\\n-- I have now noticed that some of the errors were train errors, as pointed out. I am unsure why these are included, since it is not usually a metric of particular interest, but this clarifies my previous comment. Is there a reason why OT does not seem to help the other models? Is there a theoretical reason, or just lack of hyperparameter fine-tuning?\"}", "{\"comment\": \"We thank the reviewer for their time and effort in reviewing our work and reading our rebuttal. We are glad to see that our rebuttal and updates to the manuscript have improved the reviewer's evaluation of our work. We are happy to answer any lingering questions or address any new comments that may arise.\"}", "{\"summary\": \"This article deals with the problem of learning many-body dynamics via flow matching. The main objective is to model the interaction between components (e.g. 
particles) rather than modeling the entities separately. The resulting methodology allows one to correctly model the evolution of distributions from the time evolution of observations of sample populations.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"-- The problem is known to have plagued science for a long time. Therefore, new approaches and solutions are important contributions.\\n-- The idea of modeling the fields in the Wasserstein setting is intriguing, and intuitively meaningful. \\n-- The authors relate their approach with mean-field limits and diffusion processes. This is certainly something of interest.\", \"weaknesses\": \"-- First and foremost, the article is written in a very confusing way. It seems that the narrative is scattered, and this really makes it difficult to follow.\\n-- The problem seems to boil down to embedding populations using graph neural networks and then flow matching in Wasserstein manifolds. \\n-- It is unclear what a Wasserstein manifold is to start with. It clearly should not be a finite dimensional manifold, as depicted in their Figure 2. This is an infinite dimensional manifold, but the atlases are not clearly specified. For instance, a Banach manifold is a well understood object. In this case, the authors do not bother to introduce what they are talking about. P_2(X) is not generally even a Banach manifold. My understanding is that P_2(X) has a geometric structure only in some relatively special cases (Otto calculus), and usually people talk about a metric over P_2(X), and the notion of curvature in Wasserstein spaces is often not the curvature of a manifold in the traditional sense of differential geometry. How is the tangent space defined in this context? In section 2.3 it seems that the authors suggest that using the Wasserstein setup is rather important. 
As a consequence, I think it should not be left to the interpretation of the reader, but it should be clearly explained (at least defined). \\n-- In general, there is a terrible lack of definitions. Even the theorem, which should be a formalization of the work, is unclear because the definitions are either absent, or scattered throughout the article. At least an appendix with all definitions should be added. \\n-- The results do not seem overall particularly good. The table included in the text seems to show good results, but those in the appendix seem to show that the model is not particularly outperforming. One might not care much about this if the paper were a very well written piece of theoretical work, but as explained above it does not seem to be the case either.\", \"questions\": \"All weaknesses as stated above are my main questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer,\\n\\nThank you for your time and effort in reviewing our work and for providing helpful feedback to improve our manuscript. As the end of the discussion period is fast approaching, we hope we have addressed all your questions and concerns. If you have any more salient points or questions, we would be happy to engage in further discussion.\"}", "{\"title\": \"(1/3)\", \"comment\": \"We thank the reviewer for their time and effort in reviewing our paper. We are heartened to hear that the reviewer views MFM as \\\"clearly novel\\\" and is \\\"happy to see the conceptual novelty\\\" of our method. We next answer all the important questions raised by the reviewer while referring to the general response to all reviewers for any additional experiments.\\n\\n>Question on ODE: Then what is the particle-level ODE corresponding to the PDE (12)? Is it similar to FP-equation that dx/dt = vt(x, pt)? 
Can the author prove it or give the detailed reference?\\n\\nThe FP-equation is a special case of the PDE in equation (12) when, for instance, we take $v_t(x, p_t) = -\\\\frac{1}{2}\\\\nabla_x \\\\log p_t(x)$, as we discuss in Example 2. In general, equation (12) is our _assumption_ on the dynamics of the system (for letters we know it is diffusion, for cells it is unknown). To be precise, we assume that the vector field $v_t(x, p_t)$ can be represented as a function of density $p_t$, and try to learn this functional dependency via a GNN. However, clearly, we do not know the analytic or any other form of how $v_t(x, p_t)$ depends on $p_t$.\\n\\n> Can your method be extended to the SDE case?\\n\\nYes. It is straightforward to extend MFM to the stochastic setting. We leave exploration of the empirical performance of MFM in the stochastic setting to future work.\\n\\n>Analysis of population embeddings: To learn the population embeddings, I believe you should have enough examples of populations; otherwise, the generalization is less promising. \\n\\nWe agree that generalization is likely to improve with increasing data. We refer the reviewer to Figure 5 for an ablation over the number of letter populations and its effect on generalization performance in the synthetic letters setting. We indeed observe that as we increase the number of training populations, the generalization performance of MFM also improves. In contrast, the performance of the baseline methods, which are not designed to operate in the regime of generalizing to unseen distributions, does not meaningfully improve. \\n\\n>I would like to see a detailed analysis of population embeddings for a sanity check in both experiments.\\n\\nWe thank the reviewer for their helpful suggestion. To illustrate the ability of MFM to learn meaningful population embeddings, we have added a detailed analysis of population embeddings both in the synthetic data setting and in the organoid drug-screen dataset in Appendix F.1. 
To do this, we first use Uniform Manifold Approximation and Projection (UMAP) to project embeddings into 2 dimensions. We then compute the pairwise distances between the distributions (letters silhouette populations for the synthetic data and control populations for organoid drug-screen data). Through this analysis, we found that the embeddings reflect data characteristics and clusters of similar groups. Please see Appendix F.1 and Figure 7 for details in the updated text. On the organoid drug-screen data, we found that cell populations from patients with similar chemotherapeutic drug responses also cluster together. Ramos Zapatero et al. (2023) identified patients who are chemosensitive (responsive to chemotherapy) or chemorefractory (not responsive to chemotherapy). In turn, we observed PDO11 and PDO141 clustering together, which are derived from different patients and are both classified as chemorefractory.\"}"
] }
9RnTw9YiXV
Demystifying the Underappreciated Long-Tail Problems in Large Vision Language Models
[ "Mingyang Song", "Xiaoye Qu", "Jiawei Zhou", "Yu Cheng" ]
Recently, Large Vision-Language Models (LVLMs) have made significant progress, seamlessly integrating the visual comprehension capabilities of vision encoders with the language generation strengths of language models (LMs). Despite the success of LVLMs, the training or aligning data of LVLMs suffers from the $\textit{Long-Tail (LT)}$ problem, a special type of data distribution that is highly imbalanced, with a large number of tail (minority) instances. A significant amount of research has focused on mitigating LT through data adjustment or network structure reorganization; however, efforts targeting generative LVLMs remain limited. In this paper, we present an in-depth analysis of the LT issues persisting in LVLMs' training data and build a distribution of four perspectives, addressing both visual and language aspects. To mitigate the aforementioned challenges, we propose an $\textbf{A}$daptive $\textbf{D}$ata $\textbf{R}$efinement Framework ($\textbf{ADR}$), which consists of two stages: $\textbf{D}$ata $\textbf{R}$ebalancing (DR) and $\textbf{D}$ata $\textbf{S}$ynthesis (DS). In the DR stage, we adaptively rebalance the redundant data based on entity distributions, while in the DS stage, we leverage the latent representations of scarce images to adaptively supplement the underrepresented portions. To validate the effectiveness of our approach, we conduct experiments on a series of comprehensive benchmarks, including GPT-assisted evaluations to assess the overall performance variations introduced by our method. Through comprehensive evaluations, ADR effectively mitigates the long-tail problem in the training data, improving the average performance of LLaVA 1.5 relatively by $\textbf{2.62\%}$ across 10 benchmarks, without increasing the training data volume.
[ "LVLMs", "Long-Tail Issue", "Data Synthesis" ]
https://openreview.net/pdf?id=9RnTw9YiXV
https://openreview.net/forum?id=9RnTw9YiXV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v9vTWixJOV", "dD28sgNrzs", "Z6xw84TJTq", "VbZaRGHkvF", "DnusMqD17b", "10Mv2pRJyO" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730273579508, 1731595048196, 1730134794454, 1730534580208, 1730650265191, 1730045763087 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2832/Reviewer_KfdZ" ], [ "ICLR.cc/2025/Conference/Submission2832/Authors" ], [ "ICLR.cc/2025/Conference/Submission2832/Reviewer_E1cG" ], [ "ICLR.cc/2025/Conference/Submission2832/Reviewer_Q7sJ" ], [ "ICLR.cc/2025/Conference/Submission2832/Reviewer_HeWS" ], [ "ICLR.cc/2025/Conference/Submission2832/Reviewer_DBbC" ] ], "structured_content_str": [ "{\"summary\": \"This paper focuses on the long-tail problem in MLLMs, which is critical for both academic research and practical application. The authors propose an adaptive data refinement framework to address this issue, which combines the strategies of data rebalancing and data synthesis. The experimental results well validate the effectiveness of this approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The studied problem of this paper, i.e., long-tail data distribution, is important and significant in existing MLLMs.\\n\\n2. The proposed ADR can alleviate this issue to some extent even by simply using data rebalancing without new training data.\\n\\n3. The overall presentation is good, and the paper is easy to follow.\", \"weaknesses\": \"1. The novelty of the proposed ADR method is not well described. Rebalancing and data synthesis are common solutions for the long-tail problem, but the main differences and advantages of the proposed methods are not very clearly stated. Besides, it is better to introduce the principle and methodology of the compared data-rebalancing methods for comparison.\\n\\n2. 
The effectiveness of ADR's rebalancing scheme is obvious compared to the default LLaVA SFT data scheme, but its advantages seem marginal compared to the rebalancing baselines. In Tab. 5, ADR's rebalancing is slightly better than perplexity but close to Random, especially for the long-tail task VizWiz. The authors are expected to answer and analyze this case.\", \"minor\": \"I would like to suggest that the authors gain more in-depth insights into the long-tail problems of MLLMs. In addition to performance improvement, what insights can we obtain from the long-tail study? For instance, visual hallucination is often regarded as the main problem of MLLMs, and most people think that it is related to the visual feature quality. But in early VL studies like VQA, visual hallucination is also related to language bias, i.e., the model guesses the answer according to the data distribution. In some cases, it is also a problem of long-tail data distribution. \\n\\nSo, it would be better if we could see that ADR can solve some important problems of MLLMs in addition to performance.\", \"questions\": \"Most of my concerns and questions are given in the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"After further review of our work, we have identified areas that require significant improvement. To ensure the accuracy and quality of our research, we have decided to withdraw our submission. We sincerely thank the reviewers for their valuable feedback, which has provided us with insightful suggestions and helped us identify key areas for improvement. We appreciate your understanding.\"}", "{\"summary\": \"This paper proposes an approach to mitigate the long tail distribution problem in the training data of large vision-language models. 
The approach named Adaptive Data Refinement Framework (ADR) consists of two stages: Data Rebalancing (DR) and Data Synthesis (DS).\\nThe approach is applied to the training data of LLaVA 1.5 and ShareGPT-4V separately. The new models trained on the rectified training data are evaluated across 10 benchmarks and demonstrate performance improvement over the original models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) This paper tackles an interesting and important problem - long-tail distribution in training data of large vision-language models.\\n2) The paper analyzes the long-tail distribution in the training data of LLaVA 1.5, and shows the correlation between prediction errors on benchmarks and the long-tail distribution \\n3) The proposed data refinement framework leads to performance improvement on LLaVA 1.5 and ShareGPT-4V\", \"weaknesses\": \"1) In the summary of contributions (line 91, line 96), the paper claims that the proposed framework ADR enhances the performance of large VLMs \\u201cwithout introducing any additional training\\u201d. This claim might be misleading, as additional visual instruction tuning is required to train a new model on the data refined with ADR. The authors could consider clarifying or rephrasing this claim, mentioning the required visual instruction tuning step.\\n2) In line 402, an off-the-shelf vision captioner is used to generate a caption for the synthetic image. Then the caption is extended into a conversation using a language model.\\nIn Section C2, it is mentioned that LLaVA 1.5 13B and ShareCaptioner are used as the captioner, and LLaMA3 is used as the LM for generating the conversation. How dense are the captions generated by LLaVA 1.5 and ShareCaptioner? How to prevent hallucination when generating the conversation from the caption without passing the image as input? 
\\nThe authors could provide details on a) the average length or level of detail in the captions generated by LLaVA 1.5 and ShareCaptioner b) techniques or safeguards used to ensure the generated conversations remain faithful to the original image content without introducing hallucinations. \\n3) Qualitative examples that show the quality of the generated data from the Data Synthesis step are missing. The authors could include a few representative examples of synthesized data showing the original image, generated caption, and resulting conversation.\", \"questions\": \"1) How to interpret the x-axis Q_r in Figure 4. Are these the indices of words that are ordered according to the occurrence frequency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a comprehensive analysis of the long-tail problem in the instruction tuning data of LVLM from four perspectives: token, object, co-occurrence, and interrogation. Based on this analysis, the authors propose an Adaptive Data Refinement Framework, which consists of two stages: Data Rebalancing and Data Synthesis. To validate the framework's effectiveness, the authors conduct extensive experiments on 10 benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe experiments are sufficient. The authors conducted experiments on multiple benchmarks and on different baselines, such as LLaVA 1.5 and ShareGPT4V.\\n2.\\tThe writing is fluent and easy to understand.\", \"weaknesses\": \"1.\\tThe significance of addressing the long-tail problem during the instruction-tuning stage remains ambiguous.\\nDuring the instruction-tuning stage, the LLM is usually trainable, leading to a large number of trainable parameters; therefore, the LVLM can easily fit the training data. A typical phenomenon is that the loss will suddenly decrease after the first epoch. 
Given this, the long-tail problem is unlikely to be a pivotal concern during the instruction-tuning stage for LVLM. Instead, the long-tail problem may assume greater importance during the pretraining stage for LVLM.\\n2.\\tAbsence of visualization results.\\nAs the Data Synthesis Stage is an important part of the proposed method, the authors should provide examples of synthetic data, including images and conversations.\\n3.\\tThe impact of the Data Balance Stage is inconspicuous.\\nAs shown in Tab. 5, when compared to randomly sampled training data, the performance of \\u2018our-balance\\u2019 does not show a significant improvement. In contrast, it even declines in some benchmarks, such as VizWiz, TextVQA, and MMstar. Based on the comparisons in Table 5, the improvement attributed to the Data Balance Stage is unlikely to be due to the balanced nature of the training data, but rather to the reduction in training data redundancy.\\n4.\\tInsufficient discussion of related works.\\nThere exist other studies that explore leveraging synthetic data to tackle the long-tail problem. The authors should discuss the difference between the proposed method and these related works[1][2].\\n[1]\\tLTGC: Long-tail Recognition via Leveraging LLMs-driven Generated Content\\n[2]\\tBalancing the Picture: Debiasing Vision-Language Datasets with Synthetic Contrast Sets\", \"questions\": \"1.\\tIt is unclear why addressing the long-tail (LT) problem in instruction-tuning data would be beneficial for test benchmarks.\\nAs the authors mention, the distribution of training and test data differs. I am curious about why tackling the LT problem in instruction-tuning data is advantageous for test benchmarks. The test benchmarks can be considered as extreme long-tail cases within the training data, occurring zero times during training. 
From this perspective, the long-tail issue in test benchmarks still persists.\\n2.\\tWhat is the performance of synthesizing data with the balance strategy of random sampling?\\n3.\\tWhat is the performance when only the Data Synthesis Stage is implemented?\\n4.\\tHow can the author ensure that the synthetic data is correct? Hallucinations in captions may lead to errors in the conversations generated by LLMs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the authors present an approach to tackle the long-tail distribution problem present in the training data of modern vlms. Their methodology has two steps: the first balances the data in the long tail, and the second generates data to mitigate the long tail. Experiments are performed on two VLMs and they show some improvements.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed approach tackles a relevant problem, at least for some of the present vlms, however, questions still remain over the overall usefulness.\\n2. The method involves two parts to solve the problem, which seems logical. \\n3. The experiments show some improvements, however, minor.\", \"weaknesses\": \"1. The paper can be written more clearly. It is quite dense in some parts, such as the main method section.\\n2. The experiments are performed on two vlms. Due to the rapidly evolving landscape, I find these experiments a little limited. Maybe it could've been more useful to experiment with other vlms as well. \\n3. The section 3.4 has some trivial findings. All three findings are pretty general and well-known to the community. I am wondering about the usefulness of this section. \\n4. I am also wondering about the overall motivation of the paper. 
More specifically, wouldn't scale just solve the problem of long-tail distributions in the data?\", \"questions\": \"I have a few questions:\\n\\n1. Importantly, with the ever-growing interest of the community in VLMs, is this problem really important? For example, in the recent 'Molmo and PixMo' paper - they proposed a large-scale dataset. Would the scale automatically solve this problem? \\n2. Can the authors also report results with more datasets? \\n3. I am wondering about some of the choices in the 'Entity Distribution Construction' - did the authors try some other ways? How did they choose these particular ways? \\n4. Interrogation: how did the authors choose the categories of these questions in the data? Is there some existing taxonomy which they followed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the issue of imbalanced data distributions in training Large Vision-Language Models (LVLMs) by proposing an Adaptive Data Refinement Framework (ADR). The approach consists of two stages: Data Rebalancing (DR) and Data Synthesis (DS). In the DR stage, redundant and low-quality instances from overrepresented head data are filtered to create a more balanced dataset, enhancing the model's generalization capabilities. The DS stage enriches the training set by synthesizing new instances for scarce tail data using latent representations, adding diversity without increasing the overall data volume or training effort. Together, these stages aim to improve LVLM performance, particularly on tail concepts, by addressing data imbalance efficiently.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Balanced Data Representation: The ADR framework effectively filters redundant head data, ensuring more equitable representation of both head and tail instances, thus enhancing model generalization.\\n2. 
Tail Data Synthesis: By synthesizing new instances of underrepresented tail data, the framework enriches the training set, boosting performance on less frequent concepts.\\n3. Efficiency: ADR improves LVLM performance without increasing training data volume or requiring additional training, offering a resource-efficient solution to the long-tail problem.\", \"weaknesses\": \"1. Limited Impact of Certain Perspectives: The ablation study shows that some perspectives, such as co-occurrence and interrogation, do not significantly enhance performance, suggesting not all data perspectives are equally beneficial.\\n2. Limited Novelty: The approach largely relies on well-established data rebalancing and synthesis techniques based on object and token frequencies. While effective, it doesn't introduce fundamentally new concepts or methods, making the contribution incremental rather than groundbreaking.\", \"questions\": \"1. It appears that tokens and objects drive most of the performance gains. Given this, what is the rationale for including perspectives like interrogation and co-occurrence, which show limited impact?\\n2. If only the textual data were parsed for objects and tokens, without processing the images, how would the results be affected? It seems possible to achieve similar outcomes without image data, which raises questions about the necessity of visual inputs for the observed improvements.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
9RcofuNF5p
Contextualized Messages Boost Graph Representations
[ "Brian Godwin Lim", "Galvin Brice Sy Lim", "Renzo Roel Tan", "Kazushi Ikeda" ]
Graph neural networks (GNNs) have gained significant attention in recent years for their ability to process data that may be represented as graphs. This has prompted several studies to explore their representational capability based on the graph isomorphism task. These works inherently assume a countable node feature representation, potentially limiting their applicability. Interestingly, only a few study GNNs with uncountable node feature representation. In the paper, a novel perspective on the representational capability of GNNs is investigated across all levels—node-level, neighborhood-level, and graph-level—when the space of node feature representation is uncountable. More specifically, the strict injective and metric requirements are *softly* relaxed by employing a *pseudometric* distance on the space of input to create a *soft-injective* function such that distinct inputs may produce *similar* outputs if and only if the *pseudometric* deems the inputs to be sufficiently *similar* on some representation. As a consequence, a simple and computationally efficient *soft-isomorphic* relational graph convolution network (SIR-GCN) that emphasizes the contextualized transformation of neighborhood feature representations via *anisotropic* and *dynamic* message functions is proposed. A mathematical discussion on the relationship between SIR-GCN and widely used GNNs is then laid out to put the contribution into context, establishing SIR-GCN as a generalization of classical GNN methodologies. Experiments on synthetic and benchmark datasets then demonstrate the relative superiority of SIR-GCN, outperforming comparable models in node and graph property prediction tasks.
[ "deep learning", "graph neural network", "representational capability", "soft-isomorphic relational graph convolution network" ]
https://openreview.net/pdf?id=9RcofuNF5p
https://openreview.net/forum?id=9RcofuNF5p
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xRxjrqBjCs", "wFQBIibEXu", "tUv3cqopCO", "qnxvybePAw", "o8cOwfxLtu", "h8xfH9tU2g", "h5nb7AjvFc", "gAjvbuslDm", "dNG47UoMmV", "ZB2V94ivjR", "YUJ16I7tvR", "Twi8JjXO5n", "PyeRScWqXN", "Hl6F8ehHH3", "GREgQCcbOu", "EtjXR6sXwC", "BQa2w2VKRt", "BAWXvrrJDp", "9PSgxrh8Be", "1FJvp68XKT" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730544128438, 1731202111473, 1731654859293, 1731654122128, 1732451930853, 1738646735895, 1731654907881, 1731863657628, 1731667730360, 1731653498166, 1730684506374, 1732456119823, 1732255732818, 1732246828640, 1732541605881, 1731668257177, 1732250474198, 1733149145760, 1731667566306, 1732508639564 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_YvUg" ], [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_NmsR" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_NmsR" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_rjgm" ], [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_YvUg" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_rjgm" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Reviewer_NmsR" 
], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ], [ "ICLR.cc/2025/Conference/Submission7053/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper introduces a new perspective on the representational capability of GNNs by presenting a soft-injective function using a pseudometric distance. It then proposes a new message-passing scheme that performs competitively across various datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an interesting approach, offering clear insights into relaxing the injective constraint in message-passing processes by defining a pseudometric distance, which captures differences within data effectively.\\n\\n2. The theoretical basis is solid, providing a coherent and accessible framework that helps explain the introduced ideas.\\n\\n3. Although the proposed method is limited by the 1-WL test, it shows strong representational capabilities. Its effectiveness is supported by experiments conducted on both synthetic and real-world datasets, confirming its practical usefulness.\", \"weaknesses\": \"1. The experimental section is somewhat unclear, especially regarding the number of parameters in Table 4 compared to Table 3. It is confusing why the same model appears to have twice the number of parameters. The authors emphasize \\\"a single layer\\\" in line 507 but do not mention this detail in Table 4, which could lead to potential inconsistencies. To improve clarity and fairness, the authors should add explanations of any differences in the model architecture between datasets. This would help readers better understand the experimental setup and assess the fairness of the comparisons.\\n\\n\\n2. 
Although the authors claim to achieve \\\"a balance between computational complexity and model expressivity\\\" (line 820), the experiments related to computational efficiency (Tables 6,7) are not convincing enough. Adding runtime analysis on larger-scale datasets, such as ogbg-molhiv/ogbn-arxiv, would support these claims and better demonstrate the method's scalability.\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper extends PNA, which considers uncountable node features, by incorporating anisotropic and dynamic messages. The difference between PNA and the proposed SIR-GCN is the nonlinear mapping outside the message creation. To motivate this method, the authors introduce the concept of soft-injective function. This paper shows some existing methods are variants of the proposed SIR-GCN. Evaluations on synthetic and real datasets demonstrate its effectiveness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The uncountable feature is an important topic in GNN.\", \"The writing and organization are easy to follow.\", \"The insight of existing methods under the framework of SIR-GCN is interesting.\"], \"weaknesses\": [\"The novelty seems weak. Neither the anisotropic nor the dynamic component is novel. This paper can be seen as a combination of PNA and GATv2.\", \"The motivation and the proposed SIR-GCN are not closely connected. The connection between the soft-injective function, the dynamic transformation, and the anisotropic message is not clear.\", \"The description of the GraphHeterophily is not clear. Thus, it is not obvious why the proposed SIR-GCN significantly outperforms existing ones.\", \"The derivation from Eq. 15 to Eq. 16 seems incorrect. First, the definition of $A$ is not given. Secondly, the anisotropy of GAT is on the edge weight, while that of Eq. 16 is on the message. 
It is not obvious.\", \"Figure 2 is not described clearly. What is the meaning of the horizontal and vertical coordinates? Why is the contour of the MLP as shown in Figures 2(c) and 2(d)?\", \"The evaluations are not convincing. Firstly, no ablation study or illustrative examples are given, so the effect of the proposed SIR-GCN is not justified. Secondly, it is not known whether the proposed SIR-GCN can be applied to more complex models with higher performance.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the positive feedback!\\n\\n1. The key novelty of SIR-GCN lies in its *anisotropic* and *dynamic* message function, allowing it to better handle uncountable node features. Unlike other GNNs, which only employ *isotropic* or monotonic/linear transformations of neighborhood features, SIR-GCN is among the first MPNN instances to introduce *dynamic* transformations of neighborhood features that also account for the center node features. This design specifically enables SIR-GCN to learn the complex, nuanced relationships between pairs of neighboring nodes before aggregating them into a single feature. This is also key to how SIR-GCN handles uncountable node features, as explained in Line 527 of the conclusion. An illustration of this feature is further elaborated in 3 below.\\n\\n2. Similar to the results presented in Corso et al. (2020), Table 3 also directly presents the available results from Dwivedi et al. (2023), Corso et al. (2020), and Tailor et al. (2021). The missing values indicate that these works have not considered the particular dataset for their model. This is added as a footnote in Table 3. We also emphasize that in addition to the datasets considered in Corso et al. 
(2020) for PNA, our paper further includes the WikiCS, PATTERN, and CLUSTER datasets from the Benchmarking GNNs (Dwivedi et al., 2023) as an additional evaluation. \\n\\n3. We thank the reviewer for pointing us to the work of Mao et al. (2024). While the study does analyze real-world graphs with varying degrees of heterophily, the graphs considered are specific to node property prediction tasks. In contrast, GraphHeterophily is designed specifically for graph property prediction which makes the graphs considered in Mao et al. (2024) not directly compatible. Nonetheless, we emphasize that the directed graphs in GraphHeterophily are uniformly generated using DGL's `rand_graph` function, with class labels also uniformly assigned using PyTorch's `randint` function. These measures, highlighted in Line 775, ensure that the graphs are sufficiently diverse in terms of graph structure and heterophily degrees, supporting the robustness of the results.\\n\\nFurthermore, we would also like to highlight that while other GNNs fail in the simple task of GraphHeterophily, SIR-GCN excels in the task. Its remarkable performance is nevertheless expected, as explained in Line 430, since if node features are one-hot encodings of class labels, SIR-GCN with sum aggregation, $\\\\boldsymbol{W_Q} = \\\\boldsymbol{I}$, $\\\\boldsymbol{W_K} = - \\\\boldsymbol{I}$, $\\\\sigma = \\\\text{ReLU}$, and $\\\\boldsymbol{W_R} = \\\\boldsymbol{1}^\\\\top$ can consistently produce accurate outputs, regardless of graph structure or heterophily degrees. This success may be attributed to the ReLU activation applied along edges, enabling SIR-GCN to \\\"reason\\\" along edges, based on the labels of pairs of neighboring nodes, and produce a *dynamic* message accounting for this learned relationship. This illustration further highlights the significance of *anisotropic* and *dynamic* message functions, underscoring the novelty of SIR-GCN as the first MPNN instance to satisfy this requirement.\\n\\n4. 
The attention mechanism of the original GAT is *anisotropic* but not *dynamic* as highlighted by Brody et al. (2021). GATv2 addresses this limitation by making the attention mechanism *dynamic*. Nevertheless, both GAT and GATv2 still only linearly transform the messages (neighborhood features) as stated in Line 305. Thus, the features of the center node only affect the aggregated neighborhood features by determining the degree of contribution through the scalar attentional weight. In contrast, SIR-GCN is the first MPNN instance to leverage this idea and make the actual messages *anisotropic* and *dynamic*, rigorously grounded in theory. The experimental results, particularly in Table 3 where SIR-GCN significantly outperforms GAT across all datasets with the same parameter budget, underscore the significance of this idea and the novelty of SIR-GCN.\", \"references\": \"Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.\\n\\nGabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velickovic. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33:13260-13271, 2020.\\n\\nVijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. Journal of Machine Learning Research, 24 (43):1-48, 2023.\\n\\nHaitao Mao, Zhikai Chen, Wei Jin, Haoyu Han, Yao Ma, Tong Zhao, Neil Shah, Jiliang Tang. Demystifying structural disparity in graph neural networks: Can one size fit all?. Advances in neural information processing systems, 2024.\\n\\nShyam A Tailor, Felix L Opolka, Pietro Lio, and Nicholas D Lane. Do we need anisotropic graph neural networks? arXiv preprint arXiv:2104.01481, 2021.\"}", "{\"title\": \"Rebuttal by Authors (cont.)\", \"comment\": \"3. 
We further highlight that the runtime analyses on synthetic datasets in Tables 6 and 7 provide a more controlled assessment of computational complexity, illustrating changes in model runtime as model complexity and problem size increase (*i.e.*, as $n$ and $c$ increase). This comparison allows for more meaningful insights into the scalability of each model in isolation. To further strengthen our claim, we have included an additional comparison of the asymptotic complexities of each model in Appendix C and in Line 825 to complement the results in Tables 6 and 7, specifically with regards to PNA which is also designed for uncountable node features but requires significantly longer runtime. The table further highlights how SIR-GCN has a computational complexity comparable to GCN, GraphSAGE, GAT, GATv2, and GIN while outperforming these models across all benchmarks. Notably, SIR-GCN also demonstrates a lower complexity than PNA, yet delivers superior performance across all datasets. These additional analyses underscore how SIR-GCN effectively balances computational efficiency and model expressivity, further demonstrating its novelty and practical utility.\\n\\n| Model | Model Complexity |\\n| ----------- | :--------------------------------------------------------------------------------------------------------------------------: |\\n| GCN | $O(V \\\\times d_\\\\text{out} \\\\times d_\\\\text{in} + E \\\\times d_\\\\text{out})$ |\\n| GraphSAGE | $O(V \\\\times d_\\\\text{out} \\\\times d_\\\\text{in} + E \\\\times d_\\\\text{out})$ |\\n| GAT / GATv2 | $O(V \\\\times d_\\\\text{out} \\\\times d_\\\\text{in} + E \\\\times d_\\\\text{out})$ |\\n| GIN | $O(E \\\\times d_\\\\text{in} + V \\\\times \\\\texttt{MLP})$ |\\n| PNA | $O(E \\\\times d_\\\\text{in}^2 + E \\\\times d_\\\\text{in} \\\\times k + V \\\\times d_\\\\text{out} \\\\times d_\\\\text{in} \\\\times k)$ |\\n| SIR-GCN | $O(V \\\\times d_\\\\text{hidden} \\\\times d_\\\\text{in} + E \\\\times d_\\\\text{hidden} + 
V \\\\times d_\\\\text{out} \\\\times d_\\\\text{hidden})$ |\"}", "{\"comment\": \"Thanks for your further responses.\\n1. I acknowledge the novel theoretical framework which results in the proposed SIR-GCN.\\n2. GATv2 also provides a discussion on the representational capability as in Theorems 1 and 2 in that paper.\\n3. The performance improvement should be based on the SOTA instead of the basic GNNs. Thus, it is incremental. \\n4. Being universal to other complex models may demonstrate the impact of the proposed theory on the graph machine learning field. Thus, it is important to me.\\n\\nAccording to the above considerations, I tend to keep my ratings.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Rebuttal by Authors (cont.)\", \"comment\": \"5. It may be possible to improve the performance of GAT and GraphSAGE in Table 4 with more parameters. However, as noted in Line 806, the results presented are taken directly from the OGB leaderboard. Extending GAT and GraphSAGE with more parameters for potential performance improvements would require significant tuning efforts and is beyond the scope of our work. Our primary focus is to highlight the performance of SIR-GCN relative to publicly released results in the leaderboard as stated in Lines 806 and 816.\"}", "{\"title\": \"Clarifications on Novelty and Significance\", \"comment\": \"We would like to further clarify the novelty and significance of our work, especially in relation to Velickovic et al. (2017), Brody et al. (2021), and Corso et al. (2020).\\n\\nFirst, while Velickovic et al. (2017) introduced the original GAT by integrating attention mechanisms into GNNs, their work primarily focused on heuristic and empirical validation. It lacked a rigorous theoretical analysis of how attention mechanisms impact the representational capability of GNNs. Brody et al. 
(2021) later extended GAT by specifically analyzing the role of *dynamic* attention mechanisms, providing theoretical insights into their utility. However, their contributions were still limited to refining the intuition behind GAT and did not explore its implications for handling uncountable node features or the broader representational capability of GNNs. Moreover, given the focus on attention mechanisms, both studies simply employ linear transformations to the actual messages. \\n\\nMeanwhile, Corso et al. (2020) specifically tackled the problem of uncountable node features by providing theoretical justifications for using multiple aggregators and scalers in GNNs to enforce injectivity for graph isomorphism tasks, resulting in the PNA. Nonetheless, the *anisotropic* nature of PNA emerges as a consequence of applying the theoretical result heuristically within a linear/*static* MPNN framework as we noted in Line 343, rather than through theoretical insight. In fact, our analysis in Line 332 reveals a critical limitation in the *anisotropic* nature of PNA, specifically in the limited influence of center node features on the aggregated neighborhood features.\\n\\nIn contrast, our work fundamentally diverges from these prior studies by introducing a **novel and comprehensive perspective** on the representational capability of GNNs with uncountable node features. Specifically, we are the first to introduce **new theoretical results**, based on *pseudometrics* and *soft-injective* functions, demonstrating how GNNs can preserve their representational capability even by relaxing the strict injective and metric constraints in previous works, such as Corso et al. (2020). Our main theoretical result in Corollary 1 directly leads to the emergence of the *anisotropic* and *dynamic* properties of *soft-injective* message functions, as outlined in Lines 192 and 216. While these properties have been studied independently by Brody et al. 
(2021), their presence here only serves to underscore how existing works complement and validate our theoretical findings. Crucially, our work is the first to theoretically justify how these properties, when applied to message functions (in contrast to attention mechanisms), enable GNNs, specifically SIR-GCN, to effectively handle uncountable node features, as highlighted in Line 531. To support our theoretical findings, we also provide **intuitive illustrations** through two synthetic datasets, demonstrating the limitations of existing GNNs in simple node and graph property prediction tasks and how SIR-GCN, with its novel design, specifically addresses these weaknesses, offering an intuitive understanding of the practical utility of our theoretical results. Additionally, our framework is validated by **extensive experimental results** on benchmark datasets, where the theoretical concepts and intuition presented concretely translate to SIR-GCN consistently outperforming conventional GNNs across diverse tasks and domains, highlighting the significance of our theoretical results. Overall, our work introduces a novel theoretical foundation for a key problem in GNNs that is supported by both intuition and experimental results, thereby advancing our understanding of the representational capabilities of GNNs.\", \"references\": \"Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.\\n\\nGabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velickovic. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33:13260\\u201313271, 2020.\\n\\nPetar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.\"}", "{\"title\": \"Rebuttal by Authors (cont.)\", \"comment\": \"3. 
The GraphHeterophily dataset is designed to test the ability of models to reason about heterophilous relationships in directed graphs. The graphs are uniformly generated using DGL's `rand_graph` function with each node uniformly assigned one of $c$ class labels using PyTorch's `randint` function. This approach ensures diversity in graph structures and degrees of heterophily, making the dataset robust for evaluation. Detailed descriptions are provided in Appendix B1. The task is then to count the total number of directed edges in each graph connecting nodes with different class labels, as clarified in Line 417. A sample graph is illustrated in Fig. 5, where four nodes (labeled A or B) are connected by six directed edges. Models must then correctly identify/count the four edges (highlighted in blue) that connect nodes with distinct labels.\\n\\nThe results in Table 2 show that SIR-GCN consistently achieves near-zero MSE loss for this simple illustrative task while other models (including PNA and GATv2) obtained large losses. This performance is nevertheless expected, as explained in Line 431, since if node features are one-hot encodings of class labels, SIR-GCN with sum aggregation, $\\\\boldsymbol{W_Q} = \\\\boldsymbol{I}$, $\\\\boldsymbol{W_K} = - \\\\boldsymbol{I}$, $\\\\sigma = \\\\text{ReLU}$, and $\\\\boldsymbol{W_R} = \\\\boldsymbol{1}^\\\\top$ can consistently produce accurate outputs, regardless of graph structure or heterophily degrees. The key to this success lies in the *anisotropic* and *dynamic* message function enabled by the ReLU activation along edges, which allows SIR-GCN to \\\"reason\\\" based on pairs of neighboring node labels and generate nuanced, context-aware messages. 
It is worth noting that while GATv2 can also somewhat \\\"reason\\\" along edges due to its *dynamic* attention mechanism, its reliance on attentional (softmax) aggregation, which fails to make sharp decisions (Velickovic, 2024) and preserve graph structure as noted in Line 433, hinders its performance. This distinction further highlights the flexibility of SIR-GCN in handling such challenges. Overall, this task underscores the importance of *anisotropic* and *dynamic* message functions (in contrast to attention mechanism), demonstrating SIR-GCN as the first MPNN instance to meet these requirements, further solidifying its contribution and novelty with respect to existing GNNs.\\n\\n4. We thank the reviewer for pointing out the lack of definition for $\\\\boldsymbol{A}$. In response, we have changed $\\\\boldsymbol{W_R} = \\\\boldsymbol{a_\\\\text{GAT}^\\\\top}$ for clarity. To clarify, Eq. 16 simply demonstrates how the unnormalized attention mechanism in Eq. 15 for GATv2 may be interpreted as the contextualized message in the SIR-GCN model, as mentioned in Line 313. We do not intend to suggest that Eq. 16 is equivalent to GAT. Instead, the illustration shows how the concept of *anisotropic* and *dynamic* functions in the attention mechanism of GATv2 are adapted to message functions in SIR-GCN. This aligns with the explanation above in 1. Furthermore, we also emphasize that only the attention weights in GATv2 are *anisotropic* and *dynamic*, its message functions are still only linearly transformed, which can limit its expressivity, as discussed in Line 305. In contrast, SIR-GCN introduces a contextualized, non-linear transformation to the message functions, improving its ability to capture complex relationships. The significance of this contribution is evident in the experimental results, particularly in Table 3 where SIR-GCN significantly outperforms GAT across all datasets with the same parameter budget. 
Nevertheless, in Line 314, we clarify how GATv2, up to a normalizing constant, can be obtained from SIR-GCN by selecting the appropriate parameters: $\\\\boldsymbol{W_Q} = \\\\begin{bmatrix} \\\\boldsymbol{W_{Q,\\\\text{GAT}}} \\\\\\\\\\\\\\\\ \\\\boldsymbol{0} \\\\end{bmatrix}$, $\\\\boldsymbol{W_K} = \\\\begin{bmatrix} \\\\boldsymbol{W_{K,\\\\text{GAT}}} \\\\\\\\\\\\\\\\ \\\\boldsymbol{W_{K,\\\\text{GAT}}} \\\\end{bmatrix}$, $\\\\sigma\\\\left(\\\\begin{bmatrix} \\\\boldsymbol{h_1} \\\\\\\\\\\\\\\\ \\\\boldsymbol{h_2} \\\\end{bmatrix}\\\\right) = \\\\exp\\\\left(\\\\boldsymbol{a_\\\\text{GAT}^\\\\top} ~ \\\\text{LeakyReLU}\\\\left(\\\\boldsymbol{h_1}\\\\right)\\\\right) \\\\cdot \\\\boldsymbol{h_2}$, and $\\\\boldsymbol{W_R} = \\\\boldsymbol{I}$.\", \"references\": \"Petar Velickovic, Christos Perivolaropoulos, Federico Barbero, and Razvan Pascanu. softmax is not enough (for sharp out-of-distribution). arXiv preprint arXiv:2410.01104, 2024.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the constructive feedback!\\n\\n1. We would like to clarify that both SIR-GCN models in Tables 4 and 5 largely employ the same architectural design with only a single GNN layer, as noted in Line 485 for Table 4 and Line 512 for Table 5. Appendix B2 presents a more detailed description of the architecture where the key difference between the two models lies in Table 5 including a graph readout function (since ogbg-molhiv is a graph property prediction task) whereas Table 4 does not (since ogbn-arxiv is a node property prediction task). The large discrepancy in parameter counts is primarily due to the input and output dimensions of each dataset. For ogbn-arxiv in Table 4, the input node features are 768-dimensional with a 40-dimensional output, while for ogbg-molhiv in Table 5, the input node features are only 174-dimensional with a scalar output. 
This difference in the input node feature dimension, stated in Lines 800 and 809, naturally affects the parameter count due to the Linear layer multiplying the input dimension by the hidden state dimension, even if they have the same architectural design. Specifically, a single Linear layer in ogbn-arxiv requires ~200,000 (768 x 256) parameters while a single Linear layer in ogbg-molhiv only requires ~53,000 (174 x 300) parameters. We also emphasize that, given the distinct nature and domains of these datasets (citation network vs molecular network), a comparison between parameter counts is not meaningful for fairness. Instead, we highlight that SIR-GCN can outperform larger models using fewer parameters in both datasets, particularly with LGGNN in Table 4 and GSN in Table 5. This highlights the utility and expressivity of SIR-GCN.\\n\\n2. We have performed additional runtime analysis based on the reviewer's suggestion. The models below follow a similar architecture as reported in Appendix B2 for SIR-GCN using the same hyperparameters. The results still underscore how SIR-GCN effectively balances computational efficiency and model expressivity, particularly when compared against PNA which is also designed for uncountable node features. Notably, PNA resulted in an out-of-memory (OOM) error for ogbn-arxiv as it requires $O(|\\\\mathcal{E}| \\\\times d_\\\\text{in})$ memory, where $|\\\\mathcal{E}|$ is significantly larger for this graph and $d_\\\\text{in} = 768$ as noted above in 1. On the other hand, PNA requires significantly longer (more than 50%) runtime compared to the other models in ogbg-molhiv, consistent with the results in Tables 6 and 7. 
Meanwhile, SIR-GCN requires approximately the same runtime as the other models across both datasets.\\n\\n| Model | ogbn-arxiv | ogbg-molhiv |\\n| :-------: | :----------------: | :----------------: |\\n| GCN | 0.0729s \\u00b1 0.0008s | 0.8029s \\u00b1 0.1171s |\\n| GraphSAGE | 0.1161s \\u00b1 0.0039s | 0.8062s \\u00b1 0.1166s |\\n| GATv2 | 0.1006s \\u00b1 0.0022s | 0.6835s \\u00b1 0.1620s |\\n| GIN | 0.0830s \\u00b1 0.0015s | 0.7574s \\u00b1 0.1620s |\\n| PNA | OOM | 1.2786s \\u00b1 0.0664s |\\n| SIR-GCN | 0.1265s \\u00b1 0.0039s | 0.7345s \\u00b1 0.1300s |\"}", "{\"summary\": \"This paper investigates the representational capacity of Graph Neural Networks (GNNs) with uncountable node features and introduces an MLP-based MPNN named SIR-GCN, which generalizes to several popular GNN architectures. The authors demonstrate that for uncountable node features, it is possible to identify a soft-injective function corresponding to a specific pseudometric that quantifies dissimilarity in the node feature space. They then model this soft injective function using an MLP and design an architecture that maintains both anisotropic and isotropic properties. Experimental results highlight the model\\u2019s superiority across various scenarios, including cases with countable and uncountable node features, as well as graphs exhibiting heterophily.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1.\\tIt is interesting to see how the author tackles the problem of uncountable node features using pseudo metric and soft-injective functions.\\n2.\\tThe designed model has certain flexibility in terms of anisotropic and isotropic properties. 
The model architecture also generalizes easily to some popular GNNs.\\n3.\\tThe experiments show that SIR-GCN performs well on the Dictionary Lookup task and against other baseline models in the benchmarking tests.\", \"weaknesses\": \"1.\\tSince the SIR-GCN can generalize to other GNNs, it would be better if the authors could explain why it is hard for other GNNs to handle the problem of uncountable node features.\\n2.\\tIt would be nice if the authors could explain why there are some missing values in the table.\\n3.\\tFor the graph heterophily experiment, it would be better if the authors could use some real-world datasets with different degrees of heterophily [1].\\n\\n[1] Mao H, Chen Z, Jin W, Han H, Ma Y, Zhao T, Shah N, Tang J. Demystifying structural disparity in graph neural networks: Can one size fit all?. Advances in neural information processing systems, 2024.\", \"questions\": \"1.\\tAccording to the description of GAT, does GAT also preserve both anisotropic and isotropic properties?\\n2.\\tFor Table 4, can GAT or GraphSAGE achieve similar performance with more parameters?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your clarification. It basically addresses my concern. I believe this paper offers some interesting perspectives on the representational capabilities of GNNs, and I will maintain my positive score.\"}", "{\"title\": \"Additional Clarifications\", \"comment\": \"We thank the reviewer for the follow-up feedback.\\n\\n1. We would like to clarify that the novelty of our work is **two-fold**. First, we are the first to provide a **novel theoretical framework** based on *pseudometrics* and *soft-injective* functions to enhance the representational capability of GNNs. 
This theoretical contribution addresses a critical area in GNN research on uncountable node features, providing an **alternative perspective to the multiple aggregators of PNA**. Second, we further provide a detailed discussion bridging this novel theoretical framework into the MPNN framework, resulting in the SIR-GCN, which is **empirically demonstrated to outperform conventional GNNs across several synthetic and benchmark datasets**.\\n\\n2. While GATv2 explores the *anisotropic* and *dynamic* properties of GAT **attention weights**, its original motivation for using attention mechanisms is nevertheless **heuristic**. The authors **do not provide a discussion on the representational capability of GATv2**. Moreover, we also highlight the **limitations of both GAT and GATv2**, attributed to their **linear transformation** of neighborhood features and use of **softmax** (Velickovic et al., 2024), which limits their expressivity. In contrast, our work fundamentally differs in context. In particular, our focus is on applying *anisotropic* and *dynamic* properties directly to the **message function**. This allows SIR-GCN to **effectively handle uncountable node features**, as highlighted in Line 531. Critically, we are the first to provide a **rigorous theoretical justification for this approach**, specifically to **enhance GNN representational capability**. This is complemented by **extensive empirical results** showing the contribution of this novel idea.\\n\\n3. We respectfully disagree with the assessment that performance improvements on real-world datasets are incremental. **SIR-GCN consistently outperforms conventional GNNs** (including GCN, GraphSAGE, GAT, and GIN) by a **significant margin**, as shown in Tables 3, 4, and 5. 
To further put this into perspective, **SIR-GCN also achieves substantial improvements over PNA** in real-world datasets like MNIST, CIFAR10, ZINC, and ogbg-molhiv while employing a **simpler and computationally efficient design**, as highlighted in Appendix C, **further highlighting the utility and novelty of SIR-GCN**.\\n\\n4. We would like to emphasize that the complex frameworks, such as grouped reversible residual connections (Li et al., 2021) and graph stochastic attention (Miao et al., 2022), are introduced simply to provide **concrete steps for future works** to build upon our theoretical and empirical results. This analysis is beyond the scope of the current work, similar to **foundational works** such as GATv2 and PNA, since our primary objective is to **demonstrate the core contributions of our proposed SIR-GCN** without employing additional tricks or techniques, as explicitly stated in Line 381. This ensures a fair evaluation where performance gains are **solely attributed to the key features of SIR-GCN**, in line with the reviewer's earlier remark of **justifying the effect of SIR-GCN**.\\n\\n5. Overall, we firmly believe that our work makes a **significant and novel contribution to GNN research** by providing a **comprehensive theoretical perspective for handling uncountable node features**, supported by empirical validation, and a **practical and efficient SIR-GCN that consistently outperforms conventional GNNs**. Together, these contributions provide a ***new, relevant, and impactful* advancement toward understanding GNN representational capabilities**.\", \"references\": \"Guohao Li, Matthias Muller, Bernard Ghanem, and Vladlen Koltun. Training graph neural networks with 1000 layers. In International Conference on Machine Learning, pp. 6437-6449. PMLR, 2021.\\n\\nSiqi Miao, Mia Liu, and Pan Li. Interpretable and generalizable graph learning via stochastic attention mechanism. In International Conference on Machine Learning, pp. 15524-15543. 
PMLR, 2022.\\n\\nPetar Velickovic, Christos Perivolaropoulos, Federico Barbero, and Razvan Pascanu. softmax is not enough (for sharp out-of-distribution). arXiv preprint arXiv:2410.01104, 2024.\"}", "{\"title\": \"Additional Experiments\", \"comment\": \"7. Based on the reviewer's feedback, we have conducted additional experiments in Appendix D to further highlight the utility and novelty of SIR-GCN as the first MPNN instance to theoretically and empirically justify the use of *anisotropic* and *dynamic* message functions. Specifically, we consider SIR-GCN (*static*), which uses linear messages by setting $\\\\sigma$ as identity and $\\\\boldsymbol{W_R} = \\\\boldsymbol{I}$, and SIR-GCN (*isotropic*), which removes the dependency of messages on center node features by setting $\\\\boldsymbol{W_Q} = \\\\boldsymbol{0}$. Although SIR-GCN achieves lower accuracy on WikiCS compared to the two simpler SIR-GCNs (*static* and *isotropic*), this result is consistent with the dataset's characteristics. As noted by Dwivedi et al. (2023), WikiCS is a single-graph dataset with denser node neighborhoods and shorter average path lengths, which can make more expressive models like SIR-GCN prone to overfitting and oversmoothing. Thus, the simpler SIR-GCNs are naturally less expressive and achieve higher accuracies for this small dataset. In contrast, on larger and more complex datasets such as PATTERN, CLUSTER, MNIST, CIFAR10, and ZINC, SIR-GCN consistently outperforms both the simpler SIR-GCNs and conventional GNNs. This underscores the strong utility of **both** *anisotropic* and *dynamic* message functions in improving GNN representational capability. 
Overall, these additional results highlight the novelty of SIR-GCN and further confirm the theoretical and practical contributions of our work in advancing GNN research.\\n\\n| Model | WikiCS (\\u2191) | PATTERN (\\u2191) | CLUSTER (\\u2191) | MNIST (\\u2191) | CIFAR10 (\\u2191) | ZINC (\\u2193) |\\n| :---------------------- | :------------: | :-----------: | :------------: | :------------: | :------------: | :-------------: |\\n| SIR-GCN (*static*) | 78.52 \\u00b1 0.57 | 85.72 \\u00b1 0.02 | 61.90 \\u00b1 0.25 | 95.65 \\u00b1 0.84 | 50.09 \\u00b1 3.20 | 0.334 \\u00b1 0.014 |\\n| SIR-GCN (*isotropic*) | 78.73 \\u00b1 0.63 | 85.74 \\u00b1 0.03 | 62.60 \\u00b1 0.38 | 97.44 \\u00b1 0.11 | 68.88 \\u00b1 0.27 | 0.281 \\u00b1 0.024 |\\n| SIR-GCN | 78.06 \\u00b1 0.66 | 85.75 \\u00b1 0.03 | 63.35 \\u00b1 0.19 | 97.90 \\u00b1 0.08 | 71.98 \\u00b1 0.40 | 0.278 \\u00b1 0.024 |\", \"references\": \"Vijay Prakash Dwivedi, Chaitanya K Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. Journal of Machine Learning Research, 24 (43):1-48, 2023.\"}", "{\"comment\": \"Thank you for the response. I tend to maintain my positive score.\"}", "{\"title\": \"Rebuttal by Authors (cont.)\", \"comment\": \"5. The horizontal and vertical axes of Fig. 2 represent the scalar neighborhood node features $\\\\boldsymbol{h_{v_1}}$ and $\\\\boldsymbol{h_{v_2}}$ of node $u$, where $v_1$ and $v_2$ are its neighbors. Line 193 provides context where these features represent zero-mean scores for anomaly detection. Fig. 2 illustrates how these features are transformed and aggregated using different *soft-injective* message functions. As clarified in Line 192, for arbitrary *pseudometrics* $d$, the corresponding *soft-injective* message function $g$ must be *dynamic* or non-linear. This insight motivates the use of an MLP to model the *dynamic* nature of the *soft-injective* message function, similar to Brody et al. (2021). Figs. 
2(c) and 2(d) then highlight how MLPs may be used to model *anisotropic* and *dynamic* (*soft-injective*) message functions within the SIR-GCN model, aligning with the theoretical motivation discussed in the paper.\\n\\n6. The results from the DictionaryLookup and GraphHeterophily synthetic datasets serve as key illustrative examples of the practical utility and novelty of SIR-GCN. In the DictionaryLookup task, both GATv2 and SIR-GCN achieved perfect accuracy, highlighting the utility of a *dynamic* attentional or relational mechanism in capturing the relationships between *query* and *key* nodes, as explained in Line 413. In contrast, the GraphHeterophily dataset clearly exposes the limitations of existing GNNs (including GATv2), as only SIR-GCN achieved near-zero MSE loss while all other GNNs obtained large errors, as detailed above in 3. This dataset thus underscores the utility and novelty of SIR-GCN with its *anisotropic* and *dynamic* message functions (in contrast to attention mechanism).\\n\\nFurthermore, the results on benchmark datasets further highlight the superior performance of SIR-GCN over existing GNNs in more complex, real-world problems across various domains. In these evaluations, the SIR-GCN models implemented follow standard model design for GNNs, as outlined in Appendix B2. Specifically, we directly replace the GNN component (from GCN, GraphSAGE, GAT, GIN, and PNA) with SIR-GCN. Since we focused on this single replacement, performing an ablation study is neither appropriate nor feasible, similar to how no ablation study was conducted by Brody et al. (2021) when GAT was replaced with GATv2. Nevertheless, Section 4 provides a detailed mathematical discussion of how modifying specific parameters in SIR-GCN may recover conventional GNNs, whose performance is already included in our results. Hence, the reported results and performance improvements are solely attributed to the novel aspects of SIR-GCN. 
We are open to suggestions for further improving the results section.\\n\\nFinally, we emphasize that SIR-GCN can be easily integrated into more complex frameworks, such as grouped reversible residual connections (Li et al., 2021) and graph stochastic attention (Miao et al., 2022), to further improve performance. Previous works, particularly in Li et al. (2021) and Miao et al. (2022), have demonstrated that any GNN backbone, such as GCN, GraphSAGE, GAT, GIN, and PNA, can be seamlessly and easily incorporated into their frameworks to employ additional/advanced techniques for enhanced performance. While this also holds for SIR-GCN, exploring such integrations with SIR-GCN is beyond the scope of this paper and left for future works as explained in the conclusion, as it primarily focuses on introducing the key contributions and foundations of SIR-GCN.\", \"references\": \"Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.\\n\\nGuohao Li, Matthias Muller, Bernard Ghanem, and Vladlen Koltun. Training graph neural networks with 1000 layers. In International Conference on Machine Learning, pp. 6437-6449. PMLR, 2021.\\n\\nSiqi Miao, Mia Liu, and Pan Li. Interpretable and generalizable graph learning via stochastic attention mechanism. In International Conference on Machine Learning, pp. 15524-15543. PMLR, 2022.\"}", "{\"comment\": \"Thanks for the feedback from the authors. Although their comments alleviate some of my concerns, they do not clarify my concerns about the novelty. I also believe it is very similar to GATv2. GATv2 explores anisotropic and dynamic functions on edge weight function, while this paper explores them on the message function. Besides, the performance improvement on real-world datasets is incremental and the extension to complex methods is not provided. 
Thus, I tend to keep my ratings.\"}", "{\"title\": \"Summary of Rebuttal Discussions\", \"comment\": \"We sincerely thank the reviewers for their time and valuable feedback!\\n\\nIn summary, our paper introduces a novel theoretical framework based on *pseudometrics* and *soft-injective* functions for understanding the representational capabilities of GNNs, with a specific focus on uncountable node features. Through rigorous theoretical analysis, we highlight that message functions in the MPNN framework must be *anisotropic* and *dynamic*. We then translate this theoretical insight into our proposed SIR-GCN, which is the first MPNN instance to satisfy these properties. Empirically, we also demonstrate how this key feature allows SIR-GCN to generalize and consistently outperform existing foundational GNNs, including GCN, GraphSAGE, GAT, GIN, and PNA, across various synthetic and benchmark datasets spanning diverse domains. Our work thus provides new, relevant, and impactful theoretical and empirical advancements toward understanding GNN representational capabilities.\\n\\nNotably, our work fundamentally differs from Brody et al. (2021) by offering a comprehensive theoretical analysis of the broader representational capabilities of GNNs, extending beyond the GAT attention mechanism investigated by Brody et al. (2021). While GATv2 applies *anisotropic* and *dynamic* functions on its attention mechanism, its messages nevertheless remain linear, potentially limiting its expressivity. In contrast, motivated by our novel theoretical results, the *anisotropic* and *dynamic* messages of SIR-GCN are explicitly designed to boost GNN representational capability. Similarly, while Corso et al. (2020) also examined the representational capabilities of GNNs with uncountable node features, our theoretical framework is fundamentally distinct in being the first to *softly* relax the injective and metric requirements of prior works. 
This approach allows SIR-GCN to achieve computational efficiency with a single aggregator while still outperforming PNA and GATv2 despite its simple design. These key differences in scope and approach underscore the novelty and significance of our work.\\n\\nIn response to Reviewer NmsR, we would also like to clarify that, consistent with prominent foundational works in GNN such as Xu et al. (2018), Corso et al. (2020), and Brody et al. (2021), our foundational paper also focuses on introducing novel theoretical insights and developing a foundational GNN model within the MPNN framework. Extending these theoretical results to more complex frameworks, while important, involves significant additional theoretical analysis and experimentation that warrants a comprehensive discussion in a separate and dedicated study, similar to how separate and subsequent papers extended the results of Xu et al. (2018) to higher-order WL tests. Furthermore, these foundational GNN works also primarily compared their proposed models (*e.g.*, GIN, GATv2, PNA) to other foundational GNNs (*e.g.*, GCN, GraphSAGE, GAT) to isolate and highlight performance improvements solely attributed to the key features of their model. Following these works, we only directly compare SIR-GCN to existing foundational GNNs but do not extend the comparison to state-of-the-art models, as these incorporate additional techniques that significantly enhance their performance, making such a comparison neither fair nor meaningful. This experimental analysis would also require substantial model tuning and experimentation that also deserves a separate study to ensure rigor. Nonetheless, we emphasize that our work, in its current form, already makes a significant contribution to the GNN literature by addressing the important problem of uncountable node features and providing novel theoretical and empirical insights. 
Furthermore, as a foundational work, it already includes the key discussions and comparisons standard in prominent foundational GNN works.\", \"references\": \"Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.\\n\\nGabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velickovic. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33:13260-13271, 2020.\\n\\nKeyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We thank the reviewer for the constructive insights!\\n\\n1. We would first like to clarify that our work is not an extension of Corso et al. (2020) but rather a novel approach to handling uncountable node features within the MPNN framework. Unlike PNA whose key features are multiple aggregators and scalers, SIR-GCN only employs a single aggregator. Consequently, it is also computationally efficient, requiring only an activation function (linear complexity) along edges, unlike PNA, which requires a full linear layer (quadratic complexity) as shown in Table 8 of Appendix C. Despite its lower computational requirement, SIR-GCN still achieves superior performance, consistently outperforming PNA across all benchmarks spanning both countable and uncountable node features, highlighting its efficiency, expressivity, and practical utility. We further note that unlike PNA whose *anisotropic* nature is simply a direct consequence of applying its main theoretical result of multiple aggregators and scalers within the MPNN framework (Section 2.3 of Corso et al., 2020), our work presents a rigorous theoretical foundation for the *anisotropic* nature of SIR-GCN in Line 216, highlighting its novel contribution to GNN research.\\n\\nMoreover, while Brody et al. 
(2021) explored *anisotropic* and *dynamic* functions in the context of GAT attention mechanisms, our work is the first to apply these principles specifically to message functions within the broader MPNN framework. This work contributes to GNN literature by providing a rigorous theoretical foundation (Lines 192 and 216) and extensive empirical results on how this key innovation enables SIR-GCN to capture complex, nuanced relationships between pairs of neighboring nodes prior to aggregation, allowing it to better handle uncountable node features. The significance of this contribution is evident in the experimental results, particularly in Table 3 where SIR-GCN consistently outperforms all other GNNs (including GAT) with the same parameter budget. This is also evident in Table 2 where SIR-GCN successfully accomplished the task when all other GNNs failed, as elaborated in 3 below.\\n\\nOverall, our work offers a novel perspective on the representational capability of GNNs, distinct from Corso et al. (2020) and Brody et al. (2021), especially in the problem of uncountable node features, by demonstrating that using only a single aggregator can already substantially improve the representational capability of GNNs. The SIR-GCN, through its use of *anisotropic* and *dynamic* message functions within the MPNN framework, introduces a fundamentally novel approach to this problem, distinguishing it from existing models like PNA and GATv2 whose specific utility and novelty differ from ours. Notably, the unique design of SIR-GCN directly addresses several limitations of existing GNNs, as demonstrated by illustrative examples on synthetic datasets and model performance on benchmark datasets. These findings establish SIR-GCN as a significant and novel contribution to advancing GNN research.\\n\\n2. 
The connection between the motivation and the proposed SIR-GCN is firmly established through the theoretical foundation provided by *pseudometrics* and *soft-injective* functions. Corollary 1 guarantees the existence of a *soft-injective* hash function $G$ and *soft-injective* feature map $g$ given a *pseudometric* $d$ on $\\\\mathcal{H}$ and *pseudometric* $D$ on bounded equinumerous *multisets* of $\\\\mathcal{H}$. From this result, two necessary properties of the *soft-injective* message function $g$ emerge, as elaborated in Line 189 onward, which directly inform the design of SIR-GCN. As clarified in Line 192, for arbitrary *pseudometrics* $d$, the corresponding *soft-injective* message function $g$ must be *dynamic* or non-linear. This insight motivates the use of an MLP to model the *dynamic* nature of $g$, similar to Brody et al. (2021). From Line 216, we further show that the *soft-injective* message function $g$ must also adapt to each node independently. This insight then motivates integrating the features of the center node into $g$, consequently making it *anisotropic*, to avoid the impracticality of designing distinct message functions for each node in large graphs. Combining these two properties, SIR-GCN is the first MPNN instance to utilize a *soft-injective* message function that is both *anisotropic* and *dynamic*. This connection between the theoretical motivation and the architectural design ensures that SIR-GCN is not just empirically performant but also rigorously grounded in theory, demonstrating the significance and novelty of our approach.\", \"references\": \"Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.\\n\\nGabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velickovic. Principal neighbourhood aggregation for graph nets. 
Advances in Neural Information Processing Systems, 33:13260-13271, 2020.\"}", "{\"title\": \"Alignment with Prominent Foundational Works in GNN\", \"comment\": \"We thank the reviewer for the additional feedback. We would like to clarify how our work aligns with prominent foundational works in GNN.\\n\\n1. The theoretical discussions by Brody et al. (2021) only focus on the **representational capability of the GAT attention mechanism**. Specifically, they only analyzed the ***difference* in representational capability between *static* (Theorem 1) and *dynamic* (Theorem 2) GAT attention**, as explicitly written in their theorem statements. In contrast, our theoretical discussions focus on the **broader representational capability of GNNs with a specific focus on uncountable node features** which is **not addressed by Brody et al. (2021)**.\\n\\n2. In **Xu et al. (2018), Corso et al. (2020), Brody et al. (2021)**, and other prominent foundational works in GNN, they only compared the performance of their proposed *foundational models* (GIN, PNA, GATv2) to **other *foundational GNNs*** (GCN, GraphSAGE, GAT). Notably, these ***foundational models* are not designed to explicitly achieve state-of-the-art (SOTA) performance**. Hence, these works do not compare the performance of their model to SOTA models to ensure a **fair evaluation** where performance improvements are solely attributed to the **key features of the message-passing architecture**. Following these *foundational works*, our results also compare the performance of our proposed *foundational model* SIR-GCN only to *foundational GNNs* (GCN, GraphSAGE, GAT, GATv2, GIN, PNA) to highlight the **performance improvement attributed to the key message-passing features of SIR-GCN**. These results also underscore the practical utility of ***replacing conventional foundational GNNs* with SIR-GCN in existing models**. 
Furthermore, given the ***foundational* nature of SIR-GCN which is (similar to other *foundational models*) not designed to explicitly achieve SOTA**, a direct comparison of performance relative to SOTA would **not be meaningful nor comparable** and would **require *significant* model architecture tuning** since these models employ **several additional techniques that significantly increase their performance**. \\n\\n3. **Xu et al. (2018), Corso et al. (2020), Brody et al. (2021)**, and other prominent foundational works in GNN only provide a **theoretical analysis of their key contributions based on the MPNN framework**. It is ***subsequent and separate* works** that extended their results to other complex frameworks as this **required additional analysis that is *beyond the scope of the original paper***. Similar to these *foundational works*, our *foundational paper* also presents a **comprehensive theoretical framework based on MPNN**. Extending our results to other frameworks also requires ***significant* additional analysis** that is ***beyond the scope of our paper*** and warrants a **comprehensive discussion in a *separate* and dedicated work**. Nevertheless, our novel theoretical framework based on MPNN already provides a ***new, relevant, and impactful* advancement toward understanding GNN representational capabilities**.\\n\\n4. Overall, our work, *in its current form*, already provides a **significant and novel contribution to GNN literature** on the problem of uncountable node features. Furthermore, as a ***foundational paper*** introducing the *foundational GNN model* SIR-GCN, it also already **includes the standard and key discussions and comparisons** in prominent foundational works in GNN.\", \"references\": \"Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? arXiv preprint arXiv:2105.14491, 2021.\\n\\nGabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Velickovic. 
Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems, 33:13260\\u201313271, 2020.\\n\\nKeyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.\"}" ] }
9RFocgIccP
Multi-Reward as Condition for Instruction-based Image Editing
[ "Xin Gu", "Ming Li", "Libo Zhang", "Fan Chen", "Longyin Wen", "Tiejian Luo", "Sijie Zhu" ]
High-quality training triplets (instruction, original image, edited image) are essential for instruction-based image editing. Predominant training datasets (e.g., InsPix2Pix) are created using text-to-image generative models (e.g., Stable Diffusion, DALL-E) that are not trained for image editing. Accordingly, these datasets suffer from inaccurate instruction following, poor detail preservation, and generation artifacts. In this paper, we propose to address the training data quality issue with multi-perspective reward data instead of refining the ground-truth image quality. 1) We first design a quantitative metric system based on a best-in-class LVLM (Large Vision Language Model), i.e., GPT-4o in our case, to evaluate the generation quality from 3 perspectives, namely, instruction following, detail preserving, and generation quality. For each perspective, we collect a quantitative score in $0\sim 5$ and descriptive text feedback on the specific failure points in ground-truth edited images, resulting in a high-quality editing reward dataset, i.e., RewardEdit20K. 2) We further propose a novel training framework to seamlessly integrate the metric output, regarded as multi-reward, into editing models to learn from the imperfect training triplets. During training, the reward scores and text descriptions are encoded as embeddings and fed into both the latent space and the U-Net of the editing models as auxiliary conditions. During inference, we set these additional conditions to the highest score with no text description of failure points, aiming for the best generation outcome. 3) We also build a challenging evaluation benchmark with real-world images/photos and diverse editing instructions, named Real-Edit. Experiments indicate that our multi-reward conditioned model outperforms its no-reward counterpart on two popular editing pipelines, i.e., InsPix2Pix and SmartEdit. Code is released at https://github.com/bytedance/Multi-Reward-Editing.
[ "Instruction-based Image Editing" ]
Accept (Poster)
https://openreview.net/pdf?id=9RFocgIccP
https://openreview.net/forum?id=9RFocgIccP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xQSRytSl3u", "qgffhHtGcl", "mdR0S0234N", "ldLH7Fsdhm", "l4USDFNLtv", "fLNfXO1Djd", "cGxgOsUrcz", "c3qncvFnaj", "WhzXWjRCUo", "W21AiudD4U", "UHeIpW77EZ", "NdyIsX7VCt", "MeAuwV4nGW", "MDe4jJSHtg", "C2e9Qu4PMf", "9SzemJ0BLa", "9GyQkUSaAb", "5uqXZO7WzI", "57gkh2bmHP", "3Nf0fvkFnz", "3GhvBRrvGP" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732211360378, 1732209295455, 1732602814939, 1732597603332, 1730562529138, 1732563046046, 1730704904557, 1732590816571, 1732210394106, 1737523441038, 1730659877114, 1732601479131, 1732590463331, 1734426646937, 1732537150052, 1732211917528, 1732209832977, 1732587447302, 1732517354728, 1730703585084, 1732208895028 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_sJ5c" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_sJ5c" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_X16E" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_RGo1" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_eTqX" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_eTqX" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Area_Chair_GcaX" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_sJ5c" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ], [ "ICLR.cc/2025/Conference/Submission1218/Reviewer_X16E" ], [ "ICLR.cc/2025/Conference/Submission1218/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal (part 1/2)\", \"comment\": \"We appreciate the reviewer for careful comments and provide our responses below. All changes in the revision are marked in red.\\n\\n> **Q1**: Although the use of GPT-4o for evaluation is efficient in practice, it lacks guaranteed consistency. The evaluation results could be influenced by underlying changes in GPT-4o, rendering the results unreliable. An alternative solution to this issue could be to train an independent evaluation model.\\n\\n**A1**: Thanks for this comment. The version of GPT-4o we used is '2024-08-06'. After multiple (5 times) tests, we found that the fluctuations in the accuracy of the three metrics are within **1%**, and the score fluctuations are within **0.05** on Real-Edit benchmark. This demonstrates the stability of GPT-4o. However, we fully agree with the reviewer's suggestion to train an independent evaluation model. In the future, we will explore fine-tuning a specialized evaluation model based on existing open-source multimodal large model.\\n\\nWe have included the above discussion in *Section D.3* of the appendix in revision.\\n\\n\\n> **Q2**: The RewardEdit-20K dataset continues to utilize images generated by InstructPix2Pix's model. For some challenging editing samples, scores cannot exceed 2 points. Even if the reward model can enhance performance in other samples, can it generate better results for these difficult cases generated by InsPix2Pix?\\n\\n**A2**: Thank you for your insightful comment. 
To investigate whether our reward model can generate better edited images for challenging editing samples in InsPix2Pix, we first randomly selected 500 samples from RewardEdit-20K with scores not exceeding 2. Then, we used our reward model to generate edited images based on the original images and instructions of these samples, and scored them using GPT-4o. The experimental results are shown in the table below. \\\"Original\\\" represents the average scores of the original edited images of these samples across three metrics, while \\\"Ours\\\" represents the scores of the edited images generated by our reward model. From the table, it can be observed that the edited images generated by our method significantly outperform the original edited images on all three metrics, indicating that our method can generate better results for these difficult cases.\\n\\n**Tab. C: Comparison of edited images for challenging samples in InsPix2Pix.**\\n\\n| &nbsp;&nbsp;Method&nbsp;&nbsp; | &nbsp;&nbsp;Following&nbsp;&nbsp; | &nbsp;&nbsp;Preserving&nbsp;&nbsp; | &nbsp;&nbsp;Quality&nbsp;&nbsp; |\\n|:----------:|:---------:|:----------:|:-------:|\\n| Original | 1.15 | 1.69 | 1.99 |\\n| Ours | 2.92 | 4.10 | 3.68 |\\n\\n\\nWe have integrated the above results and analysis in *Section C.3* of the appendix in revision.\\n\\n\\n> **Q3**: There is a lack of detailed analysis on the individual impact of each perspective reward on the improvement of editing models. Including such insights in an ablation study could strengthen the paper's contribution.\\n\\n**A3**: Thank you for your suggestion. We agree that analyzing the impact of each perspective reward is beneficial and we conduct additional experiments for training on each perspective separately. 
As shown in the table below, the following score reached 3.40 with only the instruction following reward, the preserving score reached 3.54 with only the detail preserving reward, and the quality score reached 3.95 with only the generation quality reward. These results demonstrate the effectiveness of each perspective reward.\\n\\n\\n**Tab. D: Ablation of each perspective reward. 'IF', 'DP' and 'GQ' are instruction following reward, detail preserving reward and generation quality reward.**\\n\\n| &nbsp;&nbsp; IF &nbsp;&nbsp; | &nbsp;&nbsp; DP &nbsp;&nbsp; | &nbsp;&nbsp; GQ &nbsp;&nbsp; | &nbsp;&nbsp;Following &nbsp;&nbsp; | &nbsp;&nbsp;Preserving&nbsp;&nbsp; | &nbsp;&nbsp;Quality&nbsp;&nbsp; |\\n|:----:|:----:|:----:|:---------:|:----------:|:-------:|\\n| \\u2713 | | | 3.40 | 3.25 | 3.72 |\\n| | \\u2713 | | 3.23 | 3.54 | 4.00 |\\n| | | \\u2713 | 3.20 | 3.23 | 3.95 |\\n| \\u2713 | \\u2713 | \\u2713 | 3.39 | 3.43 | 3.80 |\\n\\n\\nWe have included the above results in *Section C.4* of the appendix in revision to make our contributions clearer.\"}", "{\"title\": \"Rebuttal (part 2/2)\", \"comment\": \"> **Q2**: While experiments have shown that the introduction of additional reward text can improve image editing, unfortunately, there is no analysis of why introducing negative text in this way could help guide the diffusion process towards more effective editing.\\n\\n**A2**: Thanks for this comment. We introduced additional reward information because the ground truth in existing image editing datasets is inaccurate (see lines 92-104). To rectify these inaccuracies, we incorporated reward scores and text (examples in *Section B* of the Appendix ). The reward score is a quantitative evaluation reflecting the overall quality. Since the same reward score can correspond to different types of errors, we further included reward text, which provides more detailed error information. 
Specifically, the negative text introduced can be seen as a **correction** to the ground truth, which means that the original ground truth plus the negative text forms the true ground truth. To ensure that the negative text serves as a guide, we integrate it into the diffusion process as an additional condition.\\n\\nWe have included the above analysis and discussion in *Section D.1* of the appendix in revision.\\n\\n\\n> **Q3**: Are there limitations to the introduction of the dataset REWARDEDIT-20K. Different instructional editing models may have been obtained by tuning on different training sets [1][2]. While this paper achieved an advantage on Ins-Pix2Pix and its improved version SmartEdit, is it still desirable to train other models using REWARDEDIT-20K obtained from the Ins-Pix2Pix training set? \\n[1] Instructdiffusion: A generalist modeling interface for vision tasks, CVPR 2024 \\n[2] SEED-Data-Edit Technical Report: A Hybrid Dataset for Instructional Image Editing, arXiv 2024\\n\\n**A3**: Thank you for your thoughtful feedback. We proposed REWARDEDIT-20K based on Ins-Pix2Pix. Currently, most image editing models use the Ins-Pix2Pix dataset for training, including the Instructdiffusion[1] and SEED-Data-Edit[2] mentioned by the reviewers. Ins-Pix2Pix has become the most widely used dataset in the image editing field. Recent methods, such as SmartEdit, Instructdiffusion, and SEED-Data-Edit, typically use multiple editing datasets for mixed training. Our improvements in SmartEdit demonstrate that our method is also effective for models trained with mixed datasets. In the future, we will apply the proposed reward data generation method to other datasets to see whether it brings further improvement.\\n\\nWe have added the above discussion to *Section D.2* of the appendix in the revision.\"}", "{\"comment\": \"We are happy to hear that our response addressed your concerns. 
We sincerely appreciate the time and effort you have dedicated to reviewing our work.\"}", "{\"comment\": \"Thanks for the authors' follow-up response. The response resolves my concerns, and the revised version is now more readable. Given this, I will raise my rating based on the revised version.\"}", "{\"summary\": \"This paper introduces a dataset and benchmark for assessing image editing performance across multiple dimensions. The authors also utilize these multi-dimensional scores as rewards to enhance the effectiveness of image editing models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper underscores a valuable methodology for evaluating image editing datasets, which is highly beneficial in practical applications.\\n2. It proposes the use of evaluation scores to boost the performance of image editing models, which is novel in image editing.\", \"weaknesses\": \"1. Although the use of GPT-4o for evaluation is efficient in practice, it lacks guaranteed consistency. The evaluation results could be influenced by underlying changes in GPT-4o, rendering the results unreliable. An alternative solution to this issue could be to train an independent evaluation model.\\n\\n2. The RewardEdit-20K dataset continues to utilize images generated by InstructPix2Pix's model. For some challenging editing samples, scores cannot exceed 2 points. Even if the reward model can enhance performance in other samples, can it generate better results for these difficult cases generated by InsPix2Pix?\\n\\n3. There is a lack of detailed analysis on the individual impact of each perspective reward on the improvement of editing models. Including such insights in an ablation study could strengthen the paper's contribution.\\n\\n4. 
The practice of incorporating additional \\\"reward scores\\\" to enhance image generation quality is already established within the stable diffusion community (see https://civitai.com/articles/4248/what-is-score9-and-how-to-use-it-in-pony-diffusion). A more thorough discussion linking the proposed multi-reward framework to existing methodologies would enrich the manuscript's contribution.\", \"questions\": \"1. Could you elaborate on the dimensionality of $c_R$ and $Linear(c_R)$, and explain how lines 281 and 287 can be implemented, given that both use the same notation of $Linear(c_R)$?\\n2. In line 369, is there a specific reason for training at a resolution of 256? Wouldn't training and inference at 512 yield better results?\\n3. Are the trained modules shared or trained separately between InstructPix2Pix and smartEdit?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your rebuttal, most of the doubts have been explained. I'll maintain my score for acceptance.\"}", "{\"summary\": \"The authors introduce a high-quality editing reward dataset to address the problem of editing effectiveness due to training data quality issues with instruction-based image editing methods. This dataset contains assessment scores and reward texts, which are introduced as additional conditions to the instruction-based editing model to improve the model's ability. The authors also introduce an evaluation set to assess the quality of the instruction-based image editing model from multiple dimensions. 
Qualitatively and quantitatively, it is demonstrated that the present method effectively improves the quality of the instructional editing model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tA novel reward-based instruction editing framework that introduces evaluation scores for VLLM as well as rewarding textual feedback to improve the capability of instruction editing models.\\n2.\\tThe creation of Real-Edit provides a standardized approach to evaluate instructional editing methods in different scenarios.\\n3.\\tThe method shows superior performance in both quantitative metrics and qualitative results, indicating robust editing capabilities.\", \"weaknesses\": \"1. Both the quantitative assessment in Table 1 and the training strategy are based on the same scoring strategy for the GPT-4o, making the evaluation of the results overly dependent on the a priori of the VLLM and making it difficult to objectively validate the strengths of this method. One solution to this dilemma is to use other assessment metrics (e.g., CLIP scores) on the three dimensions to be adopted for comparison with other methods.\\n2. While experiments have shown that the introduction of additional reward text can improve image editing, unfortunately, there is no analysis of why introducing negative text in this way could help guide the diffusion process towards more effective editing.\\n3. Are there limitations to the introduction of the dataset REWARDEDIT-20K. Different instructional editing models may have been obtained by tuning on different training sets [1][2]. 
While this paper achieved an advantage on Ins-Pix2Pix and its improved version SmartEdit, is it still desirable to train other models using REWARDEDIT-20K obtained from the Ins-Pix2Pix training set?\\n[1] Instructdiffusion: A generalist modeling interface for vision tasks, CVPR 2024\\n[2] SEED-Data-Edit Technical Report: A Hybrid Dataset for Instructional Image Editing, arXiv 2024\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our work. Your insightful comments and constructive feedback are highly valued.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for careful comments on our work and provide our responses below. All changes in the revision are marked in red.\\n\\n> **Q1**: The paper lacks a comparison with RL in T2I methods like DPO-Diffusion[1]. If I understand correctly, I believe that the Multi-Reward Framework is conceptually similar to methods like DPO-Diffusion. Therefore, I think it is reasonable and necessary to articulate the comparisons and distinctions between these approaches, especially the method difference. \\n[1] Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., Ermon, S., Xiong, C., Joty, S., \\\\& Naik, N. (2023). Diffusion Model Alignment Using Direct Preference Optimization. arXiv preprint arXiv:2311.12908.\\n\\n**A1**: Thanks for the insightful comment. We agree with the reviewer's suggestion that we should clarify the differences and connections between our method and methods like DPO-Diffusion. Both DPO-Diffusion and our proposed Multi-Reward approach fundamentally aim to optimize the quality of generated images through **feedback mechanisms**. 
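The review summary above notes that the assessment scores and reward texts are introduced as additional conditions to the instruction-based editing model. As a toy illustration of one way such multi-perspective feedback could be serialized into a single conditioning string before being embedded, here is a hypothetical sketch; the function and field names are assumptions for illustration, not the paper's actual implementation:

```python
def serialize_rewards(scores: dict, reward_text: str) -> str:
    """Pack per-perspective reward scores (e.g., 1-5) and free-form
    reward text into one conditioning string; a text encoder would
    then embed this string as the extra reward condition."""
    parts = [f"{name} score: {value}" for name, value in sorted(scores.items())]
    return "; ".join(parts) + ". Feedback: " + reward_text

cond = serialize_rewards(
    {"instruction_following": 5, "detail_preserving": 4, "generation_quality": 5},
    "The background color is unintentionally changed.",
)
print(cond)
```

In practice the reward signal would be encoded rather than kept as raw text, but this shows how numeric scores and textual feedback can be combined into a single condition.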
The main differences between our Multi-Reward and DPO-Diffusion are as follows: \\n**(1)** Granularity of feedback. DPO-Diffusion's preference feedback is expressed as relative preferences, such as `Image A is better than image B', therefore the feedback signal only has two possible states. In contrast, our Multi-Reward uses absolute numerical values and detailed text description for feedback signals (For examples, see Appendix *Section B*). \\n**(2)** Applicability of feedback. DPO-Diffusion is only applicable to situations with a single feedback value, whereas our approach can simultaneously incorporate multi-perspective feedback information, including instruction following, detail preserving and generation quality. \\n**(3)** Training stability. We directly use feedback information as an additional condition while still employing the original Diffusion Loss. This approach is simple and effective, avoiding the training instability that DPO can introduce to the diffusion model.\\n\\nWe have integrated the above detailed comparison into *Section D.4* of the appendix in the revision.\\n\\n\\n> **Q2**: The article lacks some novelty. Of course, high-quality and abundant data can effectively enhance model performance, so I am uncertain about the extent to which the proposed Multi-Reward Framework improves upon traditional methods. For example, in Table 1, additional editing data (0.02M) was used for training. I believe it is fair to compare it with the exact same data using the same baseline method; otherwise, it is difficult to convince me whether the improvement comes from the high-quality data or the architecture. Perhaps experiments on the unprocessed REWARDEDIT-20K data could be added for comparison.\\n\\n**A2**: Thank you for your thoughtful feedback. **To clarify**, we are not working on using MLLMs to filter high-quality data from InsPix2Pix. The 20K samples in our RewardEdit-20K dataset are *randomly sampled* from InsPix2Pix. 
Our motivation is that constructing a perfect image editing dataset is challenging, and the ground truth in existing image editing datasets often contains issues. Therefore, we propose using multi-perspective rewards to **rectify the inaccurate supervision**. To more fairly demonstrate the role of multi-perspective rewards, we conducted the ablation experiments shown in the table below. The experimental results indicate that, with the same data, using multi-perspective rewards significantly improves performance compared to the baseline, demonstrating the effectiveness of multi-perspective rewards.\\n\\n**Tab. B: Ablation study of editing data.**\\n| &nbsp; Architecture&nbsp; | &nbsp; Edit Data &nbsp; | &nbsp; Following &nbsp;| &nbsp;Preserving&nbsp; | &nbsp;Quality&nbsp; |\\n|:----------:|:---------:|:---------:|:----------:|:-------:|\\n| Baseline | 0.30M | 2.77 | 2.59 | 3.15 |\\n| Baseline | 0.32M | 2.90 | 2.88 | 3.52 |\\n| Ours | 0.32M | 3.39 | 3.43 | 3.80 |\\n\\nWe have included the above clarification and results in *Section C.2* of the appendix in the revision to make our contributions clearer.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper identifies significant issues related to data quality in the current Image Edit dataset. It proposes a pipeline for data cleaning and scoring using MLLM and introduces a cleaned dataset along with a training framework designed for multi-reward scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"This paper discusses the issues present in the current dataset and utilizes MLLM for data cleaning. Through experiments and comparisons, it demonstrates the effectiveness of the cleaned dataset and the new training methods.\", \"Good writing and detailed experiments make this paper compelling.\"], \"weaknesses\": [\"W.1: The paper lacks a comparison with RL in T2I methods like DPO-Diffusion[1]. 
If I understand correctly, I believe that the Multi-Reward Framework is conceptually similar to methods like DPO-Diffusion. Therefore, I think it is reasonable and necessary to articulate the comparisons and distinctions between these approaches, especially the method difference.\", \"W.2: The article lacks some novelty. Of course, high-quality and abundant data can effectively enhance model performance, so I am uncertain about the extent to which the proposed Multi-Reward Framework improves upon traditional methods. For example, in Table 1, additional editing data (0.02M) was used for training. I believe it is fair to compare it with the exact same data using the same baseline method; otherwise, it is difficult to convince me whether the improvement comes from the high-quality data or the architecture. Perhaps experiments on the unprocessed REWARDEDIT-20K data could be added for comparison.\", \"[1] Wallace, B., Dang, M., Rafailov, R., Zhou, L., Lou, A., Purushwalkam, S., Ermon, S., Xiong, C., Joty, S., & Naik, N. (2023). *Diffusion Model Alignment Using Direct Preference Optimization*. arXiv preprint arXiv:2311.12908.\"], \"questions\": \"See above, especially W.1\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your reply, I will raise my rate.\"}", "{\"comment\": \"Dear Reviewer,\\nThank you once again for your time and effort in reviewing our work. We greatly appreciate the thoughtful comments and constructive feedback you have provided.\"}", "{\"metareview\": \"This paper aims to use multi-view reward data as an additional condition to address the problem of editing effectiveness caused by the issues of training data quality. To achieve this, the authors introduce a high-quality editing reward dataset and propose a benchmark to evaluate the quality of the instruction-based image editing model from multiple dimensions. 
Qualitative and quantitative evaluations are presented and demonstrate the effectiveness of the proposed method. All reviewers gave positive rating scores. Based on the above considerations, I recommend accepting this manuscript.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided rebuttals for each reviewer, and most reviewers posted responses. During the rebuttal period, Reviewer eTqX and Reviewer sJ5c raised their rating scores after their concerns were addressed. Reviewer RGo1 and Reviewer X16E kept their positive rating scores.\"}", "{\"comment\": \"Thank you for your rebuttal. However, there are still some concerns that need to be addressed.\\n## About A3\\nThe ablation study in A3 is lacking in certain aspects. Specifically, I have two questions:\\n1. The three evaluation metrics for each individual reward seem similar. Please analyze why the metrics increase even when the reward is not related to the specific metric.\\n2. Why is the evaluation of the full model inferior to that obtained by adding individual rewards?\\n## About A5\\nThere appears to be a bug between Equation line 281 and Figure 5. From line 281, the latent behaves as query only, then the resulting output of the cross attention $Z''$ eliminates the detailed information of the source image. However in Figure 5, $Z''$ is directly sent to the diffusion model.
Per my understanding, the structure of Figure 5 is not feasible, as the diffusion model can not touch the details of the input source image, making it unable to perform image editing.\\n\\n# Suggestion\\nIt is highly recommended to label each equation with a numerical label, as this is a standard practice.\\n\\n# Summary\\nI am inclined to reject this paper as the readability could be significantly enhanced.\"}", "{\"title\": \"Rebuttal (part 2/2)\", \"comment\": \"> **Q4**: The practice of incorporating additional \\\"reward scores\\\" to enhance image generation quality is already established within the stable diffusion community (see https://civitai.com/articles/4248/what-is-score9-and-how-to-use-it-in-pony-diffusion). A more thorough discussion linking the proposed multi-reward framework to existing methodologies would enrich the manuscript's contribution.\\n\\n**A4**: Thanks for this comment. We discussed our method in relation to existing reward-based methods in *Section 2.2* of the Related Work. In the Text-to-Image, some work has explored the use of reward scores to enhance the quality of generated images. For example, Pony Diffusion, as mentioned by the reviewer, employs a CLIP-based aesthetic ranking method to generate reward scores. Different from these works, our reward information comes from GPT-4o, which includes not only reward scores but also reward text. In addition, we focus more on the role of reward information in the image editing domain rather than text-to-image generation.\\n\\nWe have added the discussions to the Related Work of the revision.\\n\\n\\n> **Q5**: Could you elaborate on the dimensionality of $c_R$ and $Linear(c_R)$, and explain how lines 281 and 287 can be implemented, given that both use the same notation of $Linear(c_R)$?\\n\\n**A5**: Thanks for this comment. The reward condition $c_R$ has a dimension of 768. 
In the Reward Encoder module, to match the dimension of latent noise, a linear layer (line 281) is used to transform the dimension of $c_R$ to 320. In the Unet module, the dimension is 1280, so a linear layer (line 287) is used to transform the dimension of $c_R$ to 1280. We have clarified this point in revision.\\n\\n\\n> **Q6**: In line 369, is there a specific reason for training at a resolution of 256? Wouldn't training and inference at 512 yield better results?\\n\\n**A6**: Thanks for this comment. We chose to train at a resolution of 256 to maintain consistency with other methods (InsPix2Pix, SmartEdit, MGIE and HQ-Edit are both trained on 256), ensuring a **fair comparison**. Increasing the training resolution from 256 to 512 requires about 4 times computation, so it is hard to keep the mini-batch size per GPU unchanged. Due to limited computation, we are not able to tune the hyperparameters for 512 resolution. We use gradient accumulation to keep the overall batch size and all the other hyperparameters unchanged. We did not observe performance improvement compared to 256 resolution.\\n\\n**Tab. E: Ablations of training image resolution.**\\n| &nbsp; &nbsp;Resolution &nbsp; &nbsp; | &nbsp; &nbsp;Following &nbsp; &nbsp; | &nbsp; &nbsp;Preserving &nbsp; &nbsp; | &nbsp; &nbsp;Quality &nbsp; &nbsp; |\\n|:----------:|:---------:|:----------:|:-------:|\\n| 512 | 3.28 | 3.20 | 3.61 |\\n| 256 | 3.39 | 3.43 | 3.80 |\\n\\n\\nWe have added this point and results in *Section C.5* of the appendix in revision.\\n\\n\\n> **Q7**: Are the trained modules shared or trained separately between InstructPix2Pix and smartEdit?\\n\\n**A7**: Thanks for this comment. The proposed MRC module is trained separately for InstructPix2Pix and smartEdit without sharing weights. We have clarified this point in Line 403 in revision.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for his careful and helpful comments on our work. 
We provide our responses below to answer the reviewer's questions. All changes in the revision are marked in red.\\n\\n> **Q1**: The multi-view reward mechanism used in this paper relies entirely on GPT-4o\\u2019s evaluation. Although GPT-4o demonstrates strong capabilities in understanding and generating natural language, it may not fully capture the subtle nuances of human perception regarding image editing quality.\\n\\n**A1**: Thank you for this comment. We agree that the annotation/evaluation from GPT-4o is not as good as human annotation. However, human annotation is very expensive and time-consuming, making it unsuitable for large-scale data generation. In contrast, GPT-4o-based data generation is scalable with reasonable quality. Moreover, our experiments demonstrate that using multi-view rewards generated by GPT-4o can still significantly improve the model's image editing performance, indicating the reliability of our method.\\n\\nWe have included the above discussion in *Section D.3* of the appendix for clarification.\\n\\n\\n> **Q2**: The cost for this reward is expensive as it using GPT-4o .\\n\\n**A2**: Thanks for this comment. The price for GPT-4o is \\\\\\\\$2.5 per 1 million tokens. Evaluating a single sample costs \\\\\\\\$0.004, and evaluating the entire Real-Edit benchmark costs approximately \\\\\\\\$2.24, which is acceptable/negligible compared to training costs such as GPU expenses. Frankly, using GPT-4o incurs additional costs compared to free metrics, but we believe it's worth it. As the most powerful multimodal model, GPT-4o provides more accurate evaluations of editing performance. In the future, we will attempt to train specialized editing evaluation models to replace GPT-4o.\\n\\n\\n> **Q3**: The human evaluations showed slightly lower scores than those generated by GPT-4o, though both showed consistency in ranking. Could the authors provide insights into why this discrepancy exists?\\n\\n**A3**: Thanks for this comment. 
We guess that the reason for this discrepancy may be that human evaluators often have higher expectations (e.g., setting a higher bar for a perfect score of 5) and subjective perceptions, making them more critical of details and quality. This results in lower scores compared to GPT's evaluations.\\n\\nWe have added the above analysis in *Section 6.3* in revision.\\n\\n> **Q4**: Given the reliance on the RewardEdit-20K dataset, a more detailed release plan for this dataset (and possibly pretrained models) could be helpful.\\n\\n**A4**: Thanks for this comment. We are currently organizing the data and code and plan to publicly release all code (including training, testing, evaluation code) and pretrained models on GitHub within the next two months. Additionally, the multi-perspective reward dataset RewardEdit-20K and the evaluation benchmark Real-Edit will also be made available on Huggingface. \\n\\n\\n> **Q5**: While the approach shows significant improvements, an analysis on instances where the multi-reward mechanism fails or provides subpar results would be beneficial. Understanding the limitations could offer insights for future iterations or refinements of the method.\\n\\n**A5**: Thank you for your suggestions. We agree that adding an analysis of the failed cases would be beneficial for our method. To explore the limitations of our method, we collected and analyzed failed cases (please see *Section A* of the appendix in the revision). The analysis revealed two main limitations of our method. The first limitation is that during testing, even when the given multi-perspective reward scores are all 5, the generated edited image does not always achieve a score of 5. This indicates that the reward information does not always perfectly guide the model, especially in some complex cases. The second limitation is that our method has difficulty accurately understanding the quantifiers and spatial position words in the instructions, as shown in Fig. 10 in revision. 
This may be due to the model's insufficient understanding of fine-grained language features. In future work, we will explore ways to improve the model's understanding of fine-grained semantics for image editing.\\n\\nWe have added the above analysis to *Section A* of the appendix in the revision.\"}", "{\"comment\": \"Thanks for the helpful comments, we address the concerns in the following items.\\n\\n> **About A3**: (1) The three evaluation metrics for each individual reward seem similar. Please analyze why the metrics increase even when the reward is not related to the specific metric. (2) Why is the evaluation of the full model inferior to that obtained by adding individual rewards?\\n\\n**For (1)**, the three types of rewards come from three different perspectives. Although these perspectives are independent by definition, they essentially aim to *rectify the inaccurate supervision and therefore influence each other*. For example, in the case shown in Fig. 1 (b) with the instruction \\\"make the glasses green\\\", the ground-truth edited image incorrectly changes the background and clothes to green as well. This could mislead the model into thinking that \\\"make the glasses green\\\" requires changing the background and clothes to green, resulting in incorrect instruction following. However, if a detail preserving reward is added, indicating that \\\"the colors of clothes and background are not consistent\\\", it helps the model correctly understand the instruction, thereby indirectly helping with instruction following. Therefore, specific rewards not only improve the corresponding metrics but also enhance the other two metrics.\\n\\n**For (2)**, from Tab. D, we see that using all three rewards simultaneously does not achieve SOTA for each metric. This is due to some negative interactions among the three perspectives; for example, strong instruction following might reduce detail preserving, and strong detail preserving might inhibit instruction following. 
Since editing is the core task and the sample is considered failed if the image is not edited at all, we prioritize the following score and then compare preserving and quality when following scores are similar. From Tab. D, when using all three rewards, the following score achieves 3.39, while preserving and quality achieve 3.43 and 3.80, respectively, demonstrating the effectiveness of using the three rewards together.\\n\\n**Tab. D: Ablation of each perspective reward. 'IF', 'DP' and 'GQ' are instruction following, detail preserving and generation quality reward.**\\n\\n| &nbsp;&nbsp; IF&nbsp;&nbsp; | &nbsp;&nbsp;DP&nbsp;&nbsp; | &nbsp;&nbsp; GQ &nbsp;&nbsp; | &nbsp;&nbsp;Following&nbsp;&nbsp; | &nbsp;&nbsp;Preserving&nbsp;&nbsp; | &nbsp;&nbsp;Quality&nbsp;&nbsp; |\\n|:----:|:----:|:----:|:---------:|:----------:|:-------:|\\n| \\u2713 | | | 3.40 | 3.25 | 3.72 |\\n| | \\u2713 | | 3.23 | 3.54 | 4.00 | \\n| | | \\u2713 | 3.20 | 3.23 | 3.95 | \\n| \\u2713 | \\u2713 | \\u2713 | 3.39 | 3.43 | 3.80 |\\n\\n\\n> **About A5**: There appears to be a bug between Equation line 281 and Figure 5. From line 281, the latent behaves as query only, then the resulting output of the cross attention $Z''$ eliminates the detailed information of the source image. However in Figure 5, $Z''$ is directly sent to the diffusion model. Per my understanding, the structure of Figure 5 is not feasible, as the diffusion model can not touch the details of the input source image, making it unable to perform image editing.\\n\\n[A] Vaswani, A. \\\"Attention is all you need.\\\" Advances in Neural Information Processing Systems (2017).\\n\\nSorry for the confusion. The $Z_t'$ is obtained by concatenating $Z_t$ with original image condition $c_I$ and fusing them through convolution, thus it contains details of the input source image. For line 281, we use latent noise $Z'_t$ as the query of the reward encoder, which is a standard transformer encoder block from [A]. As shown in Fig. 
12 in the paper, it contains a skip connection so that the output is initialized as the input $Z'_t$ and the block only learns residual information if necessary. Therefore, the $Z''$ still contains the detail information of the input source image as well as the reward information. We will make this clear in the final version. \\n\\n\\n> **Others**\\n\\nThanks to the reviewer's suggestion, we have already labeled each equation with a numerical label. We will work on revising the manuscript repeatedly to improve the readability of the paper.\"}", "{\"title\": \"General Rebuttal\", \"comment\": \"We appreciate all the reviewers for their thoughtful and careful feedback.\\nIn this paper, we propose a new *comprehensive solution* to address the limitations in existing image editing, including new training data, network architecture, evaluation benchmarks, and evaluation metrics. Extensive experiments demonstrate that our proposed method can be combined with existing editing models, resulting in significant performance improvements and achieving state-of-the-art results in both GPT-4o and human evaluations.\\n\\nAs suggested by the reviewers, we have thoroughly revised our manuscript and address each of the issues raised in the reviews:\\n\\n- Reviewer `RGo1`: We have added results on other evaluation metrics (*Q1*), further analyzed the reasons for improvements brought by negative text (*Q2*) and discussed the generalizability of the RewardEdit-20K dataset (*Q3*).\\n\\n- Reviewer `X16E`: We compared and discussed the GPT-4o annotations and human annotations (*Q1*), further analyzed the costs of GPT-4o (*Q2*), the score discrepancies between GPT-4o and human evaluations (*Q3*), provided a detailed release plan (*Q4*), and added failure case analysis (*Q5*).\\n \\n- Reviewer `eTqX`: We thoroughly discussed the connections and differences between our approach and methods like DPO-diffusion (*Q1*), and added ablation experiments to further demonstrate the effectiveness of 
our method (*Q2*).\\n\\n- Reviewer `sJ5c`: We discussed the stability of GPT-4o (*Q1*), validated our method's improvements on challenging editing samples (*Q2*), added ablation experiments and analysis for each reward perspective (*Q3*), compared it with existing reward-based methods (*Q4*), clarified the dimensionality of the linear layer (*Q5*), explained why we train at a resolution of 256 (*Q6*), and clarified that the MRC module is trained separately for different methods (*Q7*).\\n\\nWe once again express our heartfelt gratitude to all the reviewers for their valuable feedback, and we hope that our responses satisfactorily address all concerns. Please feel free to let us know if you have any remaining concerns and we are happy to address them!\"}", "{\"summary\": \"This paper aims to correct the noise supervision in instruction-based image editing models by using multi-view reward data as an additional condition. To achieve this, the authors collected a dataset named RewardEdit-20K, which contains 20,000 instances of multi-view reward data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces a multi-view reward mechanism: instead of directly improving the quality of ground-truth images, the authors utilized GPT-4o to evaluate the training data from three key perspectives: instruction adherence, detail preservation, and generation quality.\\n\\n2. The RewardEdit-20K dataset and the Real-Edit evaluation benchmark.\", \"weaknesses\": \"1. The multi-view reward mechanism used in this paper relies entirely on GPT-4o\\u2019s evaluation. Although GPT-4o demonstrates strong capabilities in understanding and generating natural language, it may not fully capture the subtle nuances of human perception regarding image editing quality.\\n\\n2. The cost for this reward is high, as it uses GPT-4o.\", \"questions\": \"1.
The human evaluations showed slightly lower scores than those generated by GPT-4o, though both showed consistency in ranking. Could the authors provide insights into why this discrepancy exists?\\n\\n2. Given the reliance on the RewardEdit-20K dataset, a more detailed release plan for this dataset (and possibly pretrained models) could be helpful.\\n\\n3. While the approach shows significant improvements, an analysis of instances where the multi-reward mechanism fails or provides subpar results would be beneficial. Understanding the limitations could offer insights for future iterations or refinements of the method.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (part 1/2)\", \"comment\": \"We thank the reviewer for helpful comments on our work and provide our responses to the reviewer's questions below. All changes in the revision are marked in red.\\n\\n> **Q1**: Both the quantitative assessment in Table 1 and the training strategy are based on the same scoring strategy for the GPT-4o, making the evaluation of the results overly dependent on the a priori of the VLLM and making it difficult to objectively validate the strengths of this method. One solution to this dilemma is to use other assessment metrics (e.g., CLIP scores) on the three dimensions to be adopted for comparison with other methods.\\n\\n**A1**: Thanks for this comment. Evaluating edited images presents significant challenges due to the diversity and uncertainty of possible edited images for a given original image and instruction. GPT-4o is currently the most powerful multi-modal understanding model, capable of accurately parsing editing instructions and comprehensively understanding the content of the original images and the editing requirements. Therefore, we use GPT-4o for automated image editing evaluation.
To verify the reliability of GPT-4o, we also performed human evaluations and obtained consistent results. \\n\\nAdditionally, we agree with the reviewers that it is necessary to evaluate the dimensions of Following,Preserving, and Quality based on existing evaluation metrics. We recalculated the performance of existing methods and our method on Real-Edit based on the CLIP score and the FID score, with the results shown in the table below. Specifically, the CLIP feature similarity between the edited image and the instruction is the following score, the similarity between the original and edited images is the preserving score, and the FID between the original and edited images is the quality score. The table shows that our method still achieved promising results and improvements over the baseline.\\n\\n**Tab. A: Comparison of different methods based on existing evaluation metrics.**\\n\\n| Method | &nbsp;&nbsp; Following (CLIP) &nbsp;&nbsp; | &nbsp;&nbsp;Preserving (CLIP)&nbsp;&nbsp; | &nbsp;&nbsp;Quality (FID) \\u2193 &nbsp;&nbsp;|\\n|------------------|:----------------:|:-----------------:|:---------------:|\\n| KOSMOS-G | 26.8 | 86.4 | 3.01 |\\n| MagicBrush | 25.2 | 91.9 | 2.86 |\\n| MGIE | 26.4 | 87.0 | 3.09 |\\n| InstructDiffusion | 26.3 | 86.4 | 2.89 |\\n| HIVE | 26.3 | 89.0 | 3.08 |\\n| HQ-Edit | 28.5 | 77.2 | 3.59 |\\n| InsPix2Pix | 27.0 | 82.3 | 3.51 |\\n| Reward-InsPix2Pix (Ours) | 27.5 | 83.8 | 3.31 |\\n| SmartEdit | 26.5 | 87.7 | 2.80 |\\n| Reward-SmartEdit (Ours) | 26.9 | 90.0 | 2.77 |\\n\\nHowever, these metrics also have limitations: **1)** When the editing instruction and the images are complicated, CLIP/FID score can not accurately represent the following/preserving/quality of the edited image, e.g., CLIP can not distinguish left/right. 
**2)** the range of the following score and preserving score is relatively small, which may make it hard to distinguish performance differences between methods.\\n\\nWe have included the above discussion and results in *Section C.1* of the appendix in the revision.\"}" ] }
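The rebuttal above maps the three Real-Edit dimensions onto classical metrics: CLIP similarity between the edited image and the instruction (following), CLIP similarity between the original and edited images (preserving), and FID between image sets (quality). Below is a minimal sketch of the two CLIP-based scores, assuming the CLIP embeddings have already been extracted (e.g., with a CLIP library) and using the common 100x scaling seen in Tab. A; it is an illustration, not the authors' exact evaluation code:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def following_score(edited_emb, instruction_emb):
    # CLIP similarity between the edited image and the edit instruction.
    return 100 * cosine(edited_emb, instruction_emb)

def preserving_score(source_emb, edited_emb):
    # CLIP similarity between the original and the edited image.
    return 100 * cosine(source_emb, edited_emb)
```

The quality dimension (FID) compares feature distributions over whole image sets rather than single pairs, so it is omitted from this sketch.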
9RCT0ngvZP
Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning
[ "Xiaochuan Li", "Zichun Yu", "Chenyan Xiong" ]
Synthetic data has been widely used to train large language models, but their generative nature inevitably introduces noisy, non-informative, and misleading learning signals. In this paper, we propose Montessori-Instruct, a novel data synthesis framework that tailors the data synthesis ability of the teacher language model toward the student language model's learning process. Specifically, we utilize local data influence of synthetic training data points on students to characterize students' learning preferences. Then, we train the teacher model with Direct Preference Optimization (DPO) to generate synthetic data tailored toward student learning preferences. Experiments with Llama3-8B-Instruct (teacher) and Llama3-8B (student) on Alpaca Eval and MT-Bench demonstrate that Montessori-Instruct significantly outperforms standard synthesis methods by 18.35\% and 46.24\% relatively. Our method also beats data synthesized by a stronger teacher model, GPT-4o. Further analysis confirms the benefits of teacher's learning to generate more influential training data in the student's improved learning, the advantages of local data influence in accurately measuring student preferences, and the robustness of Montessori-Instruct across different student models. Our code and data are open-sourced at https://github.com/cxcscmu/Montessori-Instruct.
[ "synthetic data", "data influence", "instruction tuning" ]
Accept (Poster)
https://openreview.net/pdf?id=9RCT0ngvZP
https://openreview.net/forum?id=9RCT0ngvZP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zy2v9FADa8", "wbhmD6cBs7", "sgIoE21dvW", "sbMH6bRg6p", "qTW3fiNINo", "q66vQQ0Of5", "oAafNHmG2z", "n7WhA9w6zf", "izLUAwtFyF", "g8Q99cgDIL", "fkGMsOQyaA", "er5Nmv4Gpy", "elgRuKpN6n", "PffrosqHBj", "OxbkPZRx4o", "OW0gMsNfAU", "Mn4RxDXkGn", "FlyxVtk6Dm", "DxlDHEngmt", "Cc6OTh1gJw", "B4O7eIlXCS", "AJ6MLZoTQl", "9wkw2Ytm7q", "9WTXB8p3rw", "6Lb86DkTjG", "2OwA4yjefW", "1o4mcdaKbV", "1LARLg4cOB", "0cDGKG8dhx" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732215702845, 1730706991716, 1732214135470, 1732215809008, 1733106275165, 1730710256729, 1730741773899, 1733160751520, 1730530007257, 1732214433764, 1734930016543, 1732215647227, 1732765258877, 1732215282394, 1732553903938, 1732214522485, 1733136606472, 1732660328002, 1732546421503, 1733160763951, 1732643600559, 1733105791764, 1732528533380, 1737524124354, 1732215121476, 1732553606405, 1732553663894, 1732765208857, 1732215421664 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_PpzE" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_5Czb" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_5Czb" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_hB25" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11434/Reviewer_D2rt" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Area_Chair_FZiN" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_hB25" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_5Czb" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_D2rt" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Reviewer_PpzE" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ], [ "ICLR.cc/2025/Conference/Submission11434/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors Part 2\", \"comment\": \"**Question 1**: In lines 226-228, how were the 6,792 preference pairs collected from the 10K probing dataset?\\n\\n**Response**: Thank you for pointing it out! As described in Section 3.3 (lines 196\\u2013199), we create preference pairs that satisfy the following conditions: 1) they share the same seed prompt, and 2) one has a positive influence while the other has a negative influence. To achieve this, we first aggregate the data in the 10K probing dataset by their seed prompts, dividing them into positive and negative groups. 
Every time we select one data point from the positive group and one from the negative group, if the two data points are generated from a common seed prompt, we combine them into a preference data pair. Since some seed prompts generate data with only positive or negative influences, we end up with 6,792 pairs from the 10K probing dataset.\\n\\n**Question 2**: Will the general capabilities of the optimized teacher model deteriorate?\\t\\n\\n**Response**: Thanks for raising this invaluable question! We conducted comprehensive testing on the Llama3-8B-Instruct teacher (with Llama3-8B as the student) before and after DPO on MT-Bench, MMLU, GSM8K, GPQA, ARC-C, and HellaSwag. The results are as follows:\\n\\n| | MT-Bench | MMLU | GPQA | ARC-C | GSM8K | HellaSwag |\\n|:------------------:|:--------:|:-----:|:-----:|:-----:|:-----:|:---------:|\\n| Llama3-8B-Instruct | 7.472 | 66.21 | 31.96 | 59.54 | 73.48 | 77.21 |\\n| Teacher-DPO-Iter1 | 7.473 | 65.95 | 31.72 | 59.15 | 73.65 | 76.86 |\\n| Teacher-DPO-Iter2 | 7.465 | 66.07 | 32.54 | 58.86 | 73.14 | 77.08 |\\n\\nThe results indicate that the teacher's ability, after specific optimization in data synthesis capability, is basically on par with the original model with minimal fluctuations, demonstrating that optimizing the teacher's data synthesis capability does not adversely affect performance on OOD tasks.\\n\\n[1]: Yu, Z., Das, S., & Xiong, C. (2024). MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models. *arXiv preprint arXiv:2406.06046*.\"}", "{\"summary\": \"This paper introduces Montessori-Instruct, a novel framework that optimizes the teacher model's data generation capabilities by aligning them with student learning preferences. The framework uses local data influence measurements to estimate how synthetic data impacts student learning, then employs DPO to tune the teacher model accordingly. 
Experiments with Llama-3 and TinyLlama models showed significant improvements over baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"A novel data influence-driven approach where the teacher model actively learns to generate training data by optimizing against student's learning preferences, in contrast to previous works that use a static teacher.\", \"The experimental comparisons and benchmarks are comprehensive, the proposed method shows significant improvements, and the ablation and analysis experiments are thorough.\"], \"weaknesses\": \"The main drawback of this work is the excessive computational overhead beyond the objective of training the student model. Although the authors discussed this additional computation in detail in Section 6 and Appendix E, I believe that the impact on the paper's practicality cannot be dismissed through discussion alone. The majority of the additional computational cost comes from local data influence for the student, where one step of training is required for each sample, along with evaluation on the reference dataset. To accelerate this process, the authors used 8xH100, which is a very expensive overhead. This makes me question whether we could actually achieve similar gains by investing these resources in using stronger teacher models and building more sophisticated reward model pipelines[1]. I encourage the authors to (1) discuss potential directions that could directly reduce the unit cost of local data influence, rather than encouraging more computational resources, thereby enhancing the promise of local data influence-based approaches. (2) present the additional monetary costs for each method (i.e., APIs used for baseline methods and computational resources used for acceleration) to improve the fairness and transparency of the comparison.\\n\\n[1] Snell, Charlie, et al. 
\\\"Scaling llm test-time compute optimally can be more effective than scaling model parameters.\\\"\\u00a0_arXiv preprint arXiv:2408.03314_\\u00a0(2024).\", \"questions\": [\"In lines 226-228, how were the 6,792 preference pairs collected from the 10K probing dataset?\", \"Will the general capabilities of the optimized teacher model deteriorate?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Rebuttal\", \"comment\": \"# 0 Overview\\n\\nWe thank all the reviewers for their great efforts. \\n\\n**In this post**:\\n1. We summarize the positive points that the reviewers unanimously agree on.\\t\\t\\n2. We provide further clarification on the cost-performance relationship of our method compared to all baselines, along with a discussion on how the FLOPs utilized during post-training compare with those used during pretraining and test-time scaling. While the reviewers expressed concerns regarding the cost of our approach, our analysis demonstrates that Montessori-Instruct achieves better results at a cost comparable to Self-Instruct and is better than the best performance achieved by Self-Reward and LLM2LLM.\\n3. We provide the results of applying our method to self-play, where the student can also serve as the teacher to improve itself.\\n\\nIn the individual replies, we address other comments. We have also added a new appendix section to the paper, which is highlighted in red. The updated PDF has been re-uploaded.\\n\\n# 1 Positive statements\\n\\n- We sincerely thank the reviewers for recognizing the key contributions of our work. We appreciate their acknowledgment of Montessori-Instruct as a well-motivated and novel pipeline, particularly its use of data influence as rewards to align the teacher\\u2019s generation with the student\\u2019s preferences. 
(`hB25`, `5Czb`, `PpzE`, `D2rt`).\\n- We are grateful for their recognition of our method\\u2019s strong generalization ability in addressing OOD challenges (`5Czb`, `D2rt`), its theoretical guarantees, dynamic teacher design, and superior performance demonstrated through ablation studies (`D2rt`, `hB25`, `PpzE`). \\n- We thank the reviewers for praising our clear and well-written paper (`hB25`, `5Czb`, `PpzE`, `D2rt`).\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your review of our paper! We will address your questions/comments below:\\n\\n**Question 1**: how does the effectiveness of the framework scale with the data?\\n\\n**Response**: Thank you for your positive affirmation of our work! The question you raised, which has also been mentioned in our limitations, is indeed a direction for further exploration. In our main experiment, we used 10K data points. To show the benefits of scaling training data, we randomly subsampled 2K, 5K, and 8K data points and further expanded the data volume to 20K. The results are shown in the table below:\\n\\n| Training Size | LC-WR | WR | MT-Bench |\\n|---------------|--------|--------|----------|\\n| 10K Self-Instruct | 50.00% | 50.00% | 6.490 |\\n| 2K Montessori-Instruct | 44.84% | 44.57% | 5.940 |\\n| 5K Montessori-Instruct | 50.71% | 51.29% | 6.563 |\\n| 8K Montessori-Instruct | 52.32% | 54.49% | 6.785 |\\n| 10K Montessori-Instruct | 54.92% | 58.59% | 6.903 |\\n| 20K Montessori-Instruct | 55.75% | 59.92% | 6.922 |\\n\\nWe found that when the data size is 5K, the performance of the student surpassed that of the student trained on 10K Self-Instruct data. As the data size increases, we can observe a continuous improvement in the performance of Alpaca Eval and MT-Bench, but the performance grows slower and slower. We believe this is due to two reasons: \\n\\n1. 
As the student is trained, its data preferences will also change, so it is necessary to collect updated data influence to optimize the teacher in order to achieve sustained performance improvement. \\n\\n2. As the synthesized data size increases, the ratio of similar data (identified by Rouge-L > 0.7) will increase, and the ratio of useful data will decrease. For example, when the size is 5K, the amount of similar data is ~2K; when the size is 20K, the amount of similar data is ~10K; and when the size is 50K, the amount of similar data reaches ~35K. **We found this high data duplication rate in all the baselines**, and we did not find papers studying this phenomenon. We believe this is a challenging issue and will raise it in the discussion section of our paper's next version as a call to action for the community. In the future, we will explore diversifying the seed data to alleviate this phenomenon.\"}", "{\"title\": \"Thank you for your response!\", \"comment\": \"Thank you for the response, which has addressed most of my concerns. I have further improved the score.\"}", "{\"summary\": \"This paper proposes a customized synthetic data generation pipeline that seeks to improve the expert model towards generating more useful and debiased data for the student model by aligning the expert model towards a data distribution with higher influence on the student model via DPO. Specifically, this paper (1) first adopts an established influence function in active learning to measure the utility of a single data point in the probing set, (2) then they construct preference data using the positive and negative data influences sampled from the same prompt, and use this data to align the expert model. Finally, they regenerate data from the updated expert model and finetune the student model with this data. While the method proposed in this paper generally makes sense, it suffers from a lack of ablation study and analysis on the additional cost (e.g.
manual curation cost of the reference set, per-sample influence inference cost, training cost of DPO on the expert model) incurred by this data selection process.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes a synthetic data generation pipeline to align the expert model towards higher data utility for the student model in a customized manner. Specifically,\", \"the method proposed is sound and can intuitively address the OOD issue with the synthetic data to enhance the generalizability of the student model.\", \"this paper has good presentation and conveys the idea clearly.\"], \"weaknesses\": [\"The paper lacks a more informative ablation study and in-depth explanation and analysis. In particular,\", \"How do the authors pick the reference dataset and determine its size? It seems this dataset is critical for judging the influence of the data samples on the student model and largely determines the performance of the fine-tuned student model later.\", \"Can the authors provide some ablation study on the size of the reference dataset and its selection method?\", \"How many samples do the authors obtain for each prompt in order to get a pair of positive and negative instructions? What are the additional costs behind these?\", \"Calculating the influence for each data point in the probing set seems a bit costly as it involves an LLM optimization (even if just one step). Can the authors provide an ablation study on the sample size of the probing set and the final performance?\", \"It also introduces some other additional costs as the proposed method involves DPO on the expert model, while most of the baselines do not.
As the expert model is large in size, this training cost also seems non-negligible.\"], \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Montessori-Instruct, a new framework for generating synthetic training data to enhance the learning of student language models. The authors propose a method that aligns data synthesis by a teacher model (e.g., Llama3-8B-Instruct) with the learning process of the student model (e.g., Llama3-8B). The approach involves assessing the local data influence of synthetic data points on the student model to understand its learning preferences, followed by training the teacher model using Direct Preference Optimization (DPO) to generate data better suited for the student. Experiments show that Montessori-Instruct significantly improves performance on benchmarks like Alpaca Eval and MT-Bench.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The concept of aligning the teacher model to better match the student model\\u2019s learning preferences is well-motivated.\", \"The method of assessing the impact of individual data points on student learning (i.e., local data influence) is an interesting approach backed by theoretical guarantees.\"], \"weaknesses\": [\"Including experiments with multiple runs and reporting the mean and standard deviation would strengthen the reliability of the results.\", \"The performance gains, particularly on out-of-domain benchmarks, appear minimal when weighed against the additional computational cost involved in training the teacher model.\", \"The idea of local data influence, which is the crucial algorithm for the proposed solution, is not new but taken from a previous paper, as the authors have mentioned.
This reliance on existing techniques may reduce the perceived originality of the paper.\"], \"questions\": \"See weaknesses above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces MONTESSORI-INSTRUCT, a framework to generate synthetic data for training student language models - tailored for the student model's learning process / ability. The method first uses local data influence to measure the utility of synthetic data points for student learning. Then, it optimizes a teacher model with DPO to generate more effective synthetic training data by aligning with the student model's learning preferences.\\n\\nThe authors evaluate MONTESSORI-INSTRUCT using `Llama3-8B-Instruct` as the teacher and `Llama3-8B`/`TinyLlama-1.1B` as students. Evaluation on Alpaca Eval and MT-Bench shows that Montessori-Instruct outperforms existing methods like Self-Instruct. The authors also show that Montessori-Instruct can beat GPT-4o on in-domain and out-of-domain evaluation. Ablation studies highlight the effectiveness of using data influence to capture student preferences, the benefits of optimizing teacher parameters, and the robustness across different configurations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Writing**.\\n- Introduction is well-written. The motivation is clear, the problem is well-defined, key contributions are listed and aligned with the structure of the paper.\\n- Related work is well-written, extensive, and up-to-date.
Related work is also logically well-organized, and sets up a good foundation between existing lines of work and the proposed method.\\n\\n**Evaluation**.\\n- Evaluation has good coverage of different baselines, and the selection of baselines is realistic. Generally, the evaluation is comprehensive and thorough.\\n- Ablation study is comprehensive and thorough. The ablation studies the effectiveness of teacher optimization, seed data, and (multiple) iterations, and very clearly demonstrates the generalizability of the method.\\n\\n**Originality**. The idea of the paper is novel, and well-motivated. \\n\\n**Significance**. The proposed method is a good contribution to the field of synthetic data generation for language model training. The proposed method is generalizable to other domains, though with additional overhead as stated in the limitation section.\", \"weaknesses\": \"**Scale of experiment.** The limitation section points out that in the experiments, the scale is chosen as a fixed 10k data points. An ablation on the experimental scale would help show the generalizability of the method across data sizes.\", \"questions\": \"1. Is there an easy way to see how the effectiveness of the framework scales with the data? You are welcome to scale down / scale up, to a point where the experiment is reasonable.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"2 Cost-performance relation of all the methods\", \"comment\": \"# 2 Cost-performance relation of all the methods\\nWe further address the reviewers' questions about the computational cost of our method. We analyzed the **Performance-FLOPs curve** of the four methods, with a particular focus on the changes in Self-Instruct's Alpaca Eval and MT-Bench scores as its FLOPs increase to levels comparable to those of Montessori-Instruct. We scale the FLOPs of Self-Instruct by synthesizing additional data.
We also marked the Performance-FLOPs relationship of the two baselines, LLM2LLM and Self-Reward, in **Figures 15(a), 15(b), and 15(c)**. We have also attached the PDF versions of these three figures in the uploaded files.\\n\\nAccording to the figures, Self-Instruct quickly reached its upper bound during the scaling-up process, and even with more FLOPs, no further performance improvement is achieved. The reason may be that the data generated by Self-Instruct is severely homogenized. In contrast, the upper bound of our method is significantly better and continuously grows as we invest more FLOPs.\\n\\nWe then report the estimated FLOPs of the four methods, as well as of pretraining and test-time scaling. The main FLOPs for Montessori-Instruct come from processing probing data. In the main table, we used 10K probing data to utilize the most resources and achieve the best performance, but as Figures 3(a) and 3(b) in our paper suggest, using ~1K probing data can already achieve better performance than the other baselines. To make a fair comparison, we calculate the FLOPs under 1K probing data. We estimate the FLOPs as follows (Llama3-8B-Instruct as the teacher, Llama3-8B as the student):\\n\\n- Self-Instruct: $1.34\\times10^{20}$ FLOPs\\n- Self-Reward: $2.11\\times10^{21}$ FLOPs\\n- LLM2LLM: $2.3\\times10^{20}$ FLOPs\\n- Montessori-Instruct: $6.43\\times10^{20}$ FLOPs\\n- Pretrain Llama3-8B: $1.87\\times10^{24}$ FLOPs\\n- Inference-Time Scaling: $1.60\\times10^{23}$ FLOPs\\n\\nWe can see that Montessori-Instruct's FLOPs are about 7 times lower than those of Self-Reward, the current SOTA method. Furthermore, if we use a proxy model[1], such as a smaller-sized model (e.g., 1B parameters for assisting an 8B model), to process probing data, Montessori-Instruct's FLOPs can be further reduced to $1.92\\times10^{20}$ FLOPs. This makes it comparable to Self-Instruct while still outperforming it.
Using a proxy model has promising potential for enhancing both efficiency and performance, which we leave for future work. Regarding the pretraining, since the computational cost during the SFT phase is significantly lower than that during the pretraining phase ( $10^4$ times smaller), even if we increase resource investment in SFT, its overall consumption remains minimal. Recent work has focused on scaling inference time to achieve better performance [2]. However, the inference-time scaling FLOPs are also significantly larger than those of SFT, being approximately $10^3$ times greater, according to [3]. Nevertheless, our teacher training represents a one-time cost. As demonstrated in Section 5.4 of the paper, the optimized teacher can assist multiple students in improving their performance without the need for retraining from scratch.\\n\\nThe detailed derivation is provided in Section E.3 of the new version of the paper in the Appendix.\\n\\n\\n\\n[1]: Yu, Z., Das, S., & Xiong, C. (2024). MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models. *arXiv preprint arXiv:2406.06046*.\\n\\n[2]: Snell, C., Lee, J., Xu, K., & Kumar, A. (2024). Scaling llm test-time compute optimally can be more effective than scaling model parameters. *arXiv preprint arXiv:2408.03314*.\\n\\n[3]: Sardana, N., Portes, J., Doubov, S., & Frankle, J. (2023). Beyond chinchilla-optimal: Accounting for inference in language model scaling laws. *arXiv preprint arXiv:2401.00448*.\"}", "{\"metareview\": \"This paper proposes to use influence function to select training examples in the iterative instruction tuning process. 
Despite increasing the computation cost due to evaluating the influence function, it can outperform other iterative improvement methods, and the observations are valuable for future studies.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns on the FLOPs comparison with other methods and the authors gave the details.\"}", "{\"title\": \"Rebuttal by Authors Part 1\", \"comment\": \"Thank you for your review of our paper! We will address your questions/comments below:\\n\\n**Weakness**: concerns about the additional cost\\n\\n**Response**: \\n\\n**(1) 8xH100 is a very expensive overhead**\\n\\nWe used the H100s to complete the experiments in the main table, which utilized the local data influence of 10K data points to optimize the teacher, solely to achieve the best performance. As we mentioned in the general response, we only need about 1K local data influences to enable the student\\u2019s performance to surpass the other baselines, which only requires 8 A6000s or 4 A100-80GBs and can be finished in 3 hours. This actually provides researchers with diverse options: if resources are limited, generating approximately 1,000 data influences to optimize the teacher can yield significantly superior performance (2.40% for LC-WR, 5.85% for WR, and 0.304 for MT-Bench), all of which outperform the baselines. If resources are sufficient, the teacher can be continuously optimized to further raise the performance ceiling. \\n\\n**(2) whether we could achieve similar gains by investing these resources in using stronger teacher models**\\n\\nWe believe that a strong teacher does not necessarily generate equally high-quality synthetic data, as mentioned in this paper[1]. Thus, targeted optimization is an important condition for synthesizing high-quality data. Our approach represents another dimension of improvement, one that not only enhances the teacher\\u2019s capability but also supports the student\\u2019s learning.
Moreover, the construction of sophisticated reward-model pipelines is itself complicated and resource-intensive. \\n\\nIn fact, the resources we invest in optimizing the teacher represent a one-off cost. In Section 5.4, we fine-tuned four different models, including Qwen, Gemma, and Mistral, using the data synthesized by the same optimized teacher, all of which achieved better performance than the baseline. Therefore, our method allows the teacher to be trained once and fixed, without the need to retrain every time the student changes. \\n\\n**(3) Discuss potential directions that could reduce the unit cost**\\n\\nWe believe the most direct approach is to use a proxy model, as demonstrated in our experiments, where the local data influence obtained from a 1B small model can generalize well to an 8B model (54.13% WR, 53.61% LC-WR, and 6.83 on MT-Bench). This can help reduce our computational load to a level comparable to the Self-Instruct method. Another future direction is to utilize classifier models, such as BERT-based models, to further accelerate the process of obtaining data influence[1]. Thank you for your suggestion.\\n\\n**(4) present the additional monetary costs for each method**\\n\\nThank you for your suggestion! We calculated the FLOPs of each method in the General Response. According to the results, the FLOPs of our method are lower than those of both the Self-Reward and LLM2LLM baselines when using the proxy model, and already very close to Self-Instruct. However, our method demonstrates a significant improvement for the student, with increases of 3.61% for LC-WR, 4.13% for WR, and 0.34 for MT-Bench. Regarding the monetary costs, the A6000 is priced at \\\\\\\\$1 per hour, the A100 at \\\\\\\\$1.50 per hour, and the H100 at \\\\\\\\$2 per hour. In the Self-Instruct and LLM2LLM methods, we utilize GPT-4o to generate synthetic data. The API cost for GPT-4o is \\\\\\\\$2.50 per 1 million input tokens and \\\\\\\\$10.00 per 1 million output tokens.
Consequently, the average cost for synthesizing 10,000 data points (considering data waste) amounts to approximately \\\\\\\\$53.20, which is significantly more expensive than using GPUs. So in fact, our method is superior in terms of both computational load and cost.\"}", "{\"comment\": \"Thank you very much for your response and recognition of our work! If you have any further questions, please don't hesitate to let us know.\"}", "{\"title\": \"Rebuttal by Authors Part 1\", \"comment\": \"Thank you for your review of our paper! We will address your questions/comments below:\\n\\n**Weakness 1**: How to pick the reference dataset and determine its size?\\n\\n**Response**: Thanks for raising this question! Regarding the choice of the reference dataset, we have two guiding principles: First and foremost, we select reference tasks that reflect the target capabilities we want the LLM to achieve. Following the practices of some previously accepted excellent works [1][2][3], we chose to use in-domain data that is the same as the seed data, specifically alpaca gpt4[4]. This is a dataset synthesized by GPT4 using prompts in the Alpaca format, aimed at improving the model's instruction-following ability. Second, we ensure that there is no data leakage to prevent potential overfitting. Our OOD experiments show good generalization ability of students, which demonstrates that we are not overfitting the targeted task. 
Regarding the size of the reference dataset, we selected the best quantity that balances performance and efficiency within the limits of our computational resources, which is 256.\\n\\nWe conducted ablation experiments on different reference datasets and different reference dataset sizes.\\n\\nRegarding the size of the reference dataset, in addition to the original 256, we also chose 8, 32, and 128 for experimentation.\\n\\n| Size | LC-WR | WR | MT-Bench | Correlation |\\n|------|--------|--------|----------|----------|\\n| 8 | 56.74% | 60.23% | 6.672 | 0.940 | \\n| 32 | 54.30% | 59.53% | 6.654 | 0.992 |\\n| 128 | 53.29% | 56.70% | 6.820 | 0.984 |\\n| 256 | 54.92% | 58.59% | 6.903 | 1.000 |\\n\\nThe experimental results show that changing the reference dataset size has a small impact on the student's performance. Although it performs well on Alpaca Eval when the size is 8, it performs poorly on MT-Bench, which may be due to the randomness of selecting the 8 data points. We also calculated the correlation coefficients between the data influence generated by different sizes and the data influence when the size is 256. All the correlation coefficients are greater than 0.9, demonstrating a very strong correlation among the reference datasets of different sizes. Therefore, the chosen size of 256 can achieve the best overall performance on both Alpaca Eval and MT-Bench metrics.\\n\\nIn addition to the original alpaca gpt4 as the reference dataset, we chose two other datasets: DOLLY[5] and Open Assistant[6]. These two open-ended generation datasets with human-written answers contain various forms of data, while the answers in the alpaca gpt4 dataset are synthesized by the model rather than written by humans. 
The results are shown below: \\n\\n| Reference dataset | LC-WR | WR | MT-Bench |\\n|----------------|--------|--------|----------|\\n| Alpaca GPT4 | 54.92% | 58.59% | 6.903 |\\n| Dolly | 53.77% | 54.62% | 6.752 |\\n| Open Assistant | 48.76% | 51.48% | 6.946 |\\n\\nUsing Dolly and Open Assistant as reference datasets lowers the Alpaca Eval scores. We believe this decrease stems from the fact that Dolly and Open Assistant contain responses written by humans, while Alpaca GPT4 generates answers using the GPT4 model. The latter has a smaller distribution difference from the model itself, making it easier to learn from. Additionally, we believe that the reason Open Assistant improves the MT-Bench score is that MT-Bench measures the model's ability in multi-turn dialogue, and among these three reference datasets only Open Assistant contains multi-turn dialogue data. Overall, our method demonstrates robustness on reasonable reference tasks.\\n\\n**Weakness 2**: More details about preference data pairs\\n\\n**Response**: We generate four instructions for each prompt, with each instruction assigned its own local data influence. From these four instructions, we select one with positive influence and one with negative influence to form a preference data pair. If there are multiple positive/negative data influences, we pair them together randomly, resulting in more than one preference data pair for a single prompt. We will include more details in the next version of the paper.\"}", "{\"title\": \"Looking Forward to Your Reply\", \"comment\": \"Dear Reviewer `D2rt`,\\n\\nWe have carefully addressed your feedback in our rebuttals and provided detailed responses to each of your comments, particularly regarding the scale of the experiments.
We believe these clarifications will aid in assessing our work more comprehensively.\\n\\nWe would greatly appreciate it if you could review our rebuttals and provide any further feedback, given that the author-reviewer discussion will be closed on Nov. 26 at 11:59 p.m. AoE in no more than two days. We are willing to answer any further questions.\\n\\nThank you for your time and consideration. We look forward to your reply.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"3 Self-play experiments\", \"comment\": \"# 3 Self-play experiments\\nWe use a teacher-student framework in the main paper to ensure a fair comparison with other baselines, as they all rely on a strong model to generate instructions for the student. However, our method also demonstrates even greater potential in the self-play setting, where the same model serves as both teacher and student. We conducted the experiments under the same conditions outlined in Table 1, using LLama3-8B-Instruct as both the teacher and the student, which yielded promising results.\\n\\n| | Alpaca Eval WR | Alpaca Eval LC-WR | MT-Bench |\\n|--------------------------|----------------|--------------------|----------|\\n| Llama3-8B-Instruct | 50.00% | 50.0% | 7.472 |\\n| Llama3-8B-Instruct-iter1 | 53.74% | 52.51% | 7.563 |\\n| Llama3-8B-Instruct-iter2 | 56.78% | 54.84% | 7.595 |\\n| Llama3-8B-Instruct-iter3 | 58.62% | 56.12% | 7.611 |\\n\\nLlama3-8B-Instruct shows continuous growth on both the in-domain Alpaca Eval and the out-of-domain MT-Bench, demonstrating the exciting prospects brought by our method combined with self-play.\"}", "{\"comment\": \"Thanks for the detailed response. The response has resolved my concerns, and I have updated the score accordingly.\"}", "{\"comment\": \"I thank the authors for their detailed response, which has addressed most of the concerns. 
Regarding the response to W1, could the authors provide more analysis on why simply using in-domain data can significantly boost OOD performance using their method? It appears that their method selects samples simply based on their influence over the reference data, which has no implication for the distribution of the test set. Regardless, I have improved my score.\"}", "{\"comment\": \"Thank you very much for your response and recognition of our work! We will add the assessment results of teachers' general abilities and further explain how to use the small proxy model in the next version.\"}", "{\"comment\": \"Thank you very much for your response and recognition of our work! If you have any further questions, please don't hesitate to let us know.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your response! I think the comment addresses my concern about the methodology.\"}", "{\"title\": \"Looking Forward to Your Reply\", \"comment\": \"Dear Reviewer `hB25`,\\n\\nGiven that Dec 2nd is the last day for reviewers to post messages, we would greatly appreciate it if you could review our rebuttals and provide any further feedback. We are willing to answer any further questions.\\n\\nThank you for your time and consideration. \\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your comprehensive reply! The small model proxy is indeed a good point. My concerns have been largely addressed.\\n\\nI sincerely believe this work represents one of the general solutions to the current problem that \\\"stronger teachers may not necessarily be better at teaching students,\\\" if its computational efficiency can be effectively improved in the future. 
The data influence calculation faithfully reflects the samples' contributions to real downstream tasks and can be used to adjust the teacher model.\\n\\nTherefore, I am inclined to accept this work.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"Thank you for your review of our paper! We will address your questions/comments below:\\n\\n**Weakness 1**: Include experiments with multiple runs and report the mean and the standard deviation.\\n\\n**Response**: Thank you for pointing it out! We conducted three repeated experiments on the Llama3-8B group using different seeds. We report the mean and standard deviation of the evaluation results.\\n\\n| | LC-WR | WR | MT-Bench | MMLU | GPQA | ARC-C | GSM8K | HellaSwag |\\n|--------------------|----------------------|----------------------|-------------------|--------------------|--------------------|--------------------|--------------------|--------------------|\\n| Llama3-8B-iter1 | 54.88% \\u00b1 0.39% | 58.51% \\u00b1 0.35% | 6.90 \\u00b1 0.05 | 62.78 \\u00b1 0.15 | 30.06 \\u00b1 0.31 | 62.32 \\u00b1 0.78 | 58.47 \\u00b1 0.57 | 81.10 \\u00b1 0.14 |\\n| Llama3-8B-iter2 | 56.81% \\u00b1 0.39% | 60.55% \\u00b1 0.36% | 7.18 \\u00b1 0.06 | 63.32 \\u00b1 0.21 | 31.86 \\u00b1 0.43 | 60.40 \\u00b1 0.31 | 60.03 \\u00b1 0.39 | 81.90 \\u00b1 0.14 |\\n\\n\\nAs shown in the table, our method demonstrates strong stability on both the Alpaca Eval and OOD benchmarks. Even in the case of $-3\\\\sigma$, we are still better than the baseline. While the fluctuations in MT-Bench are slightly larger compared to the other benchmarks, we attribute this to the inherent instability of using LLM-as-a-judge in MT-Bench. 
We will repeat the other experiments in the following time and report the mean and standard deviation of all methods in the final version.\\n\\n**Weakness 2**: The performance gain on OOD benchmarks appears minimal\\n\\n**Response**: Thank you for raising this question! We want to clarify that our primary goal is to enhance the model\\u2019s instruction-following ability (evaluated using Alpaca Eval for in-domain and MT-Bench for OOD) while also preserving its generalization ability across other tasks (e.g., Math, QA). Synthetic data often exacerbate OOD issues, as evidenced by performance declines on OOD benchmarks for our baselines (e.g., a 3.73 point drop for Self-Reward-iter1 and a 2.8 point drop for LLM2LLM-iter1 on GPQA). However, our method effectively addresses this challenge by maintaining strong overall performance on OOD benchmarks, with notable improvements of 0.673 points on MT-Bench, 1.05 points on MMLU, 1.26 points on GSM8K, 2.99 points on ARC-C, and 1.05 points on HellaSwag (you can find more details in Table 1 in our paper). This strength was also highlighted by Reviewer 5Czb, who noted: \\u201c***the method proposed is sound and can intuitively address the OOD issue with the synthetic data to enhance the generalizability of the student model.***\\u201d We will emphasize this further in the next version of the paper.\\n\\n**Weakness 3**: This reliance on existing techniques may reduce the perceived originality of the paper.\\n\\n**Response**: We want to clarify that our focus is not on emphasizing local data influence as an innovation point. Influence functions are a well-established method in statistics and have a wide range of applications[1][2][3][4], except for guiding the generation of synthetic data. 
Our contribution lies in being the first to leverage data influence to guide the synthetic data generation process and demonstrate its superiority, as highlighted in lines 85-86 in the paper: \\u201cWe incorporate influence functions to accurately capture the student\\u2019s data preferences and effectively guide the teacher\\u2019s optimization directions.\\u201d Also, in the ablation experiment titled \\u201cEffectiveness of Local Data Influence\\u201d in Section 5.3, we compared the performance of IF and an LLM-judger in evaluating data quality. The results clearly show that IF outperforms the LLM-as-a-judge, with a 1.50% improvement on LC-WR, a 3.66% improvement on WR, and a 0.172 point gain on MT-Bench. This shows the effect of introducing IF. We believe that incorporating IF in the generation of synthetic data has huge potential also in pre-training and post-training, given the importance and wide usage of synthetic data.\\n\\n[1]: Sanford Weisberg and R Dennis Cook. Residuals and influence in regression. 1982.\\n\\n[2]: Koh, P., & Liang, P. (2017). Understanding black-box predictions via influence functions. *In International conference on machine learning (pp. 1885\\u20131894)*.\\n\\n[3]: Park, S., Georgiev, K., Ilyas, A., Leclerc, G., & Madry, A. (2023). Trak: Attributing model behavior at scale. *arXiv preprint arXiv:2303.14186*.\\n\\n[4]: Grosse, R., Bae, J., Anil, C., Elhage, N., Tamkin, A., Tajdini, A., Steiner, B., Li, D., Durmus, E., Perez, E., & others (2023). Studying large language model generalization with influence functions. *arXiv preprint arXiv:2308.03296*.\"}", "{\"title\": \"Looking Forward to Your Reply\", \"comment\": \"Dear Reviewer `hB25`,\\n\\nWe have carefully addressed your feedback in our rebuttals and provided detailed responses to each of your comments, particularly regarding the experiments involving multiple runs and the further explanation of our OOD performance gains. 
We believe these clarifications will enhance the comprehensive assessment of our work.\\n\\nWe would greatly appreciate it if you could review our rebuttals and provide any further feedback, given that the author-reviewer discussion will be closed on Nov. 26 at 11:59 p.m. AoE in no more than two days. We are willing to answer any further questions.\\n\\nThank you for your time and consideration. We look forward to your reply.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Looking Forward to Your Reply\", \"comment\": \"Dear Reviewer `5Czb`,\\n\\nWe have carefully addressed your feedback in our rebuttals and provided detailed responses to each of your comments, particularly regarding the ablation studies on the reference dataset, the size of the probing dataset, and a thorough analysis of our cost-performance relationship. We believe these clarifications will aid in assessing our work more comprehensively.\\n\\nWe would greatly appreciate it if you could review our rebuttals and provide any further feedback, given that the author-reviewer discussion will be closed on Nov. 26 at 11:59 p.m. AoE in no more than two days. We are willing to answer any further questions.\\n\\nThank you for your time and consideration. We look forward to your reply.\\n\\nBest,\\n\\nThe Authors\"}", "{\"comment\": \"Thank you for your response and for your additional questions! We believe the reason lies in two aspects:\\n\\n1. Montessori-Instruct takes student preferences into account when generating synthetic data. The pre-trained model has already acquired sufficient knowledge[1], while instruction tuning is not intended to inject new knowledge but rather to align the query (instruction) with the model's internal knowledge[2][3]. This alignment enhances the model's ability to utilize its existing knowledge, thereby improving its performance on both in-domain and out-of-domain tasks. 
Specifically, our method calculates the data influence scores of various instructions on the reference dataset. A higher score indicates that this data is more beneficial for the model in aligning external queries with its inherent knowledge, thereby enhancing the model's ability to leverage its internal knowledge.\\n\\n2. Montessori-Instruct utilizes in-domain data as a reference to guide the generation of training data, but it does not train directly on the in-domain data. The actual training data is generated by the teacher and may contain general information that enhances the capabilities of the LLMs. We believe that this can lead to an effective fine-tuning stage, thereby improving performance on out-of-domain tasks[4][5].\\n\\nIf you have any further questions, please don't hesitate to let us know. Thank you for acknowledging our work.\\n\\n[1] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, LILI YU, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, & Omer Levy (2023). LIMA: Less Is More for Alignment. *In Thirty-seventh Conference on Neural Information Processing Systems.*\\n\\n[2] Mengjie Ren, Boxi Cao, Hongyu Lin, Cao Liu, Xianpei Han, Ke Zeng, Wan Guanglu, Xunliang Cai, and Le Sun (2024). Learning or Self-aligning? Rethinking Instruction Fine-tuning. *In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*, pages 6090\\u20136105, Bangkok, Thailand. Association for Computational Linguistics.\\n\\n[3] Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, & Junxian He (2024). What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning. *In The Twelfth International Conference on Learning Representations.*\\n\\n[4] Wei, J., Bosma, M., Zhao, V., Guu, K., Yu, A., Lester, B., Du, N., Dai, A., & Le, Q. (2022). Finetuned Language Models are Zero-Shot Learners. 
*In International Conference on Learning Representations.*\\n\\n[5] Nihal Nayak, Yiyang Nan, Avi Trost, and Stephen Bach. 2024. Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation. *In Findings of the Association for Computational Linguistics: ACL 2024, pages 12585\\u201312611*, Bangkok, Thailand. Association for Computational Linguistics.\"}", "{\"title\": \"Rebuttal by Authors Part 2\", \"comment\": \"**Weakness 3**: Ablation study on the sample size of the probing set and the final performance\\n\\n**Response**: Thank you for raising this question. We actually conducted this experiment in the original paper and illustrated the relationship between student performance and teacher training steps in Figures 3(a) and 3(b), where the size of the probing dataset determines the teacher training steps since the teacher is trained with one epoch. To clarify, we used \\\"probing dataset size\\\" as the x-axis in Figures 3(a) and 3(b) instead of the teacher training steps. The discrete data points used to plot the relationship between probing dataset size and student performance are shown in the table below:\\n\\n| Probing dataset size | LC-WR | WR | MT-Bench |\\n|----------------------|--------|--------|----------|\\n| 500 | 51.86% | 53.63% | 6.652 |\\n| 1000 | 52.40% | 55.85% | 6.794 |\\n| 2000 | 53.77% | 58.05% | 6.681 |\\n| 3000 | 54.42% | 57.81% | 6.852 |\\n| 5000 | 54.10% | 58.20% | 6.920 |\\n\\nAs shown, when the probing dataset size reaches 1K, the student's performance already surpasses the baseline. As mentioned in the general response, if researchers do not have sufficient resources, they can choose to use 1K probing data, which will ensure that the performance of the student model achieves the best results compared to other baselines while maintaining efficiency. 
We generated 6,792 probing data for the main experiments in our paper and achieved overall optimal performance within the resources available to us.\\n\\n**Weakness 4**: some other additional costs as the proposed method involves DPO on the expert model, while most of the baselines do not.\\n\\n**Response**: We want to clarify that the other two baselines\\u2014LLM2LLM and Self-Reward\\u2014also require additional resources: LLM2LLM relies on larger models via API calls to generate instructions, while Self-Reward introduces a separate 70B expert model dedicated solely to generating instructions for students. In contrast, our method requires only an additional DPO step for the teacher. The FLOPs of DPO can be estimated as 4 times the Policy Model Forward FLOPs plus 2 times the Reward Model Forward FLOPs, while the FLOPs of SFT can be estimated as 3 times the Policy Model Forward FLOPs. Although the FLOPs of DPO are higher than those of SFT on a per-unit data basis, DPO does not incur much additional consumption because it uses a smaller total amount of data. According to the Performance-FLOPs relationship in the General Response, even if we introduce DPO for the teacher, our overall FLOPs are still comparable to Self-Instruction.\\n\\nThis DPO process can be beneficial, as verified in a recent paper[7], which aligns with our experimental findings: a strong model does not necessarily excel at synthesizing high-quality data. Therefore, even with a strong teacher, producing high-quality data may still require investing additional resources to refine the teacher.\\n\\n[1]: Xia, M., Malladi, S., Gururangan, S., Arora, S., & Chen, D. (2024). Less: Selecting influential data for targeted instruction tuning. *arXiv preprint arXiv:2402.04333*.\\n\\n[2]: Paul, M., Ganguli, S., & Dziugaite, G. (2021). Deep learning on a data diet: Finding important examples early in training. 
*Advances in neural information processing systems, 34, 20596\\u201320607*.\\n\\n[3]: Xiaobo Xia, Jiale Liu, Jun Yu, Xu Shen, Bo Han, & Tongliang Liu (2023). Moderate Coreset: A Universal Method of Data Selection for Real-world Data-efficient Deep Learning. *In The Eleventh International Conference on Learning Representations*.\\n\\n[4]: Peng, B., Li, C., He, P., Galley, M., & Gao, J. (2023). Instruction Tuning with GPT-4. *arXiv preprint arXiv:2304.03277*.\\n\\n[5]: Conover, M., Hayes, M., Mathur, A., Xie, J., Wan, J., Shah, S., Ghodsi, A., Wendell, P., Zaharia, M., and Xin, R. Free. Dolly: Introducing the world\\u2019s first truly open instruction-tuned LLM, 2023.\\n\\n[6]: Kopf, A., et al. \\\"Openassistant conversations-democratizing large language model alignment,\\\" in *Advances in Neural Information Processing Systems, vol. 36, 2024*.\\n\\n[7]: Xu, Z., Jiang, F., Niu, L., Lin, B. Y., & Poovendran, R. (2024). Stronger Models are NOT Stronger Teachers for Instruction Tuning. *arXiv preprint arXiv:2411.07133*.\"}" ] }
9Qptgv0Eyw
PtychoFormer: A Transformer-based Model for Ptychographic Phase Retrieval
[ "Ryuma Nakahata", "Shehtab Zaman", "Mingyuan Zhang", "Fake Lu", "Kenneth Chiu" ]
Ptychography is a computational method of microscopy that recovers high-resolution transmission images of samples from a series of diffraction patterns. While conventional phase retrieval algorithms can iteratively recover the images, they require oversampled diffraction patterns, incur significant computational costs, and struggle to recover the absolute phase of the sample's transmission function. Deep learning algorithms for ptychography are a promising approach to resolving the limitations of iterative algorithms. We present PtychoFormer, a hierarchical transformer-based model for data-driven single-shot ptychographic phase retrieval. PtychoFormer processes subsets of diffraction patterns, generating local inferences that are seamlessly stitched together to produce a high-quality reconstruction. Our model exhibits tolerance to sparsely scanned diffraction patterns and achieves up to 3600 times faster imaging speed than the extended ptychographic iterative engine (ePIE). We also propose the extended-PtychoFormer (ePF), a hybrid approach that combines the benefits of PtychoFormer with the ePIE. ePF minimizes global phase shifts and significantly enhances reconstruction quality, achieving state-of-the-art phase retrieval in ptychography.
[ "Deep Learning", "Transformer", "Ptychography", "Diffractive Imaging" ]
Reject
https://openreview.net/pdf?id=9Qptgv0Eyw
https://openreview.net/forum?id=9Qptgv0Eyw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nhiMeRI3oL", "mOLSrOoYIM", "ctvQowDVyG", "ZFzHPEZBiR", "UX97IAf1nI", "URsoEKnheZ", "5lrQYtjkpg" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "decision", "meta_review" ], "note_created": [ 1730768543327, 1730720925010, 1730933585471, 1730941156864, 1730081540765, 1737524212773, 1734661273306 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12751/Reviewer_La6D" ], [ "ICLR.cc/2025/Conference/Submission12751/Reviewer_PSVU" ], [ "ICLR.cc/2025/Conference/Submission12751/Reviewer_r8CE" ], [ "ICLR.cc/2025/Conference/Submission12751/Reviewer_1DEg" ], [ "ICLR.cc/2025/Conference/Submission12751/Reviewer_1WZv" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12751/Area_Chair_Auxc" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces PtychoFormer for data-driven, single-shot ptychographic phase retrieval. The model is robustness with\\nsparsely scanned diffraction patterns and achieves imaging speeds up to 3600 times faster than the extended ptychographic iterative\\nengine (ePIE). Additionally, the authors present the extended-PtychoFormer (ePF), a hybrid model that merges the strengths of\\nPtychoFormer and ePIE, effectively minimizing global phase shifts and significantly improving reconstruction quality.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is written in a logical sense and is easy to follow.\\n2. The author introduces PtychoFormer as a new approach to achieve the fast speed of phase retrieval.\\n3. The authors provide simulated results to demonstrate the performance and speed of their proposed algorithm.\", \"weaknesses\": \"The author's contributions and innovations in the PtychoFormer appear trivial and insufficient, the network structure is similar to previous Mix Transformer (MiT). 
To enhance the impact of this work, I recommend that the author clearly articulate the unique contributions and advancements of the PtychoFormer.\", \"questions\": \"As I mentioned in the weakness part, the innovation presented in the PtychoFormer may not be robust enough to meet the standards typically expected for an ICLR paper. I suggest the author provide a more detailed clarification of the contributions in the rebuttal. It would be beneficial to highlight specific aspects of the model that distinguish it from existing work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript presents PtychoFormer, a hierarchical transformer-based model aimed at enhancing deep learning phase retrieval for ptychography. It offers a framework that processes multiple diffraction patterns and maintains spatial awareness through relative scan point information.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The model can achieve faster imaging speeds.\\n\\nThe development of the extended-PtychoFormer (ePF) hybridizes deep learning with iterative methods, improving reconstruction quality.\", \"weaknesses\": \"The novelty of the proposed approach appears to be limited, as it does not present substantial advancements over existing methodologies.\\n\\nThis narrow comparison does not provide a comprehensive assessment of its performance against the broader landscape of existing approaches, raising questions about the validity and significance of the claims.\", \"questions\": \"The introduction of PtychoFormer offers some improvements, such as processing multiple diffraction patterns. 
However, it can be argued that the model largely represents a combination of existing modules rather than a novel approach.\\n\\nThe comparative analysis shown in Figure 6 is inadequate, as the proposed method is evaluated against only two other techniques, which does not provide a comprehensive assessment of its performance. A broader comparison with additional state-of-the-art methods would strengthen the validity of the claims made in this study.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper develops a transformer based network for solving the ptychographic phase retrieval problem. The proposed method is tested on simulated data and outperforms two CNN baselines. It is also compared with a conventional algorithm ePIE, which it generally underperforms. However, one can initialize the iterative algorithm with the output of the transformer to get results that are better than either.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"+++Works better than two NN baseline methods.\", \"weaknesses\": \"---The global phase ambiguity is an unresolvable problem---without modifying the measurement process, there is no way to know the phase of the light that hits the microscope. A global phase shift would have zero impact on the intensity-only measurements. Networks are just hallucinating one possible solution.\\n\\n---Any solution that is equivalent up to a global phase shift should be considered equivalent. The results in figure 1 are extremely misleading. The ePIE solution is just as correct as the others.\\n\\n---Tested only on simulated data. Simulated data (natural images) does not match the statistics of typical ptychography samples\\n\\n---Missing comparison: PtychoDV (Gan et al.) recently applied vision transformers to solve the ptychography problem. 
The argument for why this method doesn't capture spatial relationships (\"coordinates alone inadequately capture the overlap between the vectorized patterns with high granularity\") is unconvincing and there are no comparisons with this method.\\n\\n---Novelty: If I understand correctly, ePF is just using PtychoFormer output to initialize a conventional iterative algorithm\", \"questions\": \"What differentiates this method from PtychoDV?\\n\\nWhy is the global phase ambiguity problem a problem? When would you ever care about global phase? How is the proposed method doing more than hallucinating the global phase?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies ptychographic phase retrieval and proposes a transformer-based model for recovering phase and amplitude from a series of diffraction patterns. The method leverages a Mix Transformer (MiT) with hierarchical architecture to extract features at different resolutions and uses a convolutional decoder to reconstruct local patches of the transmission. The patches are stitched together with feathering and form the final reconstruction. The improved performance over other deep methods is demonstrated on simulated data. The output can be further used as an initialization for ePIE, which then outperforms plain ePIE.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The proposed transformer-based method outperforms other DL methods. The performance gain could come from a more effective MiT encoder than CNN, the feathering technique when stitching local patches, etc.\", \"weaknesses\": [\"The performance gain with the proposed method seems marginal. The advantage of using a transformer is not made very clear.\", \"One major concern is whether the size of training data is enough to train a vision transformer. 
As the author mentioned, lack of training data is a significant challenge in the real world, and the pretraining and finetuning datasets are from <100K images. Is data of this size large enough for the pretraining of a vision transformer to outperform CNN? Will adopting some pretrained ViT/MiT and finetuning from there work better?\", \"Using the output of PtychoFormer as the initialization for ePIE is not very convincing to demonstrate the effectiveness of PtychoFormer. It would be better to benchmark it against using the output of other models (PtychoNet, PtychoNN, etc.) as initializations for ePIE.\"], \"questions\": [\"Maybe I missed this part: how does the proposed model eliminate global phase shift by design?\", \"What's the speed difference between ePIE and ePF?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a transformer-based method for ptychographic phase retrieval called PtychoFormer, and they propose to use it as an initialization for the existing ePIE method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The proposed PtychoFormer runs quickly and yields performance that is 1 or 2 dB better than two deep-network approaches from about 5 years ago.\", \"weaknesses\": [\"The problem statement and model assumptions are extremely vague.\", \"The background section never actually states which quantities are known, which are to be estimated, and which are nuisance parameters. The experiment section seems to suggest that the goals of estimation are the amplitude and phase of the \\u201ctransmission function,\\u201d but that leaves major questions as to whether the complex probe function and the lateral offsets Xj and Yj are perfectly known or not.\", \"The authors also never describe the prior knowledge on the amplitude and phase images. 
Figure 2 suggests that they are both natural grayscale images, yet completely unrelated, and it seems that the same convention was followed in the experiments. But it is difficult to understand how and why this would manifest in any practical setting.\", \"The authors never describe why there is no measurement noise.\", \"The representation of all quantities as continuous functions of location (x,y) is confusing and unnecessary, since in reality these locations are sampled on a discrete grid. The authors never actually state this, which further increases the confusion.\", \"In the end, it seems that the measurement vector (across all diffraction patterns) can be written abs(A*t) for known matrix A and unknown complex vector t, which means that it is a standard phase-retrieval problem that can be solved using a huge number of methods, not some specialized phase-retrieval problem.\", \"The discussion of \\u201cthe current state\\u201d of phase-retrieval algorithms is lacking, as are the methods considered for the numerical experiments.\", \"Well-known classical algorithms like HIO from the 80s are missing.\", \"The family of plug-and-play algorithms like prDeep, Deep-ITA, etc., are never mentioned.\", \"Recent deep-learning based approaches are not mentioned, such as those based on diffusion.\", \"Because all of the aforementioned methods would jointly process all diffraction patterns, they would likely substantially outperform any method that handles each diffraction pattern separately, such as PtychoNet, PtychoNN, and PtyNet.\", \"Likewise, if PtychoDV does not properly model the probe function, which seems to be what the authors are suggesting around line 192, then all of these aforementioned methods would likely substantially outperform PtychoDV as well.\", \"The proposed methods have strong limitations.\", \"PtychoFormer is heavily dependent on the assumption that the amplitude and phase of the transmission function are two unrelated natural images, and that massive 
training sets are available for both. No effort to justify this assumption is given. In practice, the amplitude and phase images will be highly interrelated, neither will be a natural image, and massive training sets will not be available.\", \"PtychoFormer is heavily dependent on the absence of measurement noise, which itself is never justified.\", \"The ePF method is intellectually trivial: just initialize ePIE with PtychoFormer. One could just as easily initialize ePIE with existing DL methods, which perform only slightly worse than the proposed PtychoFormer according to Figure 6.\", \"The experimental results are problematic.\", \"The PtychoNet and PtychoNN competitors appear to be very weak, as described earlier. Also, there are only two competitors tested, which is far fewer than in most ICLR papers. The PtychoDV method, which is much more closely related to the proposed PtychoFormer (since it is based on ViT), is not investigated.\", \"According to Figure 6, the proposed PtychoFormer reduces NRMSE in amplitude recovery by only 1.3 dB (i.e., 26%) relative to PtychoNN, which is a primitive method that processes diffraction patterns one at a time. This does not seem impressive.\", \"As for phase reconstruction, the competing methods in Figure 6 all produce NRMSEs that are greater than 1. But how could this be? A trivial method that always reports zero for the phase image would give an NRMSE of one. As a result, this casts doubt on the implementations of the competing methods. Furthermore, it means that PtychoFormer only provides a small benefit (25% reduction?) over the trivial all-zeros method, which again does not seem impressive.\", \"Moving on to Figure 7, the claim in subplots (d) and (e) that the proposed ePF method achieves an average 1e-8 recovery MAE and NRMSE is simply impossible. 
The impossibility can be verified by looking at the example of NRMSE at 20 pixel offset in subplot (c), which is 5 orders of magnitude larger!\"], \"questions\": [\"Which practical applications cause a natural image to manifest as the amplitude and an unrelated natural image to manifest as the phase?\", \"Is the light probe P really perfectly known as you seem to assume? If so, why does ePIE iteratively approximate it?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper proposes PtychoFormer, a hierarchical transformer-based method for solving the ptychographic phase retrieval problem.\\n\\nThe algorithm runs quickly and outperforms earlier (but somewhat outdated) neural-network based methods for phase retrieval. \\n\\nThe reviewers raised several (mostly unanimous) concerns. The idea seems somewhat incremental, the baselines are somewhat outdated and weak, and the performance gains over said baselines were marginal. Moreover, the evaluations were only on simulated data on natural images, so the applicability of the resulting model to real-world ptychographic microscopy problems is questionable.\", \"additional_comments_on_reviewer_discussion\": \"There was no response from the authors.\"}
9Qfja4ZQW0
A multi-region brain model to elucidate the role of hippocampus in spatially embedded decision tasks
[ "Yi Xie", "Jaedong Hwang", "Carlos D Brody", "David W. Tank", "Ila R Fiete" ]
We present a multi-region brain model exploring the role of structured memory circuits in spatially embedded decision-making tasks. We simulate decision-making processes that involve the cognitive maps formed within the CA1 region of the hippocampus during an evidence integration task, which animals learn through reinforcement learning (RL). Our model integrates a bipartite memory scaffold architecture that incorporates grid and place cells of the entorhinal cortex and hippocampus, with an action-selecting recurrent neural network (RNN) that integrates hippocampal representations. Through RL-based simulations, we demonstrate that joint encoding of position and evidence within medial entorhinal cortex, along with sensory projection to hippocampus, replicates experimentally observed place cell representations and promotes rapid learning and efficient spatial navigation relative to alternative circuits. Our findings predict conjunctive spatial and evidence tuning in grid cells, in addition to hippocampus, as essential for decision-making in space.
[ "place cell", "grid cell", "cognitive map", "multi-region interactions", "decision making", "neuroscience" ]
Reject
https://openreview.net/pdf?id=9Qfja4ZQW0
https://openreview.net/forum?id=9Qfja4ZQW0
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yUk623kKwk", "rtGKxhZqq8", "pp3skzcJAm", "pjGdQmOYZ0", "oZS2qNerrY", "nHEd8U89GW", "mG5iutHozU", "iT68AQhpYT", "geBYz09VEX", "eYJb04pXts", "eNsd4kuItN", "bcOqIqfoUO", "Wy2lYZydvN", "VlAT4DYXd8", "V0dOqkLffO", "UDq9JrqUkZ", "T0B5dRcVKy", "Q9MMxNKvnm", "Q4j1csOaAd", "Q0vUeHMHZq", "NFOoyhRhRq", "L4YHlvgsVR", "FEpvmKZCLN", "CBfeSVT9IG", "5exWCpEKbj", "5LRa4ew3ow", "4asK32o9gl", "2jgJ95Fr1d", "2IWLDvtRao", "1Bwe4CIlDZ" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733003475580, 1731959379954, 1730715106309, 1732925111790, 1731958933454, 1731958456607, 1732678304889, 1730654034542, 1731958686088, 1731957798735, 1731959418148, 1730624042894, 1737523899910, 1733176231743, 1732448991881, 1731959017945, 1732459209624, 1731958977681, 1732570120798, 1731957952749, 1730712805505, 1731958491688, 1735026110912, 1731959294463, 1733169851910, 1730460589652, 1732681324893, 1732678246448, 1732570699201, 1732641879497 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_LP6J" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_L6Mf" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_bpUy" ], [ 
"ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_LP6J" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_qnRf" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_bpUy" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_LP6J" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_TiD8" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Area_Chair_b2MW" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_qnRf" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_qnRf" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Authors" ], [ "ICLR.cc/2025/Conference/Submission8300/Reviewer_qnRf" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your thorough follow-up response, as well as the additional experiments and clarifications. I appreciate the effort you have put into addressing my concerns.\\n\\nRegarding the first concern about CA3 recurrence, I acknowledge the inclusion of new experiments (Fig 11 and Fig 12 in Appendix F) and the accompanying discussions in the manuscript. However, I remain unconvinced that the role of CA3 recurrence in generating conjunctive place cells can be fully excluded based on your current results. 
My primary concern is that the learning rule employed in your model is limited to Hebbian learning, which is inherently simplistic and not sufficient for solving many tasks that require more complex associative computations. This limitation might restrict the potential of CA3 recurrence to manifest its functional role in your simulations. I encourage further exploration in future work with more sophisticated learning rules to fully address this question.\\n\\nOn the second concern about baseline comparisons, I am satisfied with the additional standalone RNN baselines you included in Fig 2 and the related implementation details. The new results provide a more comprehensive view of the model's capabilities, and I appreciate the inclusion of these comparisons.\\n\\nGiven the current stage of revisions, I do not intend to propose any additional changes. I am considering raising my score from 5 to 6 but have not yet made a final decision. I will submit my evaluation before the deadline.\\n\\nThank you again for your time and dedication to addressing my feedback.\"}", "{\"comment\": \"*Thread [2/3]*\\n\\n**Continued: regarding weaknesses**\\n\\n> the claim that the model enables rapid learning in spatially embedded task is not warranted. how do the authors define rapid learning and efficient navigation (red and brown curves in Fig. 2)? \\u2026Rapid learning is usually used under the context of zero, one, or few-shot learning, and not thousands of trials.\\n\\nThank you for clarifying our terminology, and pointing us to valuable works in few-shot learning. The goal of the study is indeed not few-shot learning, but instead offering a biologically plausible testbed for exploring hypotheses of neural computation underlying multi-region brain interactions in spatially embedded decision-making tasks, formulated using reinforcement learning. 
Our use of \\u201crapid learning\\u201d is meant to be relative within our model variants (hypotheses) in the context of the paper, since the comparison is not otherwise fair. Though we stated this in most parts of our paper, e.g., in the abstract (line 22), and mostly mentioned the term when comparing model variants, e.g., lines 319-320, we understand this can be confusing. We have modified our wording to \\u201cfaster\\u201d, \\u201cmore rapid\\u201d, or similar terms as appropriate to make it more clear. We\\u2019d also like to add that the model behavior reflects the reinforcement learning behavior in reality. In experiments, the mice underwent at least 11 shaping stages of tasks in increasing difficulty. Mice were typically trained 5\\u20137 days/week, for one 1-h session per day, and took 6-7 weeks to learn the accumulating tower task (Pinto et al. https://doi.org/10.3389/fnbeh.2018.00036). \\n\\n\\n> it is unclear why M3 and M5 show similar increase in success rates of learning (Fig. 2A) when M3 does not get evidence from EC?\\n\\nWe would like to clarify our methodology. To reiterate, M3 gets a joint grid code of both evidence and position from MEC, the same as M5, as indicated in Table 1 and 2. The difference between M3 and M5 is whether the non-grid EC-HPC pathway is activated; it is not activated in M3, but activated in M5, also indicated in both tables. According to the experimental results shown in Fig 2, 6, 7, 8, the projection of conjunctive grid code to HPC alone is sufficient to give rise to conjunctive place code, which could explain why M3 and M5 have similar increase in success rates of learning. \\n\\n> Additionally, it is unclear why M0 and M5 show similar decrease in steps spent per episode (Fig. 
2B).\\n\\nHere's our intuition regarding the navigation efficiency of M0 and M5: non-grid sensory input to HPC (or sensory to RNN) is potentially helpful to capture the nuances in the environment, such as where the wall is (marked as \\u2018-1\\u2019, hinting the decision region). This source of nuances is present in M0, M4, and M5, and we indeed observe fast navigation in both M0 and M5. M4 does not exhibit fast navigation, which can be due to other confounds such as its disjoint grid code. This would similarly explain why M3\\u2019s increase in success rate is comparable to M5, but is slower in navigating. For added clarity, we added lines 358-360 in blue. Thank you for your feedback! \\n\\n> Representational analysis for M0 and M3 should be included in Fig. 3,4 and 5.\\n\\nRepresentation analysis for model variants is already included in the Appendix, such as Figure 7,8,9,10, to not deviate the attention from the comparison of joint vs disjoint grid code. This additional information in the Appendix is referred to throughout the main content when related results are discussed. A representation analysis for M0 would not be comparable to M1-M5 as it does not receive hippocampal input like other variants, therefore it is not valuable to the focus of our study; M0 should be treated as a baseline when an abstractor network, i.e., Vector-HaSH, is absent.\"}", "{\"summary\": \"The authors introduce a modular architecture which integrates position and sensory evidence to create a RL agent capable of performing a common experimental neuroscience task. They specifically show that integration of sensory information in the module responsible for grid cell activity is essential for performance of this task.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The methodology described in section 3.2 seems to accurately match the previously published methodology. 
The methods relating to learned representations (5.2-5.3.1) show a strong similarity to experimental properties such as place-field location biases. The conjunctive tuning curves (eg: Figure 3 & 7) are particularly convincing.\", \"weaknesses\": [\"Overall, it is difficult to follow the logic of the paper. The introduction begins with a very specific example of neurophysiological findings and then seeks to justify the choice of Vector-HaSH. However, the justification for this starting point is extremely weak (line 122-123). Given that the proposed model is a slight modification of this previously published work, by the introduction of an MLP, there must be a much stronger argument for the validity of the chosen base model.\", \"Relatedly, the model is a small modification of the previously published vector-hash, followed by detailed investigation of performance and similarity of learned representations to experimental findings. The work therefore seems more appropriate for a neuroscience venue. In the current format, it is unclear what the implications for representation learning are.\", \"Section 2.3 is an odd aside, and the text does not seem to link it to the proposed model or findings at hand.\"], \"questions\": [\"How can these findings be more directly linked to representation learning and machine learning at large? Beyond explaining specific phenomenon in neuroscience, there do not appear to be any general ML findings. While there are paragraphs (2.3 and discussion) claiming that neuroscience findings can guide neuro-inspired AI, no concrete predictions or general findings are made.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Figure 1 A-B is directly copied from Chandra 2023. 
This may be either a copyright concern or unintentionally reveal the authors of the paper.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Our existing results have addressed reviewer qnRf\\u2019s primary concerns\", \"comment\": \"We sincerely appreciate the reviewer\\u2019s prompt response and the opportunity to clarify our approach.\\n\\n> I assume there are a total of 3 sets of weights to train in your \\\"policy network\\\": the input to the RNN, the recurrent weights, and the readout weights. Training 3 sets of weights using deep RL allows the model to learn additional representations/features to solve the task, for instance integrating temporal information (Singh et al. 2023 Nature Mach. Int.). Perhaps the EC-HPC network does not capture the temporal dependencies of the Tower Accumulating task, and instead the RNN is the one that is learning to integrate these temporal features to successfully learn a policy. I am starting to think this might be the case.\\n\\nWe would like to kindly reiterate that the reviewer\\u2019s primary concerns are already addressed by the standalone RNN baseline (Fig. 1, black). The standalone RNN\\u2014receiving concatenated sensory, positional, and evidence-velocity inputs\\u2014performs poorly, whereas the EC-HPC network (i.e., Vector-HaSH) significantly outperforms it. This result clearly demonstrates that the EC-HPC network, not the RNN, is essential for capturing temporal dependencies and integrating sensory features over time.\\n\\nFurthermore, the reviewer\\u2019s concern that the EC-HPC network does not adequately capture temporal dependencies is addressed by our results in Fig. 4 and Fig. 5, B1. Fig. 4 shows that multiple variants of the EC-HPC network exhibit hippocampal firing fields with respect to positions or accumulated evidence, demonstrating that the hippocampus learns the temporal aspects of the task via MEC information flow. Additionally, Fig. 
5, B1, illustrates that hippocampal features align with positional information in low-dimensional PC space. Together, these results confirm that temporal integration is intrinsic to the EC-HPC network\\u2019s design.\\n\\nThus, the reviewer\\u2019s concerns are already resolved by the current results. If the paper is accepted, we can clarify and further emphasize this aspect in the camera-ready version.\\n\\n> A model without the RNN but just the policy network will be more convincing to show that the temporal integration of sensory information is done by the EC-HPC network.\\n\\nThank you again for the constructive comments! This is indeed an insightful way to strengthen the learning scheme. If accepted, we think it would be valuable to extend our work to include results using this approach. However, as we [previously elaborated](https://openreview.net/forum?id=9Qfja4ZQW0&noteId=2jgJ95Fr1d),\\n``` our current approach of integrating RNN with Vector-HaSH is grounded in existing literature, as the most common method to model behavioral and neural processes (e.g., Lee et al., 2023; Gershman & \\u00d6lveczky, 2020). By incorporating Vector-HaSH, which generates more biologically plausible representations, our framework showcases how these representations can enhance existing RL-based models for studying both behavioral and neural processes.```\\nAdditionally, our use of an RNN abstracts the subcortical and cortical regions (lines 62-63), which are integral to the decision-making process. Modeling these regions as an auxiliary readout, as suggested, would oversimplify their role and deviate from our objective of capturing biologically plausible interactions.\\n\\nAs we mentioned earlier, our paper already shows that Vector-HaSH clearly shows the temporal integration of sensory information via Fig 4 and Fig 5, B1. 
Consequently, although we did not include a fully biologically plausible model but rather combine Vector-HaSH with a RNN policy network learned by the standard reinforcement learning method, the reviewer\\u2019s main concern on proving temporal integration by Vector-HaSH is already resolved with current results.\\n\\nThank you once again for deeply engaging with our work and offering constructive feedback. We hope our further clarifications have sufficiently addressed your concerns and resolved any potential misunderstandings. We also hope that our elaboration on how our current results address the primary issues raised will provide sufficient grounds for reconsideration and raising the score.\"}", "{\"comment\": \"*Thread [1/3]*\\n\\nThank you very much for your detailed and thoughtful feedback! We highly appreciate your time helping us to improve the quality of our work. Here we address your comments in order, in addition to modifications made to the manuscript, including Fig 1C, 1E, Discussion, and other minor edits in blue for added clarity on methodology and result interpretation.\\n\\n**Regarding weakness:**\\n\\n> For instance, CA3 in the hippocampus has extensive recurrent connections\\u2026This simplification may have critical effects when studying how the model produces co-tuned place cells and choice-specific firing since it removes internal computation within the hippocampus, relying instead on upstream grid cell networks.\\n\\nThe consideration of CA3 recurrence is a very valid point, but we didn\\u2019t include it due to the scope of our work. We have added an additional paragraph discussing this limitation to our Discussion section. For TLDR; reasons are several-fold, such as the reasons being brought up by reviewer bpUy (Question 1) that we focus on entorhinal-hippocampal interactions due to its importance to cognitive map theory and spatial navigation. 
Furthermore, considering CA3 recurrence is, in fact, complementary to our work, as testing the counterfactuals of CA3 recurrence would still involve M0-M5, and their variants. This extra consideration can be easily tested using our framework as detailed in the additional paragraph in Discussion, but it adds an extra layer of complexity, making it outside the scope of this work. Since our work generates a straightforward, falsifiable prediction, we are currently collecting neurophysiological data to verify our predictions\\u2013the result will directly inform whether other mechanisms play a role, such as CA3 recurrence, and can be easily investigated in our current framework as described in lines 518-522. \\n\\n> For a balanced evaluation, it would be essential to compare the performance of a standalone RNN (without Vector-Hash) receiving both physical velocity and sensory evidence, with an equivalent number of neurons.\\n\\nThank you for pointing this out. M0 is a standalone RNN, intaking sensory only. All model variants have RNNs of the same size (hidden size of 32, indicated in Appendix A.1). The point of including M0 is to provide a baseline when an abstractor of velocity information, i.e., VectorHaSH, is absent. If the standalone RNN instead receives abstracted velocity information, in contrast to other models that receive hippocampal representations, the task would be trivial. We could be misunderstanding the proposal, e.g., did you mean to have an RNN of size 32 to receive sensory vectors concatenated with MLP-abstracted evidence velocity, instead of just the sensory vectors in the current setup? Currently, in M0-M5, no RNN receives velocity information directly. Any clarification would be appreciated, thank you!\"}", "{\"comment\": \"*Thread [1/2]*\\n\\nThank you very much for your valuable time and feedback in helping us improve our work! Here we address the weaknesses and questions. 
Additionally, to enhance the clarity of our methodology, we have added additional figures, Fig 1C, 1E, Fig 10, and further elaboration in lines 198-200 and Appendix A.1.\\n\\n> The neocortex is modeled as a MLP and appears to the only channel by which sensory input enters the EC-HPC circuit.\\n\\nWe\\u2019d like to kindly clarify\\u2014sensory information in non-grid EC layer (Fig 1B green) can also enter the circuit (M2, M4, M5; detailed in Table 1, 2).\\n\\n> Since the MLP module plays the important role of extracting the evidence velocity and conveying to the Vector-HaSH module, it is unclear comparison with Models 0-2, which do not have the MLP module, is fair.\\n\\nWe believe the comparison is rigorous and logical. The main goal of the paper is testing hypotheses of neural computation, as shown in Table 1\\u2013the point of including M1, M2 is to rigorously span the entire hypothesis space of how conjunctive coding in place cells arise. To put Table 2 more clearly, the inclusion of M1-2 follows a simple logic, where the hypothesis space is\\n\\n| Evidence from EC | Evidence from MEC | Corresponding model of the hypothesis |\\n| ---------------- | ---------------- | ------------------------------------- |\\n| False | False | M1 |\\n| True | False | M2 |\\n| False | True | M3 |\\n| True | True | M4, M5 |\\n\\nThe point of including M0 is to provide a baseline when an external integrator of velocity information, i.e., VectorHaSH, is absent. \\n\\n> It is not entirely clear how the different coding\\u2026joint coding scheme emerged in M4\\u2026Do different grid codes in Models 3-5 emerge due to difference in the architecture? Or other network parameters?\\n\\nThank you for pointing this out! Line 309 (now line 304) was a typo, and is now fixed. The grid coding scheme is carefully controlled by us to test how different coding schemes affect the downstream computation, e.g., place cell representation, task performance. 
\\n\\nTo add clarity, we added Fig 1C and 1E in the latest version to illustrate the differences between the coding scheme and model architecture, and elaborated in lines 198-200 and Appendix A.1. As Fig 1C shows, the disjoint grid code means each grid module only gets velocity input of one variable, represented by one axis of the 2D representation space; the joint grid code means both axes of the 2D space are utilized in each grid module. Both coding schemes are capable of providing distinct code for different states of position and evidence, but the downstream representation and performance turns out to be different. \\n\\nThe above also addressed both questions you had. The hyperparameters such as network size and learning rate are shared across models\\u2013these details are stated in Appendix A.1. Please let us know if anything else is still unclear, and we are more than happy to elaborate.\"}", "{\"title\": \"References [2/2]\", \"comment\": \"We are keen to hear your thoughts and hope our clarifications have sufficiently addressed the concerns and provided enough reasons for a consideration of raising the score. Please let us know if we could assist with further comments or questions. Thank you again for your time and feedback.\\n\\n[1] Montague et al., A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci., 1996.\\n\\n[2] Schultz et al., A neural substrate of prediction and reward. Science, 1997.\\n\\n[3] Lee et al., A feature-specific prediction error model explains dopaminergic heterogeneity. Nature Neuroscience, 2024.\\n\\n[4] Miller et al., Cognitive Model Discovery via Disentangled RNNs. NeurIPS, 2023.\\n\\n[5] Pinto et al., Task-dependent changes in the large-scale dynamics and necessity of cortical regions. 
Neuron, 2020.\\n\\n[6] Bondy et al., Coordinated cross-brain activity during accumulation of sensory evidence and decision commitment, 2024\\n\\n[7] Brunton et al., Rats and humans can optimally accumulate evidence for decision-making. Science, 2013.\\n\\n[8] Gershman & \\u00d6lveczky, The neurobiology of deep reinforcement learning. Current Biology, 2020.\"}", "{\"summary\": \"This paper proposed a multi-region brain model for the hippocampo-entorhinal-neocortical circuit in a spatially embedded decison-making task. The purpose of the model is to understand the neural mechanism underlying the conjunctive encoding of position and evidence in the hippocampus in the accumulating tower task. By simulating the task as a RL problem, this paper demonstrates that 1) conjunctive encoding of position and evidence in the MEC and 2) non-grid sensory input from EC to HPC are necessary for the conjunctive representation in the HPC. The conclusion is reached by performing a rigorous testing of mutiple alternative hypothesese and comparing the model representations with hippocampal representations obtained experimentally. Overall, this is a solid paper that asks and answers a very interesting neuroscience question with well-designed experiments making use of an existing model.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Rigorous testing of alternative hypotheses to the proposed one: I really appreciate the examinations of other possible mechanisms of conjunctive hippocampal encoding (Table 1 and 2) in the paper, which is sometimes missing in many computational neuroscience papers. 
These examinations not only serve as an ablation study that makes the results more robust but are also biologically interpretable, which definitely strengthens the argument of the paper\", \"Reference to experimental results (i.e., Nieh et al., 2021): Many computational neuroscience works assume the reader, usually from a computational background, understands the experimental setup they simulate very well. In such cases the reader has to carefully go through the original experimental work to fully understand the context of the model. This paper does quite a good job in this regard by clearly presenting the simulated task very early and referring to the experimental results from Nieh et al. 2021 frequently.\"], \"weaknesses\": [\"I didn't find any significant weakness of this paper that can be summarized under general topics. There are some detailed questions/unclarity I wish the authors could address, which I noted under Questions.\"], \"questions\": [\"Why do the authors study the two specific reasons i.e., conjunctive encoding of space and evidence in the MEC and sensory inputs, other than other possible mechanisms underlying the conjunctive hippocampal encoding? Are there any experimental evidence demonstrating that these two are the most likely/important ones? I understand these two reasons are related to the interactions between (M)EC and HPC, but is it possible that the conjunctive representation in HPC is also due to internal reasons e.g., interactions between HPC subregions? The authors might want to add a sentence or two to highlight why they chose to study these two reasons specifically.\", \"The grid-cells module in the model takes pos and evidence as inputs. I understand that the evidence is a MLP-projected representation of the sensory inputs, which the authos presented quite clearly in lines 215-222. However, it is unclear how position is represented and input into the grid cells. 
Is it encapsulated in the CAN module?\", \"The MLP module models how evidence is extracted from sensory inputs, and from Fig1B, it seems to receive sensory inputs directly from non-grid EC. Biologically, is this a known mechanism? Could it be receiving inputs directly from sensory regions? Which brain region and what neural mechanism does this MLP correspond to? Maybe something to include in the Discussion.\", \"How did you achieve disjoint grid cell encoding of space and evidence in your model? i.e., computationally and mathematically, how did you make a difference between joint and disjoint grid cell codes? You may want to include this in section 3.2 Model Setup, or in the appendix.\", \"In line 309 'we show that our multi-region brain model (M4)' - you mean M5 right?\", \"Fig3 left: if space allows, you may want to include Fig1d from Nieh et al. 2021 to make a contrast to right-choice-selective place cells, as readers unfamiliar with their work might struggle to understand what this diagram means.\", \"Section 5.3.1: I feel this whole subsection is a bit unclear. I wish you can check/clarify the following points:\", \"I think this section aims to demonstrate only M5 shows separable clusters of tasks variables, but the title says 'only joint integration model exhibits...' - both M3 and M5 are joint integration model, but M3 doesn't have activated EC pathway right? So I guess what you want to say is 'only joint integration model with activated EC pathway exhibit...'? This is also related to the fact that you are showing in Fig5 that both M4 and M5 are action-separable, but M4 is a disjoint model.\", \"On line 431, you refer to Fig 5 A1/A2 for separable clsuters. I guess you should refer to B1/B2?\", \"On line 467, you mentioned that you did not observe separability in accumulated evidence. 
Can you show some example, at least in the appendix?\", \"I also feel the whole Fig5 could improve with a re-wording of the caption clarifying which model (M3,4,5) each row (A, B) and each column (1,2,3,4) correspond to. Currently it is not very clear.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"*Thread [1/1]*\\n\\nThank you very much for your time, and insightful, detailed comments. We truly appreciate your kind words, and your valuable time helping us to improve the quality of our work. Here we address your questions and feedback in order, with modifications applied to Discussion, Fig 1, Fig 5 caption; we added Fig 10, and additional text to Method (in blue):\\n\\n> \\u2026is it possible that the conjunctive representation in HPC is also due to internal reasons e.g., interactions between HPC subregions? The authors might want to add a sentence or two to highlight why they chose to study these two reasons specifically.\\n\\nThank you\\u2013it is a very valid point regarding the internal interactions within HPC subregions. Aside from the reasons you mentioned, we did not include it for reasons we just added to the Discussion section of the manuscript (in blue, lines 516-525). TLDR; being the consideration of e.g., CA3 recurrence would be complementary to our current work. And, given we have a falsifiable and straightforward prediction of conjunctive grid code, we are collecting neurophysiological data to verify this prediction directly. The result will directly inform whether other mechanisms play a role, and can be easily investigated with our current framework with methods described in lines 518-522.\\n\\n> However, it is unclear how position is represented and input into the grid cells. 
Is it encapsulated in the CAN module?\\n\\nPositional velocity is represented as 0 (stuck) or +1 (forward) without the use of MLP given backward-moving is not task-relevant (Nieh et al., 2021, see behavioral training), though it is possible to use an MLP as well. It will not affect the results. Generally, the velocity inputs to each grid module updates the grid phases through path integration, following Vector-HaSH implementation in Chandra et al., 2023, akin to Burak & Fiete, 2009. For added clarity, we added a grid code schematic of how velocity is represented with respect to inputs in Fig 1C and elaborated in lines 198-200 and Appendix A.1. \\n\\n> The MLP module models how evidence is extracted from sensory inputs, and from Fig1B, it seems to receive sensory inputs directly from non-grid EC\\u2026what neural mechanism does this MLP correspond to? Maybe something to include in the Discussion.\\n\\nIn our model, the sensory input and the encoding of non-grid EC are technically the same as a consequence of our setup (lines 215-217). The MLP predicts velocity from sensory directly instead of from non-grid EC (lines 200-201). We agree our schematic was misleading, and thank you a lot for pointing this out! We have modified Fig 1B and 1E correspondingly. We have also added to the Discussion our justification of using an MLP (lines 526-532 in blue), in addition to the earlier mention of its potential biological implication in lines 227-228.\\n\\n> How did you achieve disjoint grid cell encoding of space and evidence in your model? i.e., computationally and mathematically, how did you make a difference between joint and disjoint grid cell codes? You may want to include this in section 3.2 Model Setup, or in the appendix.\\n\\nThanks for pointing this out! We added Fig 1C for added clarity and elaborated in lines 198-200 and Appendix A.1 (as mentioned above in pt. 2). 
The joint/disjoint grid coding scheme is a manual setup so we can have a careful, controlled comparison of both possibilities. Please let us know if it\\u2019s still unclear, and we are happy to clarify more.\\n\\n> In line 309 'we show that our multi-region brain model (M4)' - you mean M5 right?\\n\\nYes, thanks. This is now fixed.\\n\\n> Fig3 left: if space allows, you may want to include Fig1d from Nieh et al. 2021 to make a contrast to right-choice-selective place cells, as readers unfamiliar with their work might struggle to understand what this diagram means.\\n\\nYes, it was a part of the consideration, but we removed it since Nieh et al. confirm that Fig1e was the experimental outcome, so we didn\\u2019t want to confuse readers with the other possibility that was ruled out experimentally.\\n\\n> Section 5.3.1: I feel this whole subsection is a bit unclear. I wish you can check/clarify the following points\\u2026\\n\\nThank you for pointing this out and for your constructive feedback! We have made modifications with respect to each point. Specifically, we modified the section 5.3.1 title and fixed the relevant typo. We have included the separability of accumulated evidence in Figure 10. We have also reworded Fig 5 caption. \\n\\n----\\n\\nPlease let us know if further clarifications are needed. Thank you again for your time and valuable feedback!\"}", "{\"comment\": \"*Thread [1/2]*\\n\\nThank you very much for your valuable time and feedback in helping us improve our work! Here we address the weaknesses and questions respectively, in addition to the revision made in Introduction to emphasize on the ML relevance of our work (lines 65-70, in blue) \\n\\n**Regarding weakness**\\n\\n> Overall, it is difficult to follow the logic of the paper. 
The introduction begins with a very specific example of neurophysiological findings and then seeks to justify the choice of Vector-HaSH.\\n\\nWe would like to respectfully point out that the neurophysiological findings we reference (Nieh et al., 2021, Nature) are not just specific examples but are representative of broader and well-established principles in cognitive map theory, which is foundational for our study (Section 2.1). We chose these findings because they offer robust experimental evidence regarding place cell conjunctive tuning of physical and cognitive variables, which serves as a strong ground truth for testing hypotheses in our model. By aligning our simulations with these generalizable experimental results (Fig 3, 4), we demonstrate the biological relevance and predictive power of our ML-oriented approach. Therefore, the introduction of these findings is not only logical but essential for grounding the context of our work.\\n\\n> the proposed model is a slight modification of this previously published work, by the introduction of an MLP\\n\\n> Relatedly, the model is a small modification of the previously published vector-hash...\\n\\nWe would like to kindly point out that our method is not just a simple modification of Chandra et al., (2023). While it is true that our approach to modeling the entorhinal-hippocampal interactions is based on Vector-HaSH, it's not the entirety of our multi-region model (Section 3.1, Fig 1B, D, E; e.g., we introduce a new framework for modeling the entorhinal-hippocampal-neocortical loop, as highlighted by Reviewer LP6J). The model is established to be biologically grounded (concise reasons listed in lines 121-122), serving our purpose of building a neural computation testbed given its mechanistic nature and proper level of abstraction; other well-constructed mechanistic models of entorhinal-hippocampal interactions are not available to our best knowledge. Thus, we did not reinvent the wheel. 
However, the model\\u2019s\\n\\n1. interaction with cortical and subcortical regions,\\n2. goal-directed task performance and neural representation under different neural computational rules,\\n3. application to neuroscience discoveries as an ML model,\\n\\nare unexplored\\u2014we explored all of the above.\\nThe main contribution of our work is a biologically plausible ML-based framework (lines 77-82) that serves as a testbed for hypothesis-driven neural computation discoveries (Table 1, 2, Section 4). Further, we directly leveraged this framework to make straightforward, falsifiable neurophysiological predictions in important topics of neuroscience (Section 2.1). This aspect alone already makes our work well suited and important in our primary area, \\u201c**applications to neuroscience & cognitive science**.\\u201d\"}", "{\"comment\": \"*Thread [3/3]*\\n\\n**Regarding questions:**\\n\\n> However, it is unclear whether they used a biologically plausible learning rule or backpropagation for policy learning i.e. policy gradient.\\n\\nThe connectivity between non-grid EC and HPC is learned through Hebbian learning (Equations 4, 5). We used a policy gradient (REINFORCE) for the RNN. The RL training of the RNN was briefly described in lines 247-248, \\u201cwhich is an action-selection RNN policy trained through policy gradient under reinforcement learning\\u2026\\u201d \\n\\n> why is there a disparity in M0's and M5's learning performance in Fig. 2A and 2B? Shouldn't we expect a model with a slower increase in cum. success rate to demonstrate a slower decrease in steps spent per episode?\\n\\nAn agent like M0 can quickly get to the end yet still fail to turn in the correct direction at the end if it does not learn the rule of accumulating evidence. For example, M0 can quickly navigate by memorizing to keep going forward until a certain position and then turning randomly left or right.
This is why we examine both metrics in Fig 2.\\n\\n> Since separability of clusters is not a prerequisite for navigation behavior (M3 vs M5 Fig. 9 vs Fig. 2), how can we make sense of the representation to behavior?\\n\\nIt is true that while the clusters are not separable in the low-dimensional PC space for M3, M3 place cells still exhibit many important biological properties observed in Nieh et al., such as the splitter cell phenomenon (i.e., we see choice-specific place cells) and a conjunctive place code of position and evidence, as shown in Fig 8, A3 and B. However, low-dimensional separability implies that downstream processing in cortical regions would be easier, as indicated by M5\\u2019s superior learning speed in Fig 2A (on par with M3) and fast navigation shown in Fig 2B in comparison to other models.\\n\\n----\\n\\nPlease let us know if further clarifications are needed. Thank you again for your time and valuable feedback!\"}", "{\"summary\": \"This paper models the entorhinal-hippocampal-neocortical circuit in the brain using a Vector-Hash + RNN model and trains it with reinforcement learning to accumulate sensory evidence and make decisions. The authors test several model hypotheses, including whether the MEC receives sensory evidence, whether sensory information and spatial information are co-represented, and whether the HPC binds sensory and spatial information through associative memory. Their results indicate: (1) in terms of task performance, inputting both spatial and sensory information to the MEC improves learning speed; (2) in terms of neural experiment interpretation, the mixing of sensory and spatial information in the MEC results in cotuning of evidence and position in HPC place cells, as well as choice-specific firing patterns, similar to experimental findings.
These results suggest that integrating multimodal information in the MEC for abstract path integration may be a biological strategy for decision-making tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The first strength lies in the novelty of the model architecture. This paper introduces a new framework for modeling the entorhinal-hippocampal-neocortical loop, combining an RNN trained with gradient backpropagation with a hand-designed, Hebbian-learning-based Vector-Hash module. This approach offers a unique testbed for exploring hypotheses about brain region function and information transfer, providing valuable insights for future research in this area.\\n\\nAnother strength is the linkage between experimental neural findings and the model\\u2019s performance on specific tasks. The paper connects cotuning of evidence and spatial location, as well as choice-specific place cell firing, with efficient spatial navigation and rapid learning in the network, potentially advancing brain-inspired intelligence research.\", \"weaknesses\": \"The primary limitation is the oversimplification of the biological network and the lack of rigorous model comparisons, which affects the credibility of the findings. First, the model reduces the entorhinal-hippocampal circuit to a Vector-Hash model, which, while capable of explaining associative and episodic memory functions attributed to these areas, significantly simplifies the actual biological network. For instance, CA3 in the hippocampus has extensive recurrent connections, whereas the Vector-Hash model lacks internal connections, reducing the hippocampus to an intermediary layer connecting MEC and LEC. 
This simplification may have critical effects when studying how the model produces co-tuned place cells and choice-specific firing since it removes internal computation within the hippocampus, relying instead on upstream grid cell networks.\\n\\nSecondly, when proposing joint integration of evidence and spatial velocity in the MEC as a means to enhance learning, the paper lacks a fair comparison of different models' capabilities. For a balanced evaluation, it would be essential to compare the performance of a standalone RNN (without Vector-Hash) receiving both physical velocity and sensory evidence, with an equivalent number of neurons. This is a minimal comparison proposal to clarify Vector-Hash\\u2019s role, although it alone may not be fully sufficient for rigorous validation.\", \"questions\": \"Overall, I appreciate the paper\\u2019s attempt to model the entorhinal-hippocampal-neocortical circuit, especially combining classic theoretical models with RNNs. However, I am somewhat skeptical about the reliability of the conclusions, which impacts my overall assessment. I have several questions, including both major and minor ones, related to the results and model details.\\n\\n**Major Questions:**\\n1. In hypothesis M2, where sensory evidence is bound to the HPC, and downstream RNN receives HPC activity as input, why does the network fail to learn effectively, performing worse than an RNN receiving sensory evidence directly? Since spatial location is also useful in this task, as the agent needs spatial navigation for reward acquisition, the paper suggests that coupling both inputs makes it difficult for the downstream RNN to decouple them. Could the authors elaborate on this point?\\n\\n2. According to Figure 2B, the RNN in M0 can perform efficient navigation, even outperforming the network in M3. This seems counterintuitive, as M3 includes additional spatial location information, which is crucial in a spatial navigation task. 
Could the authors discuss this in more detail?\\n\\n3. In the disjoint integration model (M4), no co-tuned place cells emerge in the HPC. Under this model framework, is it because separate grid cell modules lead to orthogonal representations of evidence and position? If so, a natural prediction would be that \\\"evidence cells,\\\" tuned purely to evidence spacing, might appear in the HPC. Is this observed in the actual results?\\n\\n4. I believe that omitting recurrent connections within the HPC contributes to the lack of co-tuned cells when evidence and spatial location are combined in the HPC. If the HPC were replaced with a standard RNN receiving both MEC spatial input and LEC evidence input, would co-tuning place cells still emerge even if the MEC does not receive evidence or integrates evidence and position disjointly? I recommend the authors investigate this, as it may determine whether the findings are artifacts due to biological oversimplification.\\n\\n**Minor Questions:**\\n1. In Equation (1), the operation CAN(.) is not clearly defined in the main text or the Appendix. As CAN is a classic, well-studied dynamical model with various implementations, could the authors clarify the specific form used here?\\n\\n2. In the joint-integration model, how exactly are evidence and position information combined before being input to the MEC? Is it a simple summation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Maintain score\", \"comment\": \"I appreciate the authors' clarifications. The model's strength lies in its rapid map learning, akin to episodic memory. However, I am concerned about its ability to adapt to other neuroscience experiments.\\n\\n> Additionally, our use of an RNN abstracts the subcortical and cortical regions (lines 62-63), which are integral to the decision-making process. 
[Author Response]\\n\\nThe authors mentioned that the M0 (RNN + RL) network is not able to solve the task (Fig. 2), and the grid+place cell network added to the RNN improves learning. Hence, it seems contradictory to me that the authors emphasize the importance of using an RNN to model decision-making again. As mentioned before, it will be much clearer to show that a grid-place-policy network, without the RNN, improves policy learning. Please see the list of references given before that shows a simpler model and grounding to neuroanatomy, which will make it easier to adapt the current model to other simulations. To me, overcomplicating a model to show multi-region interaction is obscuring the computational understanding offered by simpler alternatives.\\n\\n> Such an understanding could reveal how cognitive maps could be leveraged not only to navigate physical spaces but also to guide cognitive decisions. [Pg. 1, Line 51]\\n\\n> We believe the current scope of our work is well contained. [Author Response]\\n\\nI would like to re-emphasize that the current scope of the experiments to make the claim that the 'cognitive' map learned by the model improves policy learning is insufficient. The authors should have shown at least 1 additional simulation in a different environment, perhaps with obstacles, or a non-spatial but cognitive task to further validate that the representations learned by the model are general. \\n\\nAnother suggestion is to show that the model replicates the rapid learning seen in Tse et al. 2007 Science by learning schemas. Some modeling papers include Hwu & Krichmar 2020 Biological cybernetics and Kumar et al. 2021 arXiv 2106.03580. \\n\\nThe current model is capable of improving our understanding of a range of interesting cognitive questions, e.g., schemas, beyond the tower accumulation task. I urge the authors to consider these suggestions to strengthen the model's robustness.\"}", "{\"comment\": \"Thank you for your clarification.
This paper is much more complete and clearer now. I don't have any more concerns. Good luck :)\"}", "{\"comment\": \"*Thread [3/3]*\\n\\n**Regarding minor questions**\\n\\n> As CAN is a classic, well-studied dynamical model with various implementations, could the authors clarify the specific form used here?\\n\\nThank you for the clarifying question. The velocity inputs to each grid module update the grid phases through path integration, following Vector-HaSH in Chandra et al., 2023, akin to Burak & Fiete, 2009. We have also added Fig 1C for added clarity on the disjoint/joint grid module coding scheme, and elaborated in lines 198-200 and Appendix A.1. \\n\\n> In the joint-integration model, how exactly are evidence and position information combined before being input to the MEC? Is it a simple summation?\\n\\nDifferent task variables each utilize a different axis of a grid module, also addressed in the new Fig 1C and added lines 198-200, along with Appendix A.1.\\n\\n----\\n\\nPlease let us know if further clarifications are needed. Thank you again for your time and valuable feedback! We hope that our response and corresponding edits provide sufficient reasons to raise the score.\"}", "{\"comment\": \"Thank you for your response and for taking the time to update the manuscript. However, after reviewing your reply, I feel that my primary concerns regarding the weaknesses of the study have not been fully addressed. I would like to elaborate further, especially considering the importance of these points to the validity of your core conclusions.\\n\\n1.\\tOn the exclusion of CA3 recurrence:\\n\\nOne of the key contributions of your paper is the prediction that joint integration of velocity and evidence in grid cells is critical for producing co-tuning of place cells in the hippocampus.
However, as I mentioned in my review, this conclusion may be significantly influenced by the structural simplifications in your model, particularly the omission of CA3 recurrent connections. The hippocampus, as a central component of the cognitive map, is widely understood to integrate information from multiple sources, and the recurrent connections in CA3 are thought to play an essential role in this integration.\\nWhile I understand that the scope of your work focuses on entorhinal-hippocampal interactions, simply stating that your prediction is falsifiable does not justify omitting CA3 recurrence from your analysis. Instead, it is essential to explain why the exclusion of CA3 recurrence does not compromise your prediction or, at the very least, discuss how it might influence the interpretation of your results. This is especially crucial given that your model already incorporates projections from the lateral entorhinal cortex (LEC) to the hippocampus, which should facilitate evidence integration in the hippocampus.\\n\\nConsidering the central role of this prediction in your study, a more thorough justification or analysis is necessary to demonstrate that the lack of CA3 recurrence does not undermine your conclusions. Otherwise, the validity of this prediction remains questionable and risks being an artifact of the model's structural oversimplification.\\n\\n2.\\tOn the role of a standalone RNN:\\n\\nIn your manuscript, the first section of the Results is titled \\\"Joint Integration of Position and Evidence in MEC Induces Rapid Learning\\\". This title implies that introducing joint coding of velocity and evidence in the Vector-Hash model (e.g., M3 and M5) leads to faster learning compared to a standalone RNN (e.g., M0). Indeed, Figure 2A shows that M3 and M5 achieve higher accuracy more quickly than M0. 
However, I believe this comparison is not entirely fair for the following reasons:\\n\\n\\u2022\\tThe RNN in M0 has significantly fewer total neurons than the combined RNN and Vector-Hash modules in M3 and M5, which inherently puts M0 at a disadvantage in learning capacity.\\n\\n\\u2022\\tThe RNN in M0 receives only sensory information as input, whereas the Vector-Hash models also encode velocity information, which is highly task-relevant.\\n\\nTo make a fair comparison, I suggest including an additional condition where an RNN with the same number of neurons as M3 or M5 receives both velocity and sensory inputs. This would allow readers to directly compare the learning capabilities of the standalone RNN and the proposed RNN + Vector-Hash model.\\n\\nIf your intention is not to claim that the RNN + Vector-Hash model learns faster than a standalone RNN, then the phrase \\\"induces rapid learning\\\" should be clarified in the manuscript. Specifically, the comparison target for this claim should be explicitly stated to avoid potential misinterpretations.\"}", "{\"comment\": \"*Thread [2/3]*\\n\\n**Regarding major questions:**\\n\\n> In hypothesis M2\\u2026why does the network fail to learn effectively, performing worse than an RNN receiving sensory evidence directly? Since spatial location is also useful in this task\\u2026\\n\\nSpatial information is indeed useful to the task, but not as critical as the ability to accumulate evidence correctly. For example, an agent could navigate quickly simply due to memorization, yet ultimately fail the task if it turns left or right randomly at the end. The difficulty for downstream decoupling can be a consequence of M2 missing evidence firing fields (Fig 8, A2), potentially due to the difficulty of disentangling evidence from sensory information through simple projection, without a structured abstractor, i.e., grid cells. This is in contrast to the well-performing M3-M5 (Fig 8, A3, and Fig 4, B2, B3).
\\n\\n> According to Figure 2B, the RNN in M0 can perform efficient navigation, even outperforming the network in M3. This seems counterintuitive, as M3 includes additional spatial location information, which is crucial in a spatial navigation task. Could the authors discuss\\u2026\\n\\nIntuitively, the grid code provides rigid information about the environment and the task at hand, such as the spatial information you mentioned. On the other hand, non-grid sensory input to HPC (or sensory to RNN) is potentially helpful for capturing the nuances in the environment, such as where the wall is (marked as \\u2018-1\\u2019, hinting at the decision region). This source of nuances is present in M0, M4, and M5, and we indeed observe fast navigation in both M0 and M5. M4 does not exhibit fast navigation, which can be due to other confounds such as its disjoint grid code. For added clarity, we added lines 358-360 in blue. Thank you for your feedback! \\n\\n> In the disjoint integration model (M4), no co-tuned place cells emerge in the HPC. Under this model framework, is it because separate grid cell modules lead to orthogonal representations of evidence and position? If so, a natural prediction would be that \\\"evidence cells,\\\" tuned purely to evidence spacing, might appear in the HPC. Is this observed in the actual results?\\n\\nThank you for pointing this out. We did not observe HPC cells tuned purely to evidence or to position in M4. This can be inferred from Fig 6D directly: for example, a cell only tuned to evidence should have a mutual information value on the X=Y line in D1, since the positional information (Y) should not matter. To further make sure this is not an artifact due to EC projection to HPC, we ran M4 without an activated EC-HPC pathway.
We observed a similar mutual information pattern to that in Fig 6D, and similar ExY fields to those shown in the top row of Fig 3, where the firing fields are rigid and multiple ExY peaks can exist in one cell, but we do not see any continuous stripes with respect to one axis like those in Fig 7A. We did not identify choice-specific cells in either case (a visualization for M4 is in Fig 4, A2).\\n\\n> I believe that omitting recurrent connections within the HPC contributes to the lack of co-tuned cells when evidence and spatial location are combined in the HPC\\u2026I recommend the authors investigate this, as it may determine whether the findings are artifacts due to biological oversimplification.\\n\\nThank you for the proposal; this is a potential next step of our work, depending on the experimental verification result. We have discussed our reasons for exclusion when addressing the paper's weaknesses above (e.g., our predictions are experimentally testable and falsifiable) and in the new paragraph added to Discussion (lines 516-526). The current findings would not be affected and are not the outcome of simplification, as our setup carefully tests the effect of grid code on place cell representation and behavior performance. Testing the effect of CA3 recurrence would still require various ablation studies including our M0-M5 and their variants. The role of CA3 recurrence can be complementary, and is definitely not mutually exclusive to the current prediction.\"}", "{\"comment\": \"Thank you very much for the insightful and timely elaborations! We now better understand the concerns, and have conducted relevant experiments.\\n\\nIn addressing concern (1) of CA3 recurrence, we have added Fig 11 and Fig 12 to Appendix F.
These results serve as a proof of concept that adding CA3 recurrence to M2 (position only grid code + mix p) or M4 (disjoint grid code + mix p) is not sufficient to drive experimentally observed phenomena in place cells, showing that such a model simplification does not undermine our conclusions drawn in the main text. We have modified the relevant text in Discussion accordingly, highlighted in orange in lines 517-525, elaborating the implications and future directions.\\n\\nIn addressing concern (2) of baseline comparison, we have included an additional standalone RNN baseline in Fig 2 (black). We additionally scaled up the original standalone RNN baseline (blue). Specifically, the standalone RNNs now have a hidden size of 32 + Ng + Np + Ns (the same number of neurons as RNN + Vector-HaSH in M5). The RNN takes in sensory information, while for the black variant, it is concatenated with position velocity and evidence velocity (predicted by the MLP). The previous conclusion remains the same, except that we observe learning/training instability in these large-size RNNs; the implementation details are elaborated in Appendix A.1. \\n\\nThank you again for your time and valuable feedback! Please let us know if further clarifications are needed in addressing your concerns. We hope that our response and corresponding edits provide sufficient reasons to raise the score.\"}", "{\"comment\": \"*Thread [2/2]*\\n\\n**Regarding ML relevance:**\\n\\n> How can these findings be more directly linked to representation learning and machine learning at large?\\n\\nNeural representations, such as those of place cells and grid cells, are emergent properties of learning within biological neural networks; these representations can arise in artificial neural networks with appropriate biologically-imposed constraints, as shown in our work. We study these representations using ML frameworks (lines 131-139).
Our study on understanding how distinct cognitive processes like decision-making and spatial navigation are jointly possible in a limited number of interacting neural networks (brain regions) is of interest to both the ML and Neuroscience communities. Navigation is very difficult for AI (Mirowski et al., ICLR 2017), and it is even more so when combined with decision-making. Our understanding of what representations are needed to efficiently solve the combination using limited network circuitry, with verifiability in both biological and artificial agents, is an important contribution. Further, our study matters for figuring out how artificial agents can truly make sense of their environments while being energy efficient, capable, and cognitively flexible, like humans and animals. This is related to achieving autonomous machine intelligence as positioned and discussed by LeCun (2022); we demonstrated, as a proof of concept, that sample-efficient learning in RL is achievable through an external content-addressable associative memory with a structured aspect. In particular, our work shows that a structured conjunctive coding scheme (i.e., grid cells as a canonical example drawn from biological representation learning) is an important structural representation for forming cognitive maps (world models), enabling individuals to learn quickly and navigate spaces efficiently. \\n\\nWe have revised our introduction with the above to emphasize the relevance of our work to ML (lines 65-70, in blue). Additionally, our work showcases ML techniques applied to neuroscience discoveries (our primary area of submission) for the reasons stated when addressing weaknesses. Thank you again for your feedback!\\n\\n> Section 2.3 is an odd aside, and the text does not seem to link it to the proposed model or findings at hand.\\n\\nWe kindly disagree with the reviewer\\u2019s comment that Section 2.3 is an unrelated aside.
The ML relevance of grid cell representation, which is the main subject of our study, is directly backed by the literature we mentioned in Section 2.3, e.g., Banino et al. demonstrated that artificial agents with grid cell-like representations have superior performance in navigation, a difficult ML task. Our findings provide additional insights, e.g., grid cell-like coding should also be efficient, i.e., utilizing all axes of the representation space (\\u201cconjunctive tuning\\u201d) in each module. \\n\\n**Regarding ethics concerns**\\n\\nFig 1A is adapted from Chandra et al., 2023 with proper citation (line 179). Fig 1B is created by us, not from Chandra et al., 2023.\\n\\n----\\n\\nNieh et al., Geometry of abstract learned knowledge in the hippocampus. Nature 2021 \\n\\nLeCun, Yann. \\\"A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27.\\\" Open Review 2022\\n\\nBanino et al., Vector-based navigation using grid-like representations in artificial agents, Nature 2018\\n\\nMirowski et al., Learning to navigate in complex environments, ICLR 2017\\n\\n----\\nPlease let us know if further clarifications are needed. Thank you again for your time and valuable feedback! We hope that our response provides sufficient reasons to raise the score.\"}", "{\"summary\": \"The authors extend the Vector-HaSH model (Chandra et al. 2023), a recently developed neural network model inspired by the hippocampal (HPC)-entorhinal cortex (EC) circuit, to include an MLP-based model of the sensory neocortex and an RNN module to model the brain circuits involved in action selection and reinforcement learning. The authors demonstrate the model\\u2019s ability to rapidly learn the accumulating tower task, and to generate place cell maps that resemble those observed in the brain (based on data from Nieh et al. 2021).
Drawing comparisons across 5 models with different architectures and grid and place cell coding schemes, the authors conclude that location-evidence conjunctive coding in grid cells and non-grid EC inputs to the HPC are important for rapid learning and the formation of conjunctive maps in the HPC.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Understanding how the HPC-EC circuit interacts with other brain regions in spatial cognition tasks is an important open question. Extensive modeling effort has been focused on the HPC-EC circuit, while sophisticated models linking HPC-EC with other brain regions remain lacking. Hence this study is an important step in the right direction.\", \"weaknesses\": \"1)\\tThe neocortex is modeled as an MLP and appears to be the only channel by which sensory input enters the EC-HPC circuit. Since the MLP module plays the important role of extracting the evidence velocity and conveying it to the Vector-HaSH module, it is unclear whether the comparison with Models 0-2, which do not have the MLP module, is fair.\\n2)\\tIt is not entirely clear how the different coding schemes of the grid cells (position only, disjoint pos+evi, joint pos+evi) are incorporated into the model. In line 309, the authors claim that the joint coding scheme emerged in M4, while Table 2 seems to claim otherwise? Do different grid codes in Models 3-5 emerge due to differences in the architecture? Or other network parameters? It would be greatly helpful if the authors can provide more implementation details clarifying how the models differ from each other, which would also help distinguish which observed phenomena arise by design and which emerge due to optimizing for task performance.\\n3)\\tThe authors show that M5, with a joint grid code and non-grid input to HPC, yielded faster learning. However, an intuitive insight into why this is the case is lacking. Could the author provide a mechanistic insight and some supporting analyses?
\\n4)\\tThe comparison between model dynamics and biological data appear largely qualitative. For example, the PCA results show some visual clusters in B but less so in A, but how can we be sure there are no separable clusters in high dimensional space? \\n5) Other implementation details, such as how the RNN is trained in an RL framework, is lacking.\", \"questions\": \"1) Could the authors clearify how the 5 models differ in architecture, external input, and/or hyperparameter?\\n2) Could the authors provide mechanistic intuition on why the joint grid emerged (or is incorporated by design) in M3 &5? (please also clairfy on the claim regarding M4 in line 309-310)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"*Thread [2/2]*\\n\\n> Could the author provide a mechanistic insight and some supporting analyses?\\n\\nIntuitively, grid code provides rigid information of an environment and task at hand, but non-grid input to HPC is potentially helpful to capture the nuances in the environment, such as where the wall is (hinting the decision region). For added clarity, we added lines 357-359 in blue. Thank you for your feedback! \\n\\nThe mechanistic insight of M5\\u2019s faster learning can be supported by the analysis in Fig 4, A3, showing splitter cell phenomenon, in which place cells show differential activity based not only on spatial location but also on additional context (\\u201dchoice-specific\\u201d). 
This is also supported by analyses of Fig 5, B, and quantitative measure in the new Fig 10, showing the hippocampal representation is (visually) separable in low dimension under joint grid code and activated EC pathway.\\n\\n> The comparison between model dynamics and biological data appear largely qualitative\\u2026how can we be sure there are no separable clusters in high dimensional space?\\n\\nWe focus on low-dimensional representations, especially given the task is low-dimensional and Nieh et al. found the experimental hippocampal population activity is constrained to a low-dimensional manifold. High dimensional separability is very likely, but even so, given M5 is separable in low dimension while other variants are not, this characteristic alone underscores the computational advantage of mechanisms implied by M5. We have modified the wording in section title of 5.3.1 to reflect our intention. We also appreciate your feedback on more quantitative measures, and have added scree plots of M1-M5, i.e., cumulative variance explained by numbers of PCs, to the Appendix Fig 10 (row 1). Quantitatively, M5's first 2 PCs of hippocampal activity can explain 68% of variance, while M3 only explains <20%, similar to other models. 
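For readers unfamiliar with the measure discussed above, the scree-plot quantity (cumulative variance explained by the leading principal components of population activity) can be computed from the eigenvalues of the activity covariance matrix. The sketch below is our own illustration on synthetic data, not the paper's actual analysis; the rank-2 latent structure is merely a stand-in for a representation whose first two PCs dominate:

```python
import numpy as np

def cumulative_explained_variance(activity):
    """Fraction of total variance captured by the top-k PCs, for each k.

    activity: (n_samples, n_neurons) array of population activity.
    """
    centered = activity - activity.mean(axis=0)
    # Eigenvalues of the covariance matrix = variance along each PC.
    cov = np.cov(centered, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigvalsh returns ascending; reverse
    eigvals = np.clip(eigvals, 0.0, None)     # guard against tiny negative values
    return np.cumsum(eigvals) / eigvals.sum()

# Synthetic activity dominated by a 2-D latent (e.g., position x evidence),
# mimicking a representation whose first two PCs explain most of the variance.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 50))
activity = latent @ mixing + 0.1 * rng.normal(size=(500, 50))

curve = cumulative_explained_variance(activity)
print(f"variance explained by first 2 PCs: {curve[1]:.2f}")
```

A scree plot is then simply `curve` plotted against the number of PCs; comparing such curves across models is one way to quantify low-dimensional structure.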
Though the comparison of place cell coding is qualitative-oriented in Fig 4, the place cells are selected through a quantitative mutual information analysis (Appendix B).\\n\\n> Other implementation details, such as how the RNN is trained in an RL framework, is lacking.\", \"the_implementation_details_are_mentioned\": \"the RL training was briefly described in line 248, \\u201cwhich is an action-selection RNN policy trained through policy gradient under reinforcement learning\\u2026\\u201d which points to a more detailed description in the Appendix A (line 249), \\u201cPlease refer to Appendix A.3 for what one step by the agent in the environment entails among the involved brain regions.\\u201d These are also mentioned in line 312.\\n\\n----\\n\\nPlease let us know if further clarifications are needed. Thank you again for your time and valuable feedback! We hope that our response and corresponding edits provide sufficient reasons to raise the score.\"}", "{\"metareview\": \"This paper introduces a multi-region computational model integrating entorhinal and hippocampal dynamics with a reinforcement learning framework to study cognitive maps' role in decision-making. Using simulations of the accumulating tower task, the authors present results supporting the importance of conjunctive coding of spatial and sensory information in grid cells for rapid learning and efficient navigation. The study extends the Vector-HaSH framework to include sensory neocortex and action-selection RNN modules.\\n\\nThe reviewers found the study's goal and premise intriguing, emphasizing its potential contribution to computational neuroscience. However, significant concerns regarding its methodological execution, scope, and generalizability were raised. 
All but one reviewer voted for rejection.\", \"additional_comments_on_reviewer_discussion\": \"An important criticism raised by the reviewers that was not resolved was that the study focuses on a single task without demonstrating generalizability to other environments or cognitive tasks, undermining the broader claims about \\\"cognitive maps.\\\"\\n\\nOther important comments included a) the omission of CA3 recurrent dynamics may impact the validity of conclusions about conjunctive coding in the hippocampus, b) the use of an RNN raises questions about whether the EC-HPC network or the RNN is responsible for temporal integration, and c) the model's learning is far from few-shot or one-shot learning, which contradicts common definitions of rapid learning in neuroscience. The authors responded to these comments; however, the reviewers were not fully convinced.\"}", "{\"comment\": \"*Thread [1/3]*\\n\\nThank you very much for your detailed, insightful feedback and the mention of many related valuable works. We highly appreciate your time helping us to improve the quality of our work. Here, we address your feedback and questions in order, in addition to our modifications in the manuscript following your feedback, such as in the Discussion and interpretation of results, among others, all in blue.\\n\\n> Based on the results, the authors propose that place cells that conjunctively integrate position and sensory information was important to solve the task.\\n\\nThis is most likely a typo in the summary. 
But in case of any misunderstanding, we\u2019d like to clarify that we predicted grid cells (not place cells, which are already experimentally established) that conjunctively integrate position and sensory information, along with an activated non-grid EC-HPC pathway, were important to solve the task and replicate experimental findings of conjunctive tuning in place cells.\n\n**Regarding weaknesses:**\n\n> The authors only demonstrated that their model replicated the representations observed in 1 experiment (Nieh et al. 2021). The authors should consider recapitulating 1 other neural phenomenon to increase the generality of their proposed model e.g. representational drift (Qin et al. 2023 Nature Neuro.), or increase the variety of simulations e.g. 5 arm decision navigation (Baraduc, Duhamel, Wirth, 2019 Science).\n\nWe believe the current scope of our work is well contained. Our work studies multi-region interactions underlying spatial navigation given recent findings indicate the same circuit has a potential role in spatial decision-making (Nieh et al., 2021). The findings of the work are tied to cognitive map theory. Though it would be straightforward to simulate additional tasks that are not directly relevant, we think adding these tasks would deviate from the main focus of the paper.\n\n> Nonetheless, the authors should consider previous models that use different hippocampal representations for navigation learning (Brown & Sharp, 1995; Arleo & Gerstner, 2000; Foster et al. 2000; Zannone et al. 2018; Kumar et al., 2022). \n\nThank you very much for your insights! We would appreciate more clarification on the first proposal, or, if you were referring to a potential inclusion of CA3 recurrence, we have added a new paragraph to the Discussion justifying our current choice of modeling (lines 516-525). \n\n> For instance, will an RNN-based RL agent with place cells and sensory evidence as input learn to solve the task (Kumar et al. 
2022 Cerebral Cortex; Singh et al. 2023 Nat Mach. Int.), similar to M0 but with place cell position inputs? This proposal also suggests an EC-MEC-hippocampus-neocortical-striatum pathway but was not considered.\n\nRegarding the latter, this is a great point, and we actually have results on a variant of M0 that receives both the hippocampal position code and sensory input, as well as variants of M1-M5 where the RNN receives both the hippocampal code and sensory input. These are ultimately not included because of the scope of the work, as we think these results would add confusion without affecting the current result. Our consideration and justification of only considering p \u2192 RNN are discussed in the last paragraph of Section 3.1, lines 202-208. Below we copy the content directly for your convenient reference:\n\n**\u201c** Although MEC, HPC, and EC can all interact with the cortex biologically, we model the place cell vector as the ultimate readout to the cortex given it is minimally sufficient for learning the task and testing the counterfactual of how the co-tuning of place cells arises. This framework enables extensive future studies. For example, one can systematically evaluate the computational advantages of different combinations of {MEC, HPC, EC} input(s) to the cortex for their roles in enabling generalization and rapid learning, e.g., sensory inputs from the EC may be especially important in scenarios where animals need to remember decision positions in the dark. **\u201d**\"}", "{\"title\": \"Kindly Reminder of Last Day of Discussion\", \"comment\": \"Dear reviewers,\n\nThank you very much for your valuable time and feedback! We are posting to kindly remind you that today is the last day reviewers may post a message. We are enthusiastic to hear from you if our responses have sufficiently addressed your concerns and provided grounds for your consideration of increasing the score. If that is not the case, we\u2019d also like to learn why. 
Looking forward to hearing from you.\n\n----\n\nWarm regards,\n\nAuthors of submission 8300\"}", "{\"summary\": \"The authors proposed a multi-regional reinforcement learning model to solve the evidence accumulation task. The model comprises MEC for grid cells, EC for sensory cells, hippocampus for place cells, RNN as neocortex and perhaps the policy network as the striatum. The authors proposed 5 different model variants with different network connectivity and information passed into the hippocampus layer and evaluated its learning behavior and analyzed its representation. Based on the results, the authors propose that place cells that conjunctively integrate position and sensory information was important to solve the task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"-- clearly explained the rationale for the model development.\n-- presented 5 hypotheses for grid-place interaction for evidence accumulation navigation and showed how they differed across behavior and representation. \n-- The authors demonstrated that the model recapitulates the neural representations observed in experiments.\", \"weaknesses\": \"-- The authors only demonstrated that their model replicated the representations observed in 1 experiment (Nieh et al. 2021). The authors should consider recapitulating 1 other neural phenomenon to increase the generality of their proposed model e.g. representational drift (Qin et al. 2023 Nature Neuro.), or increase the variety of simulations e.g. 5 arm decision navigation (Baraduc, Duhamel, Wirth, 2019 Science).\n\n-- Although the title specifies the role of the hippocampus (instead of the entorhinal cortex), the authors only varied their models using different entorhinal activity to formulate the hypothesis in Table 1. This could be to demonstrate how the type of input to the hippocampus influences navigation learning behavior. 
Nonetheless, the authors should consider previous models that use different hippocampal representations for navigation learning (Brown & Sharp, 1995; Arleo & Gerstner, 2000; Foster et al. 2000; Zannone et al. 2018; Kumar et al., 2022). For instance, will an RNN-based RL agent with place cells and sensory evidence as input learn to solve the task (Kumar et al. 2022 Cerebral Cortex, https://doi.org/10.1093/cercor/bhab456; Singh et al. 2023 Nat Mach. Int., https://doi.org/10.1038/s42256-022-00599-w), similar to M0 but with place cell position inputs? This proposal also suggests an EC-MEC-hippocampus-neocortical-striatum pathway but was not considered.\n\n-- the claim that the model enables rapid learning in spatially embedded tasks is not warranted. How do the authors define rapid learning and efficient navigation (red and brown curves in Fig. 2)? These agents still require close to 1000 episodes to converge instead of 1 or a few episodes (Foster et al. 2000 Hippocampus https://doi.org/10.1002/(SICI)1098-1063(2000)10:1<1::AID-HIPO1>3.0.CO;2-1 ; Kumar et al. 2024 https://doi.org/10.48550/arXiv.2106.03580)? Rapid learning is usually used in the context of zero, one, or few-shot learning, and not thousands of trials. \n\n-- it is unclear why M3 and M5 show a similar increase in success rates of learning (Fig. 2A) when M3 does not get evidence from EC? Additionally, it is unclear why M0 and M5 show a similar decrease in steps spent per episode (Fig. 2B). Representational analysis for M0 and M3 should be included in Fig. 3, 4, and 5.\", \"questions\": \"-- the authors proposed a mechanistic hippocampus-entorhinal setup. However, it is unclear whether they used a biologically plausible learning rule or backpropagation for policy learning i.e. policy gradient. The former will make the model very appealing. But this is a minor point.\n\n-- why is there a disparity in M0's and M5's learning performance in Fig. 2A and 2B? 
Shouldn't we expect a model with a slower increase in cum. success rate to demonstrate a slower decrease in steps spent per episode? Maybe I am missing something. \n\n-- Since separability of clusters is not a prerequisite for navigation behavior (M3 vs M5 Fig. 9 vs Fig. 2), how can we relate the representation to behavior?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Simpler policy network is needed to show EC-HPC is the network integrating temporal information\", \"comment\": \"A recurrently connected neural network can have readouts as an additional set of nodes and weights, but it is not a given as we can also have an RNN without readouts to learn representations as in Miconi et al. 2017 eLife. I assume there are a total of 3 sets of weights to train in your \\\"policy network\\\": the input to the RNN, the recurrent weights, and the readout weights. Training 3 sets of weights using deep RL allows the model to learn additional representations/features to solve the task, for instance integrating temporal information (Singh et al. 2023 Nature Mach. Int.). Perhaps the EC-HPC network does not capture the temporal dependencies of the Tower Accumulating task, and instead the RNN is the one that is learning to integrate these temporal features to successfully learn a policy. I am starting to think this might be the case.\n\nA clearer way to construct a policy network is to have only one weight matrix $W_{ij}$ so that place cell representations or states $s$ directly synapse to the actions $a_j$ following $a_j = \\\\sum_i^N W_{ij} s_i$, similar to having a classification layer in deep networks. This is done by Zannone et al. 2018 Sci. Rep., who use the Q learning algorithm, and Foster et al. 2000 Hippocampus, who use the Actor-Critic algorithm to update $W_{ij}$. 
Hence, a simpler policy network has only 1 set of weights, which does not need to be trained by backpropagation but can be trained by bio. plaus. rules. A model without the RNN but just the policy network will be more convincing to show that the temporal integration of sensory information is done by the EC-HPC network. \n\nI would strongly recommend that the authors evaluate a new model in which the HPC representations feed directly to the policy network, without the RNN, so that $a_{j,t} = \\\\sum_i^N W_{ij} p_{i,t}$. I hope this is clear?\"}", "{\"title\": \"On Biological Plausibility & Generality, and Further Clarifications of Misinterpretation [1/2]\", \"comment\": \"Thank you for your further comments! After carefully reviewing your response, we would like to point out some misinterpretations and respectfully seek more clarification.\n\n> The authors should have gone with a purely biologically plausible navigation model since the main computation is performed by Vector Hash, and the RL signal is not backpropagated to learn the representations for navigation like in deep RL.\n\nWe thank the reviewer for their thoughtful comments and acknowledge the related work by Kumar et al., 2022, which we have cited in line 133. We agree that a model propagating RL signals fully could offer a good alternative approach.\n\nIn our work, we combine Vector-HaSH with an RNN policy network trained using the REINFORCE algorithm, a simple and effective RL method. Although, as the reviewer pointed out, the policy network is trained by backpropagation while the prestructured Vector-HaSH is not, RL is still the most common method to model behavioral and neural processes (e.g., Lee et al., 2023; Gershman & \u00d6lveczky, 2020). 
By incorporating Vector-HaSH, which generates more biologically plausible representations, our framework showcases how these representations can enhance existing RL-based models for studying both behavioral and neural processes.\n\nNotably, the accumulating tower task we focus on differs significantly from the Paired Association Spatial Navigation Tasks in Kumar et al., 2022, which involve multiple fixed reward locations and open-space navigation. Instead, our task presents a more complex decision-making problem that requires the agent to infer the relationship between the frequency of tower appearances and the reward location. In this scenario, we believe our use of RL methods is particularly well-suited to capturing the nuanced computations required by the task, such as learning from the interactions with the environment.\n\nWhile we focus here on analyzing neural representations by comparing actual animal experiments with our Vector-HaSH-based models, we acknowledge the importance of further exploring a purely biologically plausible navigation model as future work.\n\n> This will make the model much more interpretable and easier to train for others to replicate, given the difficulty the authors faced as mentioned in Appendix A.1.\n\nIt would be great if the reviewer could clarify the difficulty they referred to. From our understanding, if this refers to the orange text in Appendix A.1, the difficulty we mentioned refers to the pure RNN baseline *without* Vector-HaSH, \u201cFor the standalone RNN baselines (M0)...\u201d, which had learning instability. There was no difficulty, and no additional hyperparameter search was conducted, for Vector-HaSH + RNN. M1-M5 use the exact same set of hyperparameters as mentioned in Appendix A.1. 
Please let us know if we could help with further clarifications.\n\n> The authors should discuss why they chose the RNN + policy network and not simpler alternatives.\n\nWe\u2019d like to politely clarify that the RNN is a policy network by itself and there\u2019s no additional policy network. The RNN is the simplest setup in our view, with just a recurrent layer and a readout layer. Would it be possible to clarify what a simpler alternative could be or if this is a potential miscommunication?\n\n> Lastly, the authors should have shown the generality of the model in a variety of 1D and 2D environments instead of just one environment.\n\nWe appreciate the reviewer\u2019s insightful suggestions. We agree that demonstrating generalizability across multiple environments is always valuable. However, as mentioned in our previous response, we leave this as future work due to the specific scope of our study. Instead, we focused on the learning behavior, neural representations, and the underlying computational roles of multiple regions involved in the accumulating tower task, which serves as a canonical example within the family of spatially embedded decision-making tasks.\n\nIn neurophysiology experiments, task-related variables\u2014such as recording techniques, animal models, training durations, and specific task parameters\u2014can vary significantly. For instance, Nieh et al. and Pinto et al. both investigated accumulating tasks but focused on different brain regions. Other well-established accumulating evidence tasks, such as the Poisson click task (Brunton et al., 2013, and Bondy et al., 2024), use auditory rather than visual sensory input and do not have a spatial component. These inherent differences make task generalization more complex. Consequently, most neuroAI and theoretical works demonstrate their methods in a single environment as a proof of concept (e.g., Kumar et al., 2022, Cerebral Cortex; Miller et al., 2023, NeurIPS). 
We believe our approach is consistent with this established practice in the field.\\n\\nWe thank the reviewer again for their thoughtful feedback and will explore extending our approach to other environments in future work.\"}", "{\"title\": \"Kindly Reminder of Discussion Timeline & General Rebuttal Summary\", \"comment\": \"Dear reviewers,\\n\\nThank you again for your valuable time and constructive feedback! \\n\\nWe have uploaded a revised manuscript with edits in color after incorporating your valuable feedback. We have responded to each reviewer\\u2019s individual feedback, providing clarifications and highlighting relevant edits in each thread. Below, we summarize common questions and their corresponding manuscript changes, and other minor ones. We believe our paper has been significantly strengthened as a result of incorporating your feedback.\\n\\nGiven the last day of making any revision is this Wednesday AoE, we are keen to hear your thoughts and hope our revisions have sufficiently addressed your concerns and provided enough reasons for you to consider raising the score. Please let us know if we could assist with any further comments or questions.\\n\\nWarm regards,\\n\\nAuthors of Submission 8300\\n\\n----\\n\\n**Summary of updates regarding common questions**\\n1. How grid code arises (Reviewers TiD8, bpUy, LP6J)\\n* New Fig 1C (& caption), lines 198-200, Appendix A.1. \\n2. Why not include CA3 recurrence (Reviewers bpUy, LP6J)\\n* Discussed in lines 516-525, with additional analysis of its impact in M2 & M4 in Appendix F. \\n3. More intuition on Fig 2 interpretation (Reviewers TiD8, LP6J, qnRf)\\n* Lines 357-359.\\n\\n**Other minor edits in addressing questions by individual reviewers:**\\n1. ML relevance: lines 64-70 in Introduction.\\n2. Added clarity on model (hypothesis) difference: Fig 1E (& caption).\\n3. Modified RNN baselines (blue, black) in Fig 2.\\n3. Correct typo of M4 to M5: line 304.\\n4. 
Emphasize \\u201crapid learning\\u201d is a relative term: emphasis added to lines 336 and 367. \\n5. Make Section 5.3.1 and Fig 5 caption more clear; add more quantification: reworded Fig 5 caption, added Fig 10.\"}", "{\"title\": \"Complex model where RL does not contribute to representation learning,\", \"comment\": \"I would like to thank the authors for clarifying some concerns, especially with the analysis. However, I am inclined to maintain my score as I feel the model is unnecessarily complex and the analysis is only performed in one simulation environment.\\n\\nUnlike the TEM model (Whittington et al. 2020), the Vector Hash model has grid and place codes using bio. plaus. rules which is nice. Furthermore, it has been shown that training just the readout layer of a feedforward/RNN using the temporal difference Hebbian learning algorithm allows agents to learn complex navigation policies (Kumar et al. 2022 Cerebral Cortex). The authors should have gone with a purely biologically plausible navigation model since the main computation is performed by Vector Hash, and the RL signal is not backpropgated to learn the representations for navigation like in deep RL. This will make the model much more interpretable and easier to train for others to replicate, given the difficulty the authors faced as mentioned in Appendix A.1. The authors should discuss why they chose the RNN + policy network and not simpler alternatives. Will this change the conclusion of the model? \\n\\nLastly, the authors should have shown the generality of the model in a variety of 1D and 2D environments instead just one environment.\"}" ] }
9QYJu1cGfE
Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models
[ "Ye Wang", "Sipeng Zheng", "Bin Cao", "Qianshan Wei", "Qin Jin", "Zongqing Lu" ]
Inspired by the recent success of LLMs, the field of human motion understanding has increasingly shifted towards the development of large motion models. Despite some progress, current state-of-the-art works remain far from achieving truly generalist models, largely due to the lack of large-scale, high-quality motion data. To address this, we present MotionBase, the first million-level motion benchmark, offering 15 times the data volume of the previous largest dataset and featuring multimodal data with hierarchically detailed descriptions. By leveraging this vast dataset, our large motion model demonstrates strong performance across a broad range of motions, including unseen ones. Through systematic investigation, we underscore the importance of scaling both data and model size, with synthetic data and pseudo labels playing a crucial role in mitigating data acquisition costs. Moreover, our research reveals the limitations of existing evaluation metrics, particularly in handling out-of-domain text instructions --- an issue that has long been overlooked. In addition to these, we introduce a novel 2D lookup-free tokenizer for motion quantization, which preserves motion information and expands codebook capacity, further enhancing the representative ability of large motion models. The release of MotionBase and the insights gained from this study are expected to pave the way for the development of more powerful and versatile motion generation models. Our code and database will be released at \url{https://anonymous.4open.science/r/MotionBase}.
[ "human motion generation", "large motion model", "large language model" ]
Reject
https://openreview.net/pdf?id=9QYJu1cGfE
https://openreview.net/forum?id=9QYJu1cGfE
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zqMwA3obxa", "zhHfJshBS9", "zf2M9BOgCJ", "yuQPt0M8oH", "ylfDim84iz", "ykzPzsELZJ", "wWv7PTFHEy", "vfQ3dIH86L", "vPw47fzflO", "udvJqqr5km", "uaKn6u9S4s", "uC4j3w4VEs", "txXpBbkmDC", "t3G1PiZGYm", "sunO7hWLw4", "sDvctvEISu", "sBHrnT9BRF", "rh1egnr3Jk", "rUTmq9sMnp", "rHh7VAsjzE", "qxlrvpvrBj", "nFGRWzIi85", "kTPTLVljJJ", "iITtGcnkfH", "iA7EWgQ0gl", "hWkGdtt5Mn", "gZab91m1Vb", "dMW2ErVhaL", "d7BicEheeH", "csGc3URRMW", "cDPo1t6qjb", "bwUOam3cSl", "aeQo5en9Pr", "YzdcMsRw4L", "X6lX3NHJwf", "WSIKjqdqpA", "TEhIofiTaY", "TCWfcks4fe", "RyAdpQgzdg", "Q10bKIQC8v", "PSIUwT7Hrt", "PI88EJhUdq", "NenxWBzW7v", "MKzz7eFHiX", "M8m41H1VMu", "LVcPWKezKY", "KvoakNE99A", "KPQgyUA8Ce", "FTldChnYWL", "FQgGHfjDOE", "FOZpliDZLh", "FMeVGw5wce", "EnAHgZGiGI", "EEDcJmsnmn", "BodTtDuQuV", "BBX4lfllXT", "AUvBr2Z4wC", "A6ICGtV10K", "8od5AMEMqN", "8ZccImSZJ9", "5SnYSKktBG", "5JOOjftL1D", "46jYGU9PXD", "3hiyUgoSNN", "2fauoNhv5K" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", 
"official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732280615961, 1733190868339, 1732505306435, 1732605317842, 1733310779756, 1732979347390, 1732788463541, 1732985128329, 1737523627637, 1732280830043, 1732626765257, 1732278296709, 1732279490360, 1729049561581, 1732602380296, 1732698869907, 1732484663264, 1732875164740, 1732602029087, 1733215628674, 1732278761861, 1733311210656, 1732278389834, 1732600641789, 1733147491493, 1730443623557, 1732779978350, 1732682085727, 1732787951470, 1733151646524, 1734738390092, 1732530058159, 1730828657353, 1732373052011, 1732280952093, 1732277788601, 1733157816202, 1732372830990, 1732788473573, 1732465481070, 1732280887119, 1733310981725, 1730595865141, 1733147480851, 1732974138375, 1732700549701, 1732313911208, 1732279106591, 1732278167551, 1732278937527, 1732453693988, 1732973981962, 1732279232243, 1732695591602, 1730713617967, 1733110730020, 1733311110167, 1733059687238, 1732278116317, 1732875243427, 1733028964111, 1733192050594, 1733184951646, 1732280362605, 1733155488562 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_CH9D" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_pT95" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_pT95" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_qpUP" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_qpUP" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_pT95" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_qpUP" ], [ "ICLR.cc/2025/Conference/Submission4241/Area_Chair_A5AJ" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_CH9D" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_pT95" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_pqNv" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Reviewer_WfZ5" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ], [ "ICLR.cc/2025/Conference/Submission4241/Authors" ] ], "structured_content_str": [ "{\"title\": \"Official Comment to Reviewer WfZ5's weakness 2\", \"comment\": [\"**W2. [The text annotation] The annotation quality of the text by Gemini-1.5-pro is not well evaluated. In my practice, it always contains some answers like \\\"sorry...\\\". The results should be corrected by researchers one by one. Has the >1M data been checked?**\", \"The quality of generated texts is highly related to the prompt design. We introduce two approaches used in our work to evaluate the text description quality.\", \"1. Firstly, we sample 10,000 motion descriptions from our MotionBase generated by Gemini, along with 10,000 descriptions each from MotionX and HumanML3D. These descriptions are scored using GPT-4o, which evaluates each description on a scale of 1 to 5 based on predefined criteria focused on clarity, detail, and accuracy. 
The scoring criteria are as follows.\", \"Score 1: The description is vague, lacks specific details about body movement, and contains confusing or unclear expressions.\", \"Score 2: The description covers some movement and posture content but lacks sufficient detail or accuracy and includes unclear expressions.\", \"Score 3: The description clearly outlines movement and posture, providing basic details but lacking in-depth analysis.\", \"Score 4: The description is accurate and detailed, effectively conveying the movement process and changes in body posture, with some analytical depth.\", \"Score 5: The description is precise, comprehensive, and fluent, offering in-depth analysis of every detail of the movement and posture, demonstrating a high level of professionalism and clarity.\"], \"we_then_calculate_the_average_scores_for_each_dataset\": \"MotionBase (3.837), MotionX (1.386), and HumanML3D (1.703). These scores suggest that MotionBase descriptions are generally more detailed and accurate compared to MotionX and HumanML3D.\\n \\n2. To further evaluate the quality of the generated texts for vision-based motions, we prompt Gemini-pro with text descriptions and corresponding rendered motions. Our primary focus is on the accuracy with which the text descriptions reflect the content of the visual cues. To assess this, we present 500 rendered samples with their corresponding text descriptions from each dataset to Gemini, requesting a score based on the criteria we established earlier. The evaluation results provide valuable insights. The texts of MotionX and HumanML3D receive average scores of 2.25 and 3.08, respectively. Notably, MotionBase achieves a significantly higher average score of 3.82, outperforming the other two datasets.\\n\\n**W2. [The text annotation] The proposed contribution of hierarchical text is not discussed well. Has it been used in the model training? If I miss, please point it out.
If this annotation is not used, what is the motivation for this hierarchical text contribution? Will it make the result more fine-grained? It is quite unclear.**\\n\\n| Training text | R@1 \\u2191 | R@3 \\u2191 | FID \\u2193 | MMDist \\u2193 |\\n|-----------------|-------|-------|-------|-----------|\\n| Real | 0.290 | 0.563 | 0.011 | 3.480 |\\n| Basic | 0.264 | 0.542 | 0.516 | 4.007 |\\n| Hierarchical | 0.302 | 0.603 | 0.521 | 3.523 |\\n\\nYES, the hierarchical text is used during pretraining to improve the diversity of text corpus and make the description finer-grained. The table compares experimental results using hierarchical text versus using only basic text. The results show that hierarchical text can effectively enhance the model's semantic understanding and thereby improve the semantic matching of generated motions.\\n\\nIn addition, the hierarchical texts also ensure the quality of motion description generated by Gemini-1.5 or GPT-4o. Specifically, our latest motion descriptions are structured into three hierarchical and complementary levels: an overall description of the entire body, detailed part-level description for each body part, rule-based description of each joint's relative movement derived from joint positions (e.g., \\\"The left hand is wide apart from the right hand. It is on the ground, the left upper arm, both legs and the left forearm are aligned horizontally......\\\"). To enhance the reliability and quality of the motion descriptions, we condition GPT-4o with two levels of description while using the remaining level as the evaluation target. GPT-4o then refines the textual content. By doing this, each level of description can provide complementary details and correction for the other two levels, enabling the generation of more precise and reliable motion descriptions.\"}", "{\"comment\": \"Dear reviewer WfZ5,\\n\\nThis is the reviewer guidance on ICLR 2025 official website. 
We sincerely note that ICLR's rebuttal phase is different from that of most other AI/ML communities: https://iclr.cc/Conferences/2025/ReviewerGuide#Reviewer%20tasks. If this is your first time serving as an ICLR reviewer, we encourage you to read the guideline carefully first. Otherwise, your conclusion may go against the tradition and spirit of our ICLR community.\\n\\n**Engage in discussion: The discussion phase at ICLR is different from most conferences in the AI/ML community. During this phase, reviewers, authors and area chairs engage in asynchronous discussion and authors are allowed to revise their submissions to address concerns that arise. It is crucial that you are actively engaged during this phase. Maintain a spirit of openness to changing your initial recommendation (either to a more positive or more negative) rating.**\\n\\nIn what context does this paragraph suggest that the reviewer should base their evaluation on the \\\"initial paper\\\"?\\n\\nWe are quite puzzled as to where you got the information that \\\"a reviewer's decision should be based on the original version\\\". As ICLR reviewers ourselves, we have seen many reviewers appreciate authors' revisions and change their ratings accordingly. If reviewers do not follow ICLR's spirit, it would be quite unfair to our paper's rebuttal.\\n\\nIn addition to the rebuttal guidance, we have the following concerns that we believe should be clarified first:\\n\\n - For PHC and other details: the reason we provided so many details about PHC is that you asked the corresponding questions. PHC's role is only a sentence in our initial paper, not a major part. **Once again.
We are concerned that the reviewer has misrepresented our contribution.**\\n - For static data: Is there any way we can convince you beyond the experimental results and code we have provided?\\n - Most importantly, we raise the following **logical questions**: Based on your feedback, do you believe the paper would be improved by removing all of our static data and corresponding experimental results? Doesn\\u2019t this suggestion seem somewhat unreasonable? Additionally, do you recognize the contribution of the dynamic part of MotionBase? If you believe the static data is more problematic, then the dynamic data should be of higher quality to compensate, right? Otherwise, how would the results improve? \\n\\n\\nFinally, we believe it\\u2019s not productive to judge the tone of others. This could serve as a subjective hint to the AC and other reviewers. If any of our words stray from the matter at hand, please point it out. Otherwise, it risks being an unfounded accusation.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for your response! We appreciate your concern and would like to clarify that, even excluding static data, our dataset contains 800K dynamic motion sequences\\u2014ten times larger than existing benchmarks. We believe that this volume of data, which is also scalable, substantiates the contribution we have claimed. Furthermore, the effectiveness of static data has been demonstrated in our ablation study (Table 4). We hope these points address your concern to some extent.\\n\\nIf there are any remaining issues or concerns that were not addressed in our previous discussion, we would sincerely appreciate it if you could point them out. We are more than happy to provide further clarifications or answer any additional questions!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for sharing your remaining questions.
Before attaching our further responses, we are encouraged to see that the discussion is becoming more focused and specific with reduced questions, which suggests we are moving in the right direction!\", \"here_is_our_summary_to_the_remaining_six_questions\": \"(1) detail issues about PHC; (2) discussion about static data; (3) video concatenation; (4) sim-to-real demo videos; (5) text hallucinations; (6) the FID of T2M-GPT. We hope our understanding of the remaining questions is correct.\\n\\n**We will attach our responses very soon.** Thanks again for your constructive feedback!\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Summary of the rebuttal\", \"comment\": [\"Dear AC and reviewers:\", \"We deeply appreciate your efforts during the rebuttal process. We are encouraged that reviewers CH9D, pqNv, and pT95 recognize the value of our work. Reviewer qpUP believes our paper would be a good paper with the inclusion of revised details during rebuttal, though their review, along with WfZ5's, is borderline reject. To address our lengthy and fragmented discussion with WfZ5, we provide a brief summary below.\", \"---\", \"# Our work in brief\", \"We propose to construct a large-scale motion generation dataset, validate the scaling law in motion understanding, and explore motion tokenization.
These contributions are listed in order of importance.\", \"---\", \"# Reviewers' positive recognitions\", \"**From reviewer CH9D**:\", \"well motivated\", \"dataset is large\", \"Evaluation on multiple datasets and multiple models with strong baselines.\", \"**From reviewer pqNv**:\", \"propose a large-scale dataset\", \"identifies key factors influencing the effectiveness of large motion models\", \"explore effective motion quantization method\", \"**From reviewer pT95**:\", \"the proposed large-scale data has the potential to be a valuable resource for the community\", \"**From reviewer qpUP**:\", \"collect a new large-scale text-motion dataset.\", \"scale the tokenizer vocab size and the model size.\", \"**From reviewer WfZ5**:\", \"good writing\", \"The statistics of the dataset are clear.\", \"# Remaining points:\", \"**Should the revised details be considered during rebuttal?**\", \"We emphasize that the spirit of ICLR motivates us to share extensive details of our work, including code, checkpoints, and demos \\u2014\\u2014 something few other papers do before official publication. The unique spirit of ICLR, which fosters open discussion, is what sets it apart from other conferences. Since some reviewers may be attending for the first time, we hope the AC, Senior AC, and PC will remind them of this. If reviewers insist on statements like:\", \"> \\\"the discussion process is for addressing misunderstandings in the reviews, not for providing major revisions. \\\"\", \"> \\\"basing a decision on revisions would be unfair to other papers\\\"\", \"> \\\"These details are supplemented during the rebuttal period. This is unfair to other submissions.\\\"\", \"Our efforts to share will be in vain. We feel discouraged and disheartened by these comments, especially after receiving appreciation in previous ICLR submissions. We are even restricted from using anonymous links to share our code, checkpoints, and demos.
As reviewers ourselves, we\\u2019ve seen other papers praised for their revisions. If our revision is disregarded, it would be unfair. For reference, here\\u2019s the reviewer guideline from the official website:\", \"> \\\"Engage in discussion: The discussion phase at ICLR is different from most conferences in the AI/ML community. During this phase, reviewers, authors and area chairs engage in asynchronous discussion and authors are allowed to revise their submissions to address concerns that arise. It is crucial that you are actively engaged during this phase. Maintain a spirit of openness to changing your initial recommendation (either to a more positive or more negative) rating.\\\"\", \"**WfZ5 considers some points as major revisions, but we disagree.**\", \"The details during rebuttal do not go against our original paper. For example:\", \"WfZ5 labels PHC as a significant revision, but we never claimed that; it's just one step in a broader data refinement process.\", \"WfZ5 argues we overemphasize the robot demo, which we did not claim.\", \"WfZ5 views increasing dynamic data from 400K to 800K as a major revision, but we question this; the original 400K clips with over 1M motion-text pairs are already self-standing.\", \"A major revision should be defined as something that goes against our contribution to the original paper. Asking eight questions about a small step in the pipeline, such as implementation details, reward setup, or success rates, cannot be considered a major revision.\", \"**Is static data harmful?**\", \"We provide experiments, code, pretrained models, 100+ demos, and related work, yet the reviewers remain unconvinced. Additionally, we find some feedback from WfZ5 to be subjective:\", \"WfZ5 states, \\\"I think this is not a suggested choice in the animation community,\\\" which is a biased conclusion.
How can motion generation be limited to animation, ignoring applications in vision, robotics, and VR/AR?\", \"WfZ5 recommends reading an unofficial blog by Daniel Holden, claiming \\\"you will find it not self-stand if you read this well-known scientist's blog.\\\" After reading it, we found only empirical suggestions. Using this as evidence to question our results contradicts scientific principles, which should be based on truth and experimental evidence, not empirical opinions.\", \"This opinion leads to a logical contradiction: if static data is harmful, dynamic data should be of even higher quality. Otherwise, how could our results improve?\"]}", "{\"comment\": \"After checking everything here and the discussion\\nI\\u2019m keeping my score as is. The work itself is valuable with some minor concerns that can be addressed or progressed in a later work.\"}", "{\"comment\": \"Thank you for your thoughtful reply. While we deeply appreciate your feedback, we would like to respectfully address some points of disagreement and provide clarification.\\n\\n1. We believe there may be a misunderstanding in your question and suggestion. **How could we use to-be-refined data (our data) to train a PHC that is then used to refine that same data? This is akin to the idea of lifting oneself up with one's own hands.** It is important to note that no large-scale benchmark can guarantee 100% clean data. Therefore, even with a successful conversion rate of approximately one-third, this demonstrates the effectiveness of our data processing strategy, yielding at least 30% additional improved motions of high quality.\\n\\n2. We respectfully note that citing an informal blog as valid evidence in a professional rebuttal may not be appropriate. Additionally, we respectfully disagree with the blog's conclusion as it has not been validated under the context of large-scale generative pretraining.
**Considering this, we believe the use of static data is still an open question rather than a settled conclusion.** We kindly recommend that the reviewer refer to the newest well-recognized generative works in the field, such as **UnifiedIO-2, VideoPoet, EMU3**, and multimodal tuning methods like **LLaVA, Qwen-2-vl**, to gain a deeper understanding of the capabilities of current generative models. If you have already read these works, you might observe that concerns about \\\"over-smoothing\\\" are not self-standing when considering large-scale pretraining, well-designed prompts, and effective training strategies. As we understand the reviewer might not specialize in the area of LMM, we have provided a more detailed explanation below to offer further clarity.\\n\\n 1. From a statistical learning perspective, directly combining data from different distributions without incorporating conditional information may result in learning a biased marginal distribution, which could potentially degrade generation quality. However, by introducing additional conditional information\\u2014in our case, specific language prompts\\u2014we guide the model to learn the conditional distribution P(motion|text, condition) rather than the marginal distribution P(motion). This allows the model to accurately switch between different distribution modes during generation, effectively avoiding distribution confusion. Moreover, static data can be seen as a special case of dynamic data, functioning as a \\\"snapshot\\\" of motion at a particular moment.\\n\\n 2. More importantly, static data contributes to improved performance by helping the model better understand spatial constraints between joints. Static poses provide abundant examples of valid joint positions and rotations, enabling the model to learn what constitutes physically plausible human postures.
Furthermore, static data enhances the model\\u2019s ability to capture correlations between joints, such as motion range limitations and inter-joint dependencies. With the guidance of conditional prompts, the model can not only distinguish between static and dynamic data but also leverage the foundational knowledge learned from static data to enhance dynamic motion generation, thereby improving overall quality.\\n\\n 3. We carried out experiments to validate the above points: when specific conditional prompts are added, the model effectively distinguishes between data distributions and generates outputs accordingly.\\n\\n3. We have provided a new higher-resolution video in the supplementary material. \\n\\n4. The robot videos are intended as demos to showcase the potential applications of our work.\\n\\n5. We kindly request the reviewer to revisit our response, as we have addressed your question. Key points have been clearly marked with significant symbols, such as (1.; 2.), **directly following the sentence you referenced.** We hope this format makes it easier to locate our responses.\"}", "{\"comment\": \"Dear Reviewer pT95,\\n \\nWe hope this message finds you well. We think it would be necessary to remind you that your prompt decision to lower the score may have been perceived by reviewers who invited you to further discussion as potentially careless and subjective, even considering reducing the weight of your rating. Despite this, we deeply appreciate your positive perspective and the effort you have invested in engaging with our rebuttal. We believe your insights are critical to ensuring a constructive and well-rounded final discussion.\\n\\n**Are you still willing to participate in the ongoing discussion with WfZ5 and us?** If so, we would greatly appreciate it if you could provide additional details about your decision or share any unresolved questions or concerns.
Should there be specific points we have not addressed to your satisfaction, please let us know\\u2014we would be more than happy to provide a prompt and thorough response.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's weakness 3-4\", \"comment\": \"**W3. The H2VQ proposed in Humantomato (ICML-24) is missing for discussion or comparison. And the technical and evaluation contribution of motion VQ is limited.**\\n\\nTo the best of our knowledge, Humantomato has not released their code, and no successful re-implementation has been found. As a result, it is currently impossible to conduct a quantitative comparison with the H2VQ method proposed in Humantomato. Additionally, we re-implemented RVQ and achieved better reconstruction performance (56.9 MPJPE) compared to H2VQ (62.34 MPJPE reported) on MotionX. Given this, we have chosen not to include a comparison with H2VQ in Table 6. We would be glad to perform such a comparison once Humantomato releases their code.\\nHowever, to improve the reference discussion, we also include this reference to our related work.\\n\\nThe primary contribution of our motion tokenization technique lies in the exploration of tokenization methods specifically designed for motion generation. While previous works have used 1D discrete vectors to represent the body, we highlight the information loss that this approach may cause. In this paper, we propose a novel direction for encoding motions by treating the motion sequence as a 2D image. It's important to note that this approach has not been extensively discussed in prior work, and we validate its potential and effectiveness through our experiments. Therefore, the inclusion of LFQ in our study serves as a preliminary exploration to assess the potential of alternative tokenization methods for motion representation. 
We intend to conduct a more thorough investigation of this approach in future work.\\n\\n**W4. This work does not include any demo video, which is unacceptable in the animation community.**\\n\\nThank you for your suggestion! To address your concern about the visualization of demo videos, we have created a website showcasing a variety of examples, including samples from our datasets and generated results from our models. Additionally, we present physically grounded examples, as well as examples deployed in both simulated and real-world environments: an aspect that has not been highlighted in previous works!\\n\\n**W4. The FID in Table 5 is extremely large, which strengthens my concerns about the motion quality.**\\n\\nWe found that the high FID values reported in Table 5 are primarily due to the limited encoding capability of the retrieval model used for evaluation. \\n\\nThe model originally used in our ICLR submission was trained on HumanML3D and was unable to fully capture the characteristics of the motion data, leading to abnormally high FID scores. To address this, we trained a more powerful and robust encoding model on our MotionBase dataset. After re-evaluating with this improved model, we observed a significant reduction in FID values, bringing them into a reasonable and acceptable range. We include these updated evaluation results in the revised version to ensure clarity. We also plan to use more recently proposed text-motion retrieval models like TMR[1] to obtain a more accurate evaluation of model performance.\\n\\n[1] TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewer, here are our responses to your remaining questions.\\n\\n1. Our policy builds on the single primitive policy from PHC, with most configurations adhering closely to its design. For motions involving interaction like 'sit', we do not apply any pre-processing.
Instead, we rely on the policy to perform direct tracking. Unsurprisingly, this often results in the character falling to the ground at the very beginning, which is considered a failure sample based on the previously defined criteria. This is probably the first reason why the success rate is relatively low. The second reason may stem from the fact that PHC is trained exclusively on the AMASS dataset. While AMASS is a high-quality dataset, its limited size constrains generalization. As the data scales up, the policy may struggle to track motions that fall outside the distribution it was trained on, further contributing to failures during tracking.\\n\\n2. There are many common strategies to avoid the \\\"over smoothing\\\" you mentioned: \\n 1. During fine-tuning, we design specific prompts to guide the model's generation. For static poses, we include additional language prompts, such as 'keep the action still,' to help the model distinguish between generating a static pose and a dynamic motion. This approach leverages the well-known ability of large generation models based on LLMs [1,2,3,4,5] to produce diverse outputs through carefully crafted prompts. In addition, no negative effects are observed when combining images and videos, or bounding boxes and segmentation masks, or 3D skeleton poses and sequences, or vision and audio during training. We believe this principle should extend to motion as well.\\n 2. Visualization. To better prove our conclusion, in the visualization appendix, we present generation results from both models trained with and without additional language prompts. The clear differences demonstrate that **with additional language prompts, the model can distinguish between different motion distributions, thus not affecting the generation of dynamic motions.**\\n 3.
Many common training strategies, such as weighted sampling, weighted gradients, and progressively increasing the ratio of dynamic motion, can improve the stability of training and help the model avoid \\\"over smoothing\\\". \\n\\n3. Sure, we have uploaded this video to the supplementary file on OpenReview.\\n\\n4. It's worth noting that we claim physical plausibility only within simulated environments, not in the real world. For real robots, we demonstrate that the generated motion results can be successfully retargeted to the robot's joints, highlighting the potential applications for fields such as robotics. **We believe this has the potential to make a significant impact on the motion understanding community, extending to a wide range of areas.** A suspension system is required, as no one, to our knowledge, has yet been able to deploy physically plausible motions on real robots. Regarding your other question, our approach differs from using PHC to filter physically feasible motions. In our framework, PHC is used primarily during the data filtering stage to ensure the physical plausibility of training data. In the real robot demonstration, our focus is on accurate joint trajectory tracking.\\n\\n5. Regarding text evaluation, it's important to note that **we did not solely rely on text-only scoring. As mentioned in our initial response**, we also rendered the motions into videos and input them, along with the text descriptions, into Gemini Pro, which has visual understanding capabilities. This approach helps mitigate the hallucination issue that can arise when text is used without visual grounding. The final evaluation results showed that our text annotations achieved higher motion matching accuracy.\\n\\n6. The performance gap with T2M-GPT, we believe, is not due to differences in model size. The key factor lies in the text encoder used by T2M-GPT, which is a frozen CLIP text encoder.
Due to the inherent limitations of CLIP's text encoding capabilities, models trained in this manner may struggle to comprehend a broader range of motion-related language, even if they demonstrate strong motion generation abilities. In contrast, decoder-only LLM-based large motion models, which jointly train text tokens and motion tokens, achieve both superior text-motion semantic alignment and enhanced motion generation capabilities.\\n\\nWe are sincerely expecting your further feedback!\\n\\nBest regards,\\n\\nThe Authors\\n\\n[1]Conditional Language Learning with Context. ICML 2024\\n\\n[2]Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization. ICML 2024\\n\\n[3]Emu3: Next-Token Prediction is All You Need\\n\\n[4]VideoPoet: A Large Language Model for Zero-Shot Video Generation. ICML 2024\\n\\n[5]Unified-IO 2: Scaling Autoregressive Multimodal Models with Vision, Language, Audio, and Action. CVPR 2024\"}", "{\"title\": \"Official Comment to Reviewer pT95's weakness\", \"comment\": \"We appreciate the reviewer\\u2019s careful assessment and acknowledgment of our paper's clarity, motivation, and contributions.\\n\\nBefore reading our feedback, we introduce a visualization website to validate the quality of our dataset at http://www.motionbase3d.com. (Please note: this link may be unstable, and refreshing the page 10~20 times may be required to view the content). In case the reviewer is unable to access the site, we have also provided an anonymous cloud storage link containing all the examples featured on the website: [click this link](https://www.dropbox.com/scl/fo/6w7yuhun8mpuz9yuaz5ej/ALUo_dOLYIyNgOZ3Ll85JCI?rlkey=yecydfnno43b5o602ae2fmr0w&st=rq97kdcw&dl=0). 
The platform includes visualized samples from our dataset, physically grounded examples of refined motions, rendered results generated by different motion models, and live demonstrations in both simulated environments and real-world settings (using H1 and G1 UNITREE humanoid robots). We hope this platform addresses the data quality concern reviewers may have. If the reviewer requires additional information or examples, we are more than happy to upload further relevant results upon request. Additionally, our dataset is highly scalable due to the robust data collection pipeline. As of the start of the rebuttal period, we have collected over 1.5 million motion trajectories and 4 million motion-text pairs, which is 1.5X the scale of our first submission. These datasets have undergone three rounds of rigorous data filtering to ensure their quality.\\n\\nHere are our responses to weaknesses and questions.\\n\\n**W1. The layout of the paper is somewhat challenging for readers. It contains numerous messages and analyses, requiring readers to scroll up and down frequently to locate referenced tables. Additionally, due to page limitations, many explanations are placed in the Appendix. Tables and figures are positioned mid-page without aligning well with the paragraph height, disrupting the flow.**\\n\\nThank you for your comments on the paper's layout. We wanted to present as much content as possible, which is why the layout is not well aligned in places. In the revised manuscript, we will: (1) Optimize Figure/Table Placement: Align figures and tables with the paragraph height and place them as close as possible to the relevant text to enhance readability. (2) Improve Referencing: Ensure clear and unambiguous references to figures and tables, reducing the need for excessive scrolling.
(3) Streamline Appendix: Integrate critical explanations and analyses into the main text where appropriate, while reorganizing the remaining appendix content for better structure and clarity.\\n\\n**W2 & W3. Important reference and minor issues and typos.**\\nThank you for your notice. We have cited this important reference and corrected the issues and typos you pointed out in the revised paper.\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's weakness 1\", \"comment\": \"We greatly appreciate the time and effort you invested in providing these detailed observations, questions, and comments. We have carefully considered your comments and have outlined our responses and proposed changes below. We hope these adjustments and explanations address your concerns and further enhance the manuscript.\\n\\nBefore reading our feedback, we introduce a visualization website to validate the quality of our dataset at http://www.motionbase3d.com. (**Please note: this link may be unstable, and refreshing the page 10~20 times may be required to view the content**). In case the reviewer is unable to access the site, we have also provided an anonymous cloud storage link containing all the examples featured on the website: [click this link](https://www.dropbox.com/scl/fo/6w7yuhun8mpuz9yuaz5ej/ALUo_dOLYIyNgOZ3Ll85JCI?rlkey=yecydfnno43b5o602ae2fmr0w&st=rq97kdcw&dl=0). The platform includes visualized samples from our dataset, physically grounded examples of refined motions, rendered results generated by different motion models, and live demonstrations in both simulated environments and real-world settings (using H1 and G1 UNITREE humanoid robots). We hope this platform addresses the data quality concern reviewers may have. If the reviewer requires additional information or examples, we are more than happy to upload further relevant results upon request. Additionally, our dataset is highly scalable due to the robust data collection pipeline. 
As of the start of the rebuttal period, we have collected over 1.5 million motion trajectories and 4 million motion-text pairs, 1.5 times the size of our first submission. These datasets have undergone three rounds of rigorous data filtering to ensure their quality.\\n\\n\\n**W1. [The motion collection process] How do you evaluate and make sure the quality of collected motion data, avoid issues like jittering and foot sliding**\\n\\nWe have implemented several strategies to evaluate and refine the motions generated in MotionBase.\\nFirst, thanks to our visualization platform, we can efficiently sample and assess a large number of motions from MotionBase.\\nThen, to ensure motion quality, we adopt the following steps:\\n1. We first train an RL-based policy $ \\\\pi_{\\\\text{refine}} $ to refine raw motions, ensuring they adhere to physical laws (e.g., maintaining balance) and appear more realistic. Specifically, this policy takes raw motion sequences as input, treating them as target poses, and generates new motion sequences that satisfy physical laws in a simulated environment, eliminating issues like jittering and foot-sliding.\\n2. While effective, the RL-based policy $ \\\\pi_{\\\\text{refine}} $ may struggle with drastic movements in target poses, leading to slipping within the simulation. For such cases, we apply the following criteria: if a refined motion maintains balance in the simulated environment for a specific duration, it can be regarded as high quality with no severe issues like jittering or sliding; if a motion fails to maintain balance from the start, it is considered a bad sample and discarded.\\n3. For motions that pass **STEP 2**, we utilize a pretrained motion model like RoHM[1] for further refinement. Additionally, we experiment with more powerful motion reconstruction models such as WHAM[2]. 
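For concreteness, the balance-based triage in step 2 could be sketched as follows. This is an illustrative sketch only: the function name, the stability threshold, and the handling of motions that lose balance partway through are assumptions, not the authors' actual implementation.

```python
def triage_refined_motion(balanced_frames, min_stable_frames=30):
    """Classify a policy-refined motion by how long it stays balanced.

    balanced_frames: one boolean per simulated frame, True while the
    character maintains balance under the refinement policy.
    Returns "keep" or "discard".
    """
    # A motion that fails to maintain balance from the start is a bad sample.
    if not balanced_frames or not balanced_frames[0]:
        return "discard"
    # Count the initial stretch of balanced simulation frames.
    stable = 0
    for ok in balanced_frames:
        if not ok:
            break
        stable += 1
    # Balance held for the required duration -> regarded as high quality.
    # Kept motions would then proceed to step 3 (model-based refinement).
    return "keep" if stable >= min_stable_frames else "discard"
```

Under this sketch, kept motions are the ones handed to the pretrained refinement models mentioned in step 3.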
Based on our experience, existing motion models have been able to effectively enhance the quality of raw motions.\\n\\n[1] RoHM: Robust Human Motion Reconstruction via Diffusion\\n\\n[2] WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion\\n\\n**W1. [The motion collection process] If the quality of the ground truth is not good enough, how can you generate good motion? Therefore, the result in L395-402 is not solid and convincing.**\\n\\nWe understand your concerns regarding the impact of motion quality and the results discussed in lines L395\\u2013402. First, we believe the poor FID results in L395\\u2013402 primarily stem from the limitations of the retrieval model used in the ICLR version. The model [1], initially trained on HumanML3D/MotionX, was applied to evaluate MotionBase, which introduced a mismatch. To address this, we have adopted a more robust retrieval model specifically trained on MotionBase. With this improved model, we observed significant performance gains, particularly in the FID metric. \\nSecond, as previously detailed, we have developed a functional toolbox to refine both motion and text data through various strategies. MotionBase has undergone three rounds of rigorous data cleaning and verification, and we plan to implement additional filtering processes to further enhance its quality. For better visualization, we also showcase our data on a website and hope these enhancements address your concerns and bolster confidence in our results.\\n\\n[1] Generating Diverse and Natural 3D Human Motions From Text\"}", "{\"summary\": \"This paper claims to propose a large motion model with a very large motion database. However, the motion quality is not well evaluated. Besides, the authors propose a motion quantization method, which is borrowed from LFQ (Mentzer et al., 2023). 
The authors claim good generation quality for the generated results, but this is not supported by a demo.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The writing of this work is a bit fancy.\", \"The statistics of the dataset are clear.\"], \"weaknesses\": \"There are several fundamental concerns about this work. Each of these is fatal.\\n\\n1. **The motion collection process.** This process contains several issues. \\n - This work does not evaluate the quality of the video mocap data. To my knowledge, even the latest Motion-X++ suffers from significant jittering and foot sliding. How can your method escape from this? **(my main concern)**\\n - If the quality of the ground truth is not good enough, how can you generate good motion? Therefore, the result in L395-402 is not solid and convincing. **(my main concern)** I suggest the authors read the blog [1] written by a well-known graphics scientist, Daniel Holden. \\n - The limited contribution of the dataset. The video data comes from InternViD and WebVid, and the data collection process is from Motion-X and other methods. The dataset contribution is limited. \\n\\n2. **The text annotation.**\\n - The annotation quality of the text by Gemini-1.5-pro is not well evaluated. In my practice, it always contains some answers like \\\"sorry...\\\". The results should be corrected by researchers one by one. Has the >1M data been checked? \\n - The proposed contribution of hierarchical text is not discussed well. Has it been used in the model training? If I missed this, please point it out. If this annotation is not used, what is the motivation for this hierarchical text contribution? Will it make the result more fine-grained? It is quite unclear. **(my main concern)** \\n\\n3. **Limited technical/evaluation contribution.** The LFQ is proposed by the original paper. The authors do not offer any new understanding of it. 
Besides, the H2VQ proposed in HumanTOMATO (ICML-24) is also missing from the discussion and comparison. \\n\\n4. This work does not include any demo video, which is unacceptable in the animation community. The FID in Table 5 is extremely large, which strengthens my concerns about the motion quality. \\n\\n5. **Motivation.** The motivation for introducing LLMs is not clear. The method lacks a basic transformer baseline (like in T2M-GPT, CVPR-23) for comparison. Besides, it is also not clear whether pre-trained LLM parameters are used. Whether the fine-tuning method is LoRA or not is also not well discussed. Therefore, it is not technically sound. **This is my strong concern.**\\n\\n[1]: Daniel Holden, https://theorangeduck.com/page/animation-quality.\", \"questions\": [\"The vocabularies of the LLM and motion codebooks are different. How do the authors handle this issue? What is the efficiency of the LLM-based motion generation method? Please compare with the fastest motion generation method, MotionLCM (ECCV-24).\", \"**I would like to know why the authors cite [1].**\", \"[1]: Zheng et al., Steve-eye: Equipping llm-based embodied agents with visual perception in open worlds.\"], \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"This paper contains a dataset of human subjects. The human RGB videos are also included. Besides, some of the data comes from other datasets, and it should be made clear whether commercial usage is permitted.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"@pT95, what is your opinion on these? 
I hope you can join our discussion as our ratings are diverse.\"}", "{\"comment\": \"I apologize for the confusion; I meant **6: marginally above the acceptance threshold**.\\n\\nI believe it would be reasonable to determine my score after the discussion between the authors and WfZ5 concludes, however, I am finalizing my score now as the deadline is only a few hours away. Hence, I have taken a conservative position on the unresolved discussion and leave it for the Area Chair to make the final decision.\\n\\nThis score adjustment reflects that I am not entirely confident in my evaluation, while still believing that the dataset would be very helpful for the community. I am confident in my evaluation of the various meaningful messages derived from the research question, *\\\"Can a large motion model be a promising direction for motion generation?\\\"* assuming that the experimental results are valid. These contributions are significant, and I believe that the progression from LLM to LMM represents the right future direction. \\n\\nWhile I value the direction the paper proposes, I find myself limited in my ability to fully evaluate the validity of the data collection pipeline with LLM due to my lack of expertise in LLM domain. Recognizing the potential issues raised by other reviewers, such as the effectiveness of static motions and hierarchical text evaluation. I cannot maintain my score without a stronger basis for my evaluation. As a result, I believe it is appropriate to adopt a more conservative stance.\"}", "{\"comment\": \"Thank you for taking the time to address my questions and concerns. I appreciate the effort the authors have put into this work, and I continue to believe that the dataset has the potential to be a valuable resource for the community. 
However, I also share the concern raised by other reviewers about the high proportion of static data (44%), which could potentially limit the dataset's utility for certain applications.\\nGiven the complexity of the topic and the specific concerns raised by Reviewer WfZ5, I feel it would be most appropriate to reserve my final evaluation until the ongoing discussion between the authors and Reviewer WfZ5 concludes.\"}", "{\"comment\": \"Dear reviewer,\\n\\nAs we near the conclusion of the discussion phase, we would like to inquire if our response has effectively addressed your inquiries. In the rebuttal, we have provided explanations for your questions and concerns. Should our explanations have met your expectations, we kindly request your consideration in revising the score accordingly.\\n\\nShould you have any additional comments or concerns, we are more than willing to address them promptly during the response period. We deeply appreciate your constructive and insightful feedback, which has greatly contributed to the refinement of our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": [\"Thanks for the response from the authors. I acknowledge your efforts in this.\", \"For PHC, do you use the default setting of the PHC? Any modification? Besides, the success rate of PHC is too low. Is there any reason for this? How do you deal with cases like \\\"sit\\\"?\", \"I acknowledge that \\\"static images contribute to video understanding or single-frame object bounding boxes aid video tracking\\\". However, the general motion generation aims to fit a data distribution. Once you introduce the static ones, it will make your motion over-smoothing. Could you please clarify this? 
Besides, the setting of Table 4 is not clear, nor is it clear why it can strongly support the claim.\", \"Could you please concatenate the videos of motion and data in `example_zoo_of_dataset/` so they can be checked jointly?\", \"The video of sim2real, like `sim_real_demo/video_real/s0000.mp4`, shows that the robot needs a residual force, because there is a line hanging it from the sky, which is not physically plausible motion. This violates the claim of physics simulation. Besides, one of the main claims of PHC is removing the residual force. This is not aligned with the previous usage of PHC. I am quite confused.\", \"On independently assessing the quality of the text description without any visual inputs: the LLM often hallucinates without any visual grounding. I suggest the authors improve the evaluation process.\", \"The FID of T2M-GPT is surprisingly higher than the others. Have you ever scaled the size of T2M-GPT? I have a strong concern about this.\", \"I hope the concerns can be resolved.\"]}", "{\"comment\": [\"Dear reviewers,\", \"We sincerely appreciate the time and effort each reviewer has dedicated to this discussion. However, with the rebuttal deadline approaching, there are still some remaining questions that we believe should be addressed.\", \"Before raising these questions, we want to clarify an important point: why do some reviewers perceive the level of detail we provide as resembling a major revision? **In fact, we are also puzzled by this. We suspect that after several rounds of rebuttal, some reviewers may have mistaken minor aspects of our work for major contributions.**\", \"For example, reviewer WfZ5 thinks we place too much emphasis on the robot demos. 
Here is a brief rebuttal summary of this point (**if anything in the following is wrong, please feel free to point it out**):\", \"Q: We need a recorded video.\", \"A: Sure, we made a website for you.\", \"Q: The external link is not allowed; you should provide a supp file to OpenReview.\", \"A: OK, we provide the zipfile that contains all demos on the website.\", \"Q: Is it challenging for you to concatenate them into a video?\", \"A: Sure, we concatenate our demos and update the supp file once again.\", \"Q: The resolution is too small; details cannot be seen.\", \"A: OK, we provide a higher-resolution video for you.\", \"Q: Why do you hang the robot in your robot demo?\", \"A: The robot videos are only demos.\", \"Q: The authors should not emphasize the demo too much. If you emphasize this too much, reviewers will seriously treat it as your unique contribution.\", \"After addressing the reviewer's questions **about demos**, we believe they may have received an overwhelming amount of information from us, potentially leading to the mistaken impression that the robot demo is a significant aspect of our work. **In fact, we do not even discuss it in the main paper.** This misunderstanding occurred multiple times during the rebuttal process. For example, PHC is a technique proposed by others, and we use it only as one of the methods to refine our data, which is briefly mentioned in just 1\\u20132 sentences in our paper, just like most previous works (e.g., HumanPlus, MotionX). However, during the rebuttal, we provided two pages of detailed explanations about PHC. 
As a result, the reviewer mistakenly regarded it as a central aspect of MotionBase, **which we have never claimed.**\", \"Finally, we want to return to our remaining questions, which we believe are important for drawing a conclusion:\", \"Does the reviewer agree that the ICLR rating should at least consider the revised paper, in accordance with the spirit of ICLR?\", \"Do you trust our experimental results, our code, and our checkpoints? If you do, why do you still have concerns about static data? If you do not, what else do we need to provide? Is there any way to convince you?\", \"The original dynamic data is nearly 400K, almost 5 times larger than before. The revised version is 800K, 10 times larger than before. If the reviewer is not convinced by the static data, do you recognize the contribution of our MotionBase?\", \"If the reviewer thinks static data is harmful, do you believe that our dynamic data should have higher quality? This should be a conclusion that can be reached simply through logic.\", \"Best,\", \"Authors\"]}", "{\"title\": \"Official Comment to Reviewer qpUP's weakness 1\", \"comment\": \"We greatly appreciate the time and effort you invested in providing these detailed observations, questions, and comments. We have carefully considered your comments and have outlined our responses and proposed changes below. We hope these adjustments and explanations address your concerns and further enhance the manuscript.\\n\\nBefore reading our feedback, we introduce a visualization website to validate the quality of our dataset at http://www.motionbase3d.com. (**Please note: this link may be unstable, and refreshing the page 10~20 times may be required to view the content**). 
In case the reviewer is unable to access the site, we have also provided an anonymous cloud storage link containing all the examples featured on the website: [click this link](https://www.dropbox.com/scl/fo/6w7yuhun8mpuz9yuaz5ej/ALUo_dOLYIyNgOZ3Ll85JCI?rlkey=yecydfnno43b5o602ae2fmr0w&st=rq97kdcw&dl=0). \\nThe platform includes visualized samples from our dataset, physically grounded examples of refined motions, rendered results generated by different motion models, and live demonstrations in both simulated environments and real-world settings (using H1 and G1 UNITREE humanoid robots). We hope this visualization platform addresses the data quality concern reviewers may have. If the reviewer requires additional information or examples, we are more than happy to upload further relevant results upon request. Additionally, our dataset is highly scalable due to the robust data collection pipeline. As of the start of the rebuttal period, we have collected over 1.5 million motion trajectories and 4 million motion-text pairs, 1.5 times the size of our first submission. These datasets have undergone three rounds of rigorous data filtering to ensure their quality.\\n\\nHere are our responses to the weaknesses.\\n\\n**W1. Does the large proportion of single-frame motion data in the dataset contribute to static motion generation?**\\n\\nNo.\\n1. First, our dataset is designed to be highly scalable, with the latest version now including a reduced share (44%) of one-frame motion data and over 1.5 million motions, a significant expansion over the ICLR version. Notably, this update incorporates **more than 50% additional multi-frame motions** extracted from open-source human behavior datasets (e.g., NTU120, Kinetics-700) and publicly available web videos. We plan to further increase the proportion of multi-frame motions in future versions.\\n2. Second, a substantial portion of one-frame motions can be transformed into multi-frame sequences. 
To achieve this, we train an RL-based policy $\\\\pi_{\\\\rm multi-frame}$ using the AMASS dataset. This policy generates physically plausible motion sequences within a simulation environment by using the single-frame motion as the target pose. Due to potential instability caused by drastic lower-body movements, some generated motions may fail to maintain balance.\\n3. For single-frame motions that fail conversion in **STEP 2**, we employ a pretrained, target-conditioned motion generator based on existing high-quality motion data. This generator uses the single-frame motion as the target pose and generates its preceding motion, effectively producing the entire motion sequence. Compared to motions converted through **STEP 2**, these generated motions may not fully adhere to physical laws.\\n4. To further avoid static motion generation, we provide distinct prompts for single-frame and multi-frame motions during the LLM's fine-tuning, ensuring that the model is capable of generating dynamic motion in response to specific commands.\"}", "{\"comment\": [\"## **6-round questions about \\\"LLM Motivation and T2M-GPT Comparison\\\"**\", \"**Q1**: Hierarchical text contribution isn\\u2019t well discussed. Has it been used in training? What\\u2019s the motivation?\", \"**A1**: Yes, hierarchical text is used in pretraining. It improves (1) text corpus diversity, (2) finer-grained descriptions, and (3) performance over basic text.\", \"**Q2**: The motivation for introducing LLM is unclear. Why no T2M-GPT baseline? Are pre-trained parameters and fine-tuning (e.g., LoRA) used?\", \"**A2**: LLMs enhance language-motion understanding. Training: (1) pre-trained parameters, (2) full fine-tuning (LoRA struggles with motion tokens). HumanML3D results outperform the T2M-GPT baseline.\", \"**Q3**: Why do the hierarchical and full-param results differ? 
I do not see the result of T2M-GPT as a baseline.\", \"**A3**: Full-param used only basic text; hierarchical used both basic and detailed text. T2M-GPT results on MotionBase are in Appendix C.7, showing lower performance than ours.\", \"**Q4**: T2M-GPT's FID is surprisingly high. Did you scale its size?\", \"**A4**: The gap isn\\u2019t size-related but due to T2M-GPT\\u2019s frozen CLIP encoder, which limits text understanding versus our joint training approach.\", \"**Q5**: What proves frozen CLIP is ineffective? CLIP mainly helps T-M alignment.\", \"**A5**: We retrained T2M-GPT with parameters comparable to GPT-2 medium (380M vs. 355M). It still underperforms our approach.\", \"**Q6**: Why does T2M-GPT struggle against GPT-2? Pretraining or CLIP settings?\", \"**A6**: T2M-GPT\\u2019s frozen CLIP encoder provides fixed text representations, while GPT-2 enables joint optimization of text and motion, achieving better semantic alignment.\", \"**Q7**: NO FURTHER REPLY\", \"---\", \"## **3-round questions about \\\"Should the reviewer base the rating on the initial paper?\\\"**\", \"**Q1**: I would like to clarify that the significant revisions related to contributions are not supported officially, according to the review guidance.\", \"**A1**: We have not made significant changes. All updates address reviewers' comments, align with our initial conclusions, and do not contradict them.\", \"**Q2**: According to the ICLR review pipeline, I have the right to ignore these major changes.\", \"**A2**: Disregarding our responses would be unfair. ICLR allows revisions during rebuttals to address reviewer concerns, unlike conferences such as CVPR and NeurIPS. These updates aim to clarify misunderstandings and do not alter our original contributions.\", \"**Q3**: PHC and the dynamic data expansion are major revisions. 
The discussion process is for addressing misunderstandings in the reviews, not for providing major revisions.\", \"**A3**: The ICLR reviewer guideline states:\"], \"engage_in_discussion\": \"The discussion phase at ICLR is different from most conferences in the AI/ML community. During this phase, reviewers, authors and area chairs engage in asynchronous discussion and authors are allowed to revise their submissions to address concerns that arise. It is crucial that you are actively engaged during this phase. Maintain a spirit of openness to changing your initial recommendation (either to a more positive or more negative) rating.\\n- **Q4**: NO FURTHER REPLY\"}", "{\"title\": \"Official Comment to Reviewer pT95's question\", \"comment\": \"**Q1. What is the ratio between synthetic, static, and real data in Table 4?**\\n\\nIn the latest version, synthetic data accounts for approximately 28% of the total dataset, while real data makes up about 72%. Additionally, static data constitutes about 44% of all data. The proportion of synthetic and static data will continue to decrease as MotionBase expands. We have included these proportion numbers in the revised version of Table 4.\\n\\n**Q2. The quality of occlusion cases or blurred images. How do the authors recognize that the motion is blurred or occluded? In multi-person settings, occlusion is expected to be very common.**\\n\\nOcclusion and motion blur are indeed very common in human-related videos. To avoid these issues, we adopt the following steps:\\n\\n - 2D Keypoint Detection Filtering. We first sample key frames from each video. A pretrained 2D keypoint detector is then used to extract skeleton keypoints for each human in the key frames. If a significant portion of keypoints has predicted confidence scores below a specific threshold, we consider the human motion to be occluded and exclude it from further processing.\\n\\n - Segment Filtering. 
We utilize a visual foundation model, such as Segment Anything, to generate segmentation masks for each frame. If a large object is detected in front of the human, indicating occlusion, we filter out the corresponding motion data.\\n\\n - Adjacent Frame Smoothing. To handle motion blur, we track the trajectory of each human whose motion needs to be extracted from the video. For timestamps with low-confidence keypoint scores, we smooth the trajectory using adjacent detection results to ensure continuity and accuracy. \\n\\nIn addition to occlusion, we apply many additional post-processing techniques to enhance the quality of the dataset. If the reviewer has any further questions about the process, please feel free to ask. We are willing to provide more details.\\n\\nWe hope our clarifications address your questions, and we kindly ask if there are additional concerns we can address to further improve your support for the paper. Thank you again for your valuable suggestions.\"}", "{\"title\": \"Reply to rebuttal\", \"comment\": \"Thanks for the rebuttal. However, I still have the remaining questions.\\n\\n1. The content of dataset collections and processes, including the training of policies and motion generators, are not included in the main scripts and appendix. In addition, they lack the quantitative experiments to evaluate the policy. \\n\\n2. Furthermore, the motion qualities have not been verified through quantitative experiments. I suggest the author carefully compare their methods on some benchmarks, like RICH, EMDB2, etc. The Gemini can somehow evaluate the alignment between the motion and the text. However, it is improper to evaluate the motion qualities.\\n\\n3. LFQ is one of the main contributions of the paper, however, the authors scale the model with VQ instead of LFQ. I think this should not be regarded as a good thing in ICLR. \\n\\nOverall, I decided to keep my score now.\"}", "{\"comment\": \"6. 
For T2M-GPT, we follow the official training configuration, using the text embedding from the pretrained CLIP ViT-B/32 version, and for GPT-2, we similarly use the official pre-trained model [8]. The experimental results show inferior performance, which we believe is self-evident. T2M-GPT uses a frozen CLIP text encoder, resulting in fixed text representations. In contrast, GPT-2 enables joint training of text and motion tokens, allowing text representations to be optimized along with the task, thus achieving better text-motion semantic alignment.\\n\\nSincerely, we hope the discussion can be based on the officially published literature and the experimental results in our paper, rather than unofficial blogs. **We have also provided our code and checkpoints from the beginning. We would be more than happy for you to run them yourself if you are not convinced by the reported results.** \\n\\nIf you have further questions, feel free to tell us; or, if our responses successfully address your concerns, would you mind reconsidering your review based on these more reliable references?\\n\\nSincerely, \\n\\nAuthors\\n\\n[1] MotionX: A Large-scale 3D Expressive Whole-body Human Motion Dataset, CVPR 24\\n\\n[2] Holistic-Motion2D: Scalable Whole-body Human Motion Generation in 2D Space\\n\\n[3] Morph: A Motion-free Physics Optimization Framework for Human Motion Generation, 2024\\n\\n[4] https://github.com/IDEA-Research/DINO-X-API\\n\\n[5] https://github.com/ViTAE-Transformer/ViTPose, 1.4K star\\n\\n[6] https://github.com/caizhongang/SMPLer-X, 1K star\\n\\n[7] Reconstructing World-grounded Humans with Accurate 3D Motion, CVPR2024\\n\\n[8] https://huggingface.co/openai-community/gpt2-medium\"}", "{\"summary\": \"This paper collects a new large-scale text-motion dataset called MotionBase and then finetunes LLMs of different sizes. 
Additionally, for better scaling, the authors follow the video domain to train a new LFQ tokenizer with a large vocab size.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The authors try to scale the tokenizer vocab size and the model size.\\n2. The authors collect a new large-scale text-motion dataset.\", \"weaknesses\": \"1. The biggest severe weakness is the inclusion of one-frame pose data in the database. The Agora, mscoco, muco_3dhp, and other datasets are used for 3d pose estimation, and they even occupy a large portion of the whole database, which may lead to static motion generation.\\n2. The motion quality has not been validated. Neither the estimated motions nor the texts generated by the LLM have been checked manually or by any algorithm. The video collection process is not clarified clearly. A lot of web videos are long and contain various camera shots. Which shot boundary detection algorithm are you using? And how many frames do you insert into the LLM to get the text? More details need to be added. \\n3. The experiments with the static data ablation study are not fair. Does the validation set contain static data and synthetic data?\", \"questions\": \"1. The FID in Table 6 is so weird. The FID of reconstruction is 1.76 while the generation FID in Table 3 is 0.166. This is impossible from my understanding. I suspect that the reconstruction result is not good enough. The original MPJPE calculation will subtract the root movement. If you calculate MPJPE similarly, the high reconstruction FID means the translations are not accurate.\\n2. What do the authors get from the scaling experiments? Did the authors see any sign of emergent behavior? The shown examples are common cases that can also be observed in other motion generation work.\\n3. Did the supervised label contain only motion tokens or both text and motion tokens? \\n4. Did the authors try zero-shot text testing? 
For example, could the largest model do some texts like \\\"The old man with a broken leg is walking forward slowly with a crane\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer pT95.\\n\\nThank you again for your valuable review. We have responded to your concern about static data quality. We hope to hear back from you. **If you have any unresolved concerns, please feel free to let us know! Or if our responses have addressed your questions, we would be grateful if you could consider adjusting your score accordingly.**\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Final recommendation\", \"comment\": \"While I am favorable to the paper because I believe that the dataset proposed in the work will be helpful for the community, I am not fully confident in my evaluation of the paper outside the human motion generation aspect, and it seems that other reviewers still have some concerns. Therefore, I will position myself more conservatively by reducing my influence on the final decision.\\n\\n**Final Score** \\n**6: marginally below the acceptance threshold**\"}", "{\"comment\": \"Dear reviewer, thank you for your feedback.\\n\\n1. Thanks for your notice. We have included these contents in our new appendix version. For policy training, we use the official implementation from PHC[1] repository, which achieves a 97.1% success rate in tracking AMASS motions. For the motion completion model, we use the official model from MotionGPT[2].\\n\\n[1]Perpetual Humanoid Control for Real-time Simulated Avatars\\n\\n[2]MotionGPT: Human Motion as a Foreign Language\\n\\n2. The RICH and EMDB2 benchmarks are primarily designed for pose estimation evaluation, which differs from our verification objectives. Quantitative evaluation of motion quality in large-scale video data is challenging due to the lack of standard ground truth. 
In our motion estimation pipeline, we use officially pre-trained models that have been validated on their respective benchmarks, maintaining consistent performance with the original models. Considering the limitations of Gemini evaluation, we employ multiple methods to ensure data quality, including physical constraints and empirical filtering.\\n\\n3. The primary contributions of our paper are the introduction of a large-scale dataset and the validation of the scaling law. We chose VQ as the primary method to ensure a fair comparison with existing baselines, as most current text-to-motion approaches are based on VQ. However, we understand your concerns and have conducted additional experiments to highlight the capabilities of LFQ. Specifically, we present data scaling experiments using GPT-2 in the first block and parameter scaling experiments with 0.02M training samples in the second block. These results are consistent with our initial conclusions, demonstrating robustness across scaling scenarios. Moreover, LFQ shows a slight performance improvement over VQ when evaluated with GPT-2. Due to time and GPU resource limitations, we were unable to conduct further experiments on 7B/13B models with 1M data at this time, but we plan to include them in future revisions. 
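As background for the VQ/LFQ comparison above, the core idea of lookup-free quantization can be sketched in a few lines: each latent dimension is binarized by its sign, and the resulting sign pattern directly serves as the token index, giving a vocabulary of 2^d without an explicit codebook lookup. This is a generic illustration of the technique, not the authors' exact implementation, and the straight-through gradient trick needed for training is omitted.

```python
def lfq_tokenize(latent):
    """Map a real-valued latent vector to (quantized vector, token index)."""
    # Binarize each dimension by its sign.
    quantized = [1.0 if x > 0 else -1.0 for x in latent]
    # Interpret the sign pattern as a binary number: bit i is set when
    # dimension i is positive, so a d-dim latent yields 2**d possible tokens.
    index = sum(1 << i for i, q in enumerate(quantized) if q > 0)
    return quantized, index
```

Because the index is computed directly from the signs, scaling the vocabulary only requires widening the latent dimension, which is what makes very large motion vocabularies practical.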
The current additional results have been included in the appendix for reference.\\n\\n| Model | #Inst | MotionX | | | MotionBase | | |\\n|-------|-------|---------|---------|---------|------------|---------|---------|\\n| | | R@1 | R@3 | FID | R@1 | R@3 | FID |\\n| GPT-2-VQ | 1M | 0.36 | 0.62 | 5.08 | 0.26 | 0.54 | 0.52 |\\n| GPT-2-LFQ | 0.02M | 0.16 | 0.34 | 76.21 | 0.04 | 0.08 | 136.25 |\\n| GPT-2-LFQ | 0.08M | 0.33 | 0.55 | 6.24 | 0.06 | 0.14 | 128.07 |\\n| GPT-2-LFQ | 1M | 0.39 | 0.62 | 4.28 | 0.32 | 0.60 | 0.45 |\\n| GPT-2-LFQ | 0.02M | 0.16 | 0.34 | 76.21 | 0.04 | 0.08 | 136.25 |\\n| LLaMA-2-7B-LFQ | 0.02M | 0.22 | 0.38 | 68.54 | 0.06 | 0.15 | 125.08 |\\n| LLaMA-2-13B-LFQ | 0.02M | 0.20 | 0.35 | 71.23 | 0.08 | 0.18 | 119.03 |\\n\\nThank you again for your valuable review. If you have any unresolved concerns, please feel free to let us know! Or if our responses have addressed your questions, we would be grateful if you could consider adjusting your score accordingly.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Official Comment by Reviewer qpUP\", \"comment\": \"I also have concerns regarding the static data used in the study. References [4][5][6][7] all focus on detection algorithms, while in the context of data generation, the distribution of the data itself plays a critical role. To illustrate, consider the case of pose data taken from the middle frames of a \\u201crunning\\u201d sequence. The corresponding caption might be something like \\\"A man is running\\\" or a similar description. This type of caption is very close to the actual running motion data. When sampling, it could influence the generation process. While I agree that joint training on pose and motion data is beneficial, repeatedly using the same pose 64 times is problematic and not ideal.\\n\\nAdditionally, several important aspects of dataset construction were either not addressed or mentioned in the original submission, such as policy training, retargeting, and dataset evaluation. 
These details, I believe, are crucial and should not be overlooked. However, all of these details are supplemented during the rebuttal period. This is unfair to other submissions. \\n\\nI acknowledge and appreciate the authors' efforts, but I feel the paper is not fully prepared for the ICLR. If the authors can include these additional details and resubmit, I believe the work would be much stronger and would be a good paper.\\n\\nReviewer qpUP\"}", "{\"metareview\": \"The submission proposes MotionBase, a large-scale dataset of human motion, as well as methods that aim to develop a large motion model. The submission received mixed feedback before and after the rebuttal, with long, extensive, and somewhat intense discussions. The AC read the submission, reviews, rebuttals, and discussions.\\n\\nThe positive side of the submission is the usefulness of the dataset and the additional evaluation. The concerns are about the use of static frames, as well as the possibly \\\"major\\\" changes during the rebuttal. The AC agreed with the authors that links can be used and changes are allowed during rebuttal; meanwhile, the AC would also like to acknowledge that reviewers are indeed allowed to ignore changes if they are considered too major. Whether the changes are major is mostly a subjective decision, and people may have different opinions.\\n\\nThe AC carefully reviewed the submission and the anonymous link to better understand the dataset and technical contributions. The AC found the quality of the demos below expectations, with missing documentation and unclear visualizations. 
The authors are encouraged to thoroughly revise the submission to significantly improve the presentation of the main paper, supp, and video demos for the next venue.\", \"additional_comments_on_reviewer_discussion\": \"The discussion was intense, and eventually reviewers remained split.\"}", "{\"comment\": [\"Dear reviewer, we thank you once again for your valuable feedback, which has greatly helped improve the quality of our paper. As the rebuttal deadline approaches, we would like to confirm whether our responses have adequately addressed your concerns. Specifically, we have:\", \"(1) explained the details of the motion-capturing details.\", \"(2) clarified that the dataset includes massive dynamic motion sequences, which we believe is sufficient large to substantiate our data contribution, while also demonstrating the benefits of static data via our ablation in Table 4 and Appendix C.1 (Tables 7 & 8).\", \"(3) uploaded a supp material to OpenReview.\", \"(4) provided a detailed explanation of the scoring process based on the Gemini-score.\", \"(5) introduce the usage of hierarchical text.\", \"(6) address why the results of Hierarchical and Full settings differ.\", \"(7) clarified the purpose of citing reference [1].\", \"All new additions and modifications are highlighted in blue in the manuscript for your easy reference.\", \"We would greatly appreciate your confirmation on whether these responses have fully resolved your concerns. 
If not, we welcome any additional feedback or further questions you may have.\", \"Best regards,\", \"The Authors\"]}", "{\"summary\": \"The paper introduces MotionBase, a motion generation benchmark trained on a large amount of data, with a focus on motion generation with LLMs.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The work is well motivated in terms of:\\n- Showing the gap in prior work and the lack of domain generalization\\n- Showing limitations of prior metrics\\n- A new motion codebook\\n\\nThe new dataset is quite large in comparison with prior ones, which is a valuable addition to the community. It comes with a good set of text descriptions.\\n\\n- Evaluation on multiple datasets and multiple models with strong baselines\\n- Answers to important questions like the need for scale and the impact of model size on the task\\n- Discussion of OOD behaviour\\n- Ablation of motion quantization\", \"weaknesses\": \"I do not have many concerns about the work, more questions.\", \"questions\": \"\\u2013 Questions:\\nHow did the authors verify the correctness/accuracy of the pose estimation?\\nWhat do the authors think about the properties of a new metric?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's questions 3-7\", \"comment\": \"**Q3. The video demos on the website are not friendly for usage. Please provide them in supp.**\\n\\nWe provide a smaller zip file in OpenReview Supp. It's important to note that this file only contains partial examples, due to OpenReview's file size limitation (**<100MB**).\\n\\n**Q4. For the Gemini-score from 1 to 5, how can the method be fair enough to evaluate whether the texts are aligned with motion? 
With motion input?**\\n\\nRegarding your question on how the Gemini score (1-5) fairly evaluates the alignment between text and motion, we have outlined two methods of text evaluation in our response to W2.\\n\\nFirstly, we independently assess the quality of the text description. More importantly, we evaluate the alignment between the text and motion by providing both the text description and the corresponding rendered motion video as input to the Gemini-Pro model. This approach allows us to measure how accurately the text reflects the visual content. Specifically, we randomly sampled 500 rendered motion sequences and their corresponding text descriptions from each dataset. These samples were input into the Gemini-Pro model, which evaluated them using our predefined 1-5 point scoring criteria.\\n\\nThe results showed that the average Gemini scores for text descriptions were 2.25 for the MotionX dataset and 3.08 for the HumanML3D dataset. Notably, the MotionBase dataset achieved a significantly higher average score of 3.82, demonstrating that its text descriptions align more effectively with the motion compared to the other datasets.\\n\\n**Q5. For the hierarchical text issue, is your evaluator trained with hierarchical text? Why is the R-P of generated results higher than GT?**\\n\\nYes, our evaluator was also trained using hierarchical text, which consists of both \\\"basic\\\" text and \\\"detailed\\\" text. Regarding the observation that the R-P of generated results exceeds that of the ground truth (GT), **similar observations have been reported in several works, such as MoMask [1].** This discrepancy could be attributed to distribution differences between the training and testing sets of text-motion data. The evaluator, being a network trained on the training set, is inherently tailored to fit the training distribution. As a result, it may exhibit variance or bias errors when applied to the test set. 
If the generated text-motion data happens to align more closely with the training distribution, it can lead to evaluation metrics that surpass those of the GT test set. The quantitative evaluation of motion generation performance will be an interesting topic to explore.\\n\\n[1] MoMask: Generative Masked Modeling of 3D Human Motions\\n\\n**Q6. Why are your results of Hierarchical and Full Param not the same in the response? I do not see the result of T2M-GPT as a baseline in the response.**\\n\\nFirstly, regarding the inconsistency between the results of full-parameter and hierarchical-parameter experiments, it is important to clarify that the comparison between the full-parameter and LoRA-parameter experiments was conducted using models trained solely on \\\"basic\\\" text descriptions. As a result, these outcomes naturally differ from those of the hierarchical text experiments. However, the results from the full-parameter experiments are consistent with those obtained using \\\"basic\\\" text for training. To make this distinction clearer, we have included both sets of experimental results in Appendix C.5 and C.6 of the paper.\\n\\nSecondly, concerning the baseline results of T2M-GPT, we conducted additional experiments by training the T2M-GPT model on the MotionBase dataset and comparing it to a model based on GPT-2. As shown in the results, the T2M-GPT approach struggles to deliver competitive performance, further highlighting the critical role of pre-trained language models. Compared to methods that use a frozen CLIP model as the text encoder followed by a decoder, motion generation models based on decoder-only pre-trained language models achieve significantly better results. 
These findings have also been included in Appendix C.7 of the paper.\\n\\n| Model | R@1 \\u2191 | R@3 \\u2191 | FID \\u2193 | MMDist \\u2193 |\\n|-------|-------|-------|-------|-----------|\\n| Real | 0.290 | 0.563 | 0.011 | 3.480 |\\n| T2M-GPT | 0.111 | 0.250 | 73.063 | 9.208 |\\n| GPT-2 | 0.264 | 0.542 | 0.516 | 4.007 |\\n\\n**Q7. I am still a bit curious about citing reference [1]. Which part of reference [1] did the authors refer to?**\\n\\nThe citation of reference [1] serves to support the argument that recent research has extended instruction tuning to the multimodal domain, specifically in the Related Work discussion of Large Language Models and Multi-modality (Section 2.1).\\n\\nOnce again, we sincerely appreciate the time and effort you have dedicated to reviewing our work. We hope that our responses have effectively addressed your additional questions.\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's question\", \"comment\": \"**Q1. The vocabulary of the LLM and motion codebooks are different. How do authors handle this issue? What is the efficiency of the LLM-based motion generation method? Please compare with the fastest motion generation method, MotionLCM**\\n\\nTo address varying vocabularies, we extend the LLM's vocabulary by incorporating motion codebook tokens as additional entries. The LLM is then trained directly on text-motion paired data, enabling it to effectively learn the associations between textual descriptions and motion tokens.\\n\\nWe test our 7M LLM-based model on an RTX-4090 GPU for inference; the token generation speed is 23 tokens/sec. Since each token represents 4 frames of motion, the generation speed of our model is around 90 FPS, denoting a real-time inference speed that can be deployed in practice, although it still cannot beat the speed of the fastest methods like MotionLCM. However, our focus is exploring the capabilities of large models for text-to-motion generation. We are not prioritizing speed optimization in this work. 
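To make the vocabulary handling and the throughput arithmetic above concrete, here is a minimal sketch (not the actual implementation; the text vocabulary size of 50257 and the codebook size of 512 below are illustrative assumptions, not the model's real configuration) of appending motion codebook entries after a text vocabulary and converting token throughput into frames per second:

```python
# Minimal sketch: motion codebook tokens are appended after the text
# vocabulary, and generation throughput in tokens/sec is converted to
# frames/sec given the frames-per-token ratio. The sizes used here are
# illustrative only.

def extend_vocab(text_vocab_size: int, codebook_size: int) -> dict:
    """Map motion codebook index i -> LLM token id appended after text tokens."""
    return {i: text_vocab_size + i for i in range(codebook_size)}

def frames_per_second(tokens_per_sec: float, frames_per_token: int) -> float:
    """Each generated motion token decodes to a fixed number of motion frames."""
    return tokens_per_sec * frames_per_token

# Hypothetical example: a GPT-2-style 50257-token text vocabulary plus a
# 512-entry motion codebook, with the throughput numbers quoted above.
motion_to_llm = extend_vocab(text_vocab_size=50257, codebook_size=512)
fps = frames_per_second(tokens_per_sec=23, frames_per_token=4)
```

With the quoted 23 tokens/sec and 4-frame motion tokens, this yields 92 frames/sec, consistent with the "around 90 FPS" figure above.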
Speed improvements, such as using Flash Attention, are orthogonal to our current research goals and represent an important area for future work.\\n\\n**Q2. Why authors should cite [1]?**\\n\\nThe references cited here primarily aim to demonstrate that the development of multimodal large language models depends heavily on the availability of extensive data across various domains, including digital environments like games, egocentric scenarios, and others.\\n\\nWe hope our clarifications address your questions, and we kindly ask if there are additional concerns we can address to further improve your support for the paper. Thank you again for your valuable suggestions.\"}", "{\"title\": \"Official Comment to Reviewer CH9D\", \"comment\": \"We appreciate the reviewer\\u2019s careful assessment and acknowledgment of our paper's clarity, motivation, and contributions.\\n\\nBefore reading our feedback, we introduce a visualization website to validate the quality of our dataset at http://www.motionbase3d.com. (**Please note: this link may be unstable, and refreshing the page 10~20 times may be required to view the content**). In case the reviewer is unable to access the site, we have also provided an anonymous cloud storage link containing all the examples featured on the website: [click this link](https://www.dropbox.com/scl/fo/6w7yuhun8mpuz9yuaz5ej/ALUo_dOLYIyNgOZ3Ll85JCI?rlkey=yecydfnno43b5o602ae2fmr0w&st=rq97kdcw&dl=0). \\nThe platform includes visualized samples from our dataset, physically grounded examples of refined motions, rendered results generated by different motion models, and live demonstrations in both simulated environments and real-world settings (using H1 and G1 UNITREE humanoid robots). We hope this visualization platform addresses the data quality concern reviewers may have. If the reviewer requires additional information or examples, we are more than happy to upload further relevant results upon request. 
Additionally, our dataset is highly scalable due to the robust data collection pipeline. As of the start of the rebuttal period, we have collected over 1.5 million motion trajectories and 4 million motion-text pairs, which is 1.5 times the amount in our first submission. These datasets have undergone three rounds of rigorous data filtering to ensure their quality.\\n\\nHere are our responses to the questions.\\n\\n**Q1: How did the author verify the correctness/accuracy of the pose estimation?**\\n\\nWe acknowledge that this is a key concern for all reviewers, so we provide a visualization platform for all reviewers to easily examine our data. In addition, we outline the strategies used to verify the accuracy of the estimated human poses in our dataset:\\n - Physical Grounding: We train an RL-based policy $\\\\pi_{\\\\rm refine}$, which takes raw poses as input (target poses) and generates refined pose sequences. If the policy successfully tracks the raw poses and produces smooth, balanced motions, we assess the data as high-quality. This approach not only allows the policy $\\\\pi_{\\\\rm refine}$ to serve as a discriminator for verifying pose accuracy and avoiding issues like jittering or sliding, but also acts as an effective method to refine the raw motions. \\n - Rule-based Methods: In the latest version of MotionBase, we incorporate an additional level of text: rule-based descriptions derived from joint positions and angles. These precise descriptions enable us to assign rule-based scores to each motion. Motions with low scores are filtered out to improve overall quality. \\n - Manual Review via Visualization Platform: We conduct random sampling for manual checks and leverage a visualization platform to facilitate efficient assessment of motion quality. \\n\\n**Q2. 
What do authors think about properties of a new metric?**\\n\\nRegarding this, we believe that an ideal text-to-motion metric should exhibit the following properties:\\n - Fine-grained Quality Assessment: The new metric should capture detailed motion features, such as local hand and leg movements in addition to global postures. The metric should evaluate multiple aspects of quality, including naturalness, smoothness, fidelity, and style. In fact, the fine-grained assessment has long been ignored by previous works, partially because the lack of motions required to provide subtle differences. Our MotionBase provides an opportunity to achieve this.\\n - Human-like Perception: The metric should strongly align with human evaluation, considering factors like human kinematics and biomechanics. It should also account for perceptual similarities, even in the absence of reference motions.\\n - Physical Plausibility: The metric should evaluate adherence to physical laws, such as gravity and inertia, and analyze dynamic properties like joint torques and ground reaction forces. This is particularly important for applications in robotics.\\n - Strong Generalization: Ideal metrics should be capable of handling the one-to-many nature of text-to-motion mapping. They should generalize well across datasets and various motion types, such as walking, running, and jumping. \\n Due to time and space constraints, we could not comprehensively address or explore all these aspects. We plan to delve deeper into these directions in future work.\\n\\nWe hope our clarifications address your questions, and we kindly ask if there are additional concerns we can address to further improve your support for the paper. Thank you again for your valuable suggestions.\"}", "{\"comment\": \"Dear reviewer,\", \"we_have_further_questions\": \"Based on your feedback, do you believe the paper would be improved by removing all of our static data and corresponding experimental results? 
Doesn\\u2019t this suggestion seem somewhat unreasonable? Additionally, do you recognize the contribution of the dynamic part of MotionBase? **If you believe the static data is more problematic, then the dynamic data should be of higher quality to compensate, right?** Otherwise, how would the results improve?\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's questions 1-2\", \"comment\": \"Thank you for your prompt response! We sincerely appreciate the opportunity to address your concerns.\\n\\nTo begin, note the corresponding revisions in our latest manuscript have been marked in blue, and we would like to highlight two key points:\\n\\n1. **Revised manuscript:** We have not made significant modifications to our revised manuscript. All updates reflect responses to the reviewers' comments and questions, which align with our initial conclusions and do not contradict them.\\n\\n2. **External links:** The primary reason for not using the OpenReview Supp to provide our demos is due to **OpenReview's file size limitation of 100MB**, which is insufficient for our larger demo files. Since the anonymous website may be unstable, we provide an anonymous cloud drive link to show the demos, which is a widely accepted practice in current conference submissions. Additionally, we include a smaller zip file in OpenReview Supp, containing a subset of examples that fit within the 100MB restriction. After reviewing ICLR's guidelines (https://iclr.cc/Conferences/2025/CallForPapers), **we confirm that anonymous links are not forbidden during both the paper submission and discussion phases.**\", \"the_following_sections_are_our_latest_replies_to_the_questions\": \"**Q1. For your motion-capturing process, what is the policy you use? PHC? RFC? ASE? 
For the statement \\u201cIf a refined motion maintains balance in the simulated environment for a specific duration, it can be regarded as high-quality with no severe issues \\u2026\\u201d, what is the duration, and how to compute the success rate? What is your success rate? Besides, how do you deal with videos with shot cuts?**\\n\\nWe train the single primitive policy from PHC, which takes a raw motion sequence as input and generates refined joint positions using the simulator (IsaacGym). To ensure consistency in data format, we then convert the global joint positions into HumanML3D's representation using HumanML3D's conversion scripts. The success is determined by the duration of the simulation process. Following PHC's termination conditions, the policy is tasked with tracking the provided motion sequences. During simulation, we log the step number (corresponding to the frame index of the motion sequence) at which termination is triggered and calculate the duration for each sample. Then, for each sample, we set a threshold at 50% of its original length, and if the duration of the refined motion sequence exceeds this threshold, it is considered a successful sample. The success rate is then defined as the ratio of successful samples to the total number of samples, with our conversion achieving a success rate of 51.4%. Regarding shot cuts, if you mean \\\"videos that have abrupt transitions between different scenes or camera angles\\\", we apply a human tracking algorithm to ensure consistency in human motions across each video, as detailed in the data construction section.\\n\\n**Q2. I also noticed reviewer qpUP has similar concerns on static motions. I think the data is the pose, not motion or animation. Our animation community does not recognize the pose as motion. When the data distribution includes such data, it will be harmful to your generation results. 
I have also provided the blog by Daniel in my previous review.**\\n\\n- Most importantly, our dataset comprises over 800K dynamic motion sequences, making it at least 10 times larger than existing benchmarks. This scale ensures that even if all static poses were removed from MotionBase, the dataset's overall contribution would remain largely unaffected.\\n\\n- We argue that static poses provide valuable additional knowledge about human activity, similar to how static images contribute to video understanding or single-frame object bounding boxes aid video tracking. The effectiveness of static data is validated in the ablation study presented in Table 4. In fact, we suppose static poses are particularly beneficial for our LLM-based decoder, as they enhance its understanding of positional and rotational relationships among joints. This is especially important when motion vocabulary is learned from scratch during pretraining. Furthermore, we ensure the reliability of our LLM-generated motion sequences by using purely dynamic motion data for instruction tuning. \\n\\n- Of course, we fully agree with both you and Daniel's blog that dynamic data plays a more critical role in motion learning. This is precisely why we prioritize collecting such data from human-related videos.\"}", "{\"comment\": \"6. We understand that you may have a different interpretation of the experimental results. However, we respectfully suggest that this difference in interpretation should not overshadow the significance of our contributions. We retrain a T2M-GPT model with a parameter count comparable to GPT-2 medium on MotionBase. Nevertheless, the results indicate that it struggles to match the performance of the GPT-2 architecture. We believe that large motion models based on decoder-only LLMs, which jointly train text tokens and motion tokens, achieve better text-motion semantic alignment and stronger motion generation capabilities.\\n\\n| Method | #Param. 
| R@1 \\u2191 | R@3 \\u2191 | FID \\u2193 | MMDist \\u2193 |\\n|--------|---------|--------|--------|---------|-----------|\\n| Real | - | 0.290 | 0.563 | 0.011 | 3.480 |\\n| T2M-GPT | 380M | 0.243 | 0.504 | 1.909 | 4.593 |\\n| GPT-2 Medium | 355M | 0.264 | 0.542 | 0.516 | 4.007 |\"}", "{\"comment\": \"Thank you for your feedback. We understand your concerns and want to clarify that we uploaded a supplementary file to OpenReview during the first-round review. Additionally, we provided a larger version of the demos via an anonymous Dropbox link: https://www.dropbox.com/scl/fo/6w7yuhun8mpuz9yuaz5ej/ALUo_dOLYIyNgOZ3Ll85JCI?rlkey=yecydfnno43b5o602ae2fmr0w&st=rq97kdcw&dl=0 , where the modification timestamp is manifest and your visit is not tracked.\\n\\nWe sincerely hope these efforts adequately address your concerns, and we remain open to further feedback.\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's weakness 5\", \"comment\": \"**W5. Motivation. The motivation for introducing LLM is not clear. The method misses a basic baseline of a transformer (like in T2M-GPT, CVPR-23) for comparison. Besides, it is also not clear whether the usage of pre-trained parameters of LLMs or not. Whether the fine-tuning method is LoRA or not is also not well discussed. Therefore, it is not technically sound.**\\n\\nWe incorporate LLM to enhance language understanding, a critical aspect of text-to-motion generation, which relies on strong comprehension of textual nuances. LLMs, with their proven capabilities in natural language processing, enable our model to better capture the context and subtleties of text descriptions. As a result, this leads to the generation of more accurate and contextually appropriate motion sequences. The table compares our method to baseline T2M-GPT (CVPR 2023) on HumanML3D. Our results show significant improvements. These results highlight the advantages of leveraging LLMs. 
We use the pre-trained LLM parameters, which offer greater capacity and superior language understanding so that our model is able to learn more complex mappings between language and motion. In Table 9 of Appendix C.3, we compare experiments with and without pre-trained parameters, showing that fine-tuned models using pre-trained parameters consistently outperform models trained from scratch. During training, we utilize full fine-tuning. While we also experimented with LoRA, it struggled to achieve competitive results. We attribute this limitation to the introduction of new motion tokens, which demand substantial parameter adjustments. LoRA, with its constrained fine-tuning approach, appears less equipped to handle these requirements effectively.\\n\\n| Training method | R@1 \\u2191 | R@3 \\u2191 | FID \\u2193 | MMDist \\u2193 |\\n|-----------------|------:|------:|------:|----------:|\\n| Real | 0.290 | 0.563 | 0.011 | 3.480 |\\n| LoRA | 0.249 | 0.520 | 1.896 | 3.869 |\\n| Full Param | 0.264 | 0.542 | 0.516 | 4.007 |\"}", "{\"title\": \"Rebuttal Summary with WfZ5\", \"comment\": [\"# Rebuttal Summary with WfZ5\", \"---\", \"Given the lengthy and fragmented rebuttal history with WfZ5, we provide a brief summary to facilitate reading. We hope the AC, Senior AC, and PC will carefully review this summary before making a final decision. In total, WfZ5 raised 51 questions, including 40 detailed ones and 11 of less importance.\", \"---\", \"## **2-round questions of \\\"External links are not allowed\\\"**\", \"**Q1**: External links are not permitted as they cannot be tracked fairly. Why not use OpenReview's Supp?\", \"**A1**: OpenReview's 100MB file size limit prevents us from including all materials in Supp. We also confirmed ICLR guidelines do not forbid anonymous links.\", \"**Q2**: So many motion generation papers submitted to ICLR/SIGGRAPH using Supp. I don't know what your barrier is. You can concatenate them into a video demo. Is it very challenging? 
External links are not appropriate.\", \"**A2**: This is the first time we\\u2019ve been informed that external links for codes and resources are not only discouraged but prohibited, despite their common use in ICLR 2025 submissions.\", \"**Q3**: NO FURTHER REPLY\", \"---\", \"## **8-round questions about demos.**\", \"**Q1**: The work should include demo videos.\", \"**A1**: We provide an anonymous website and cloud link.\", \"**Q2**: The videos on the website are not user-friendly. Please add them to the Supp.\", \"**A2**: Our videos exceed the 100MB limit, so we offer an anonymous cloud link for easy access.\", \"**Q3**: Many motion papers use the Supp. Why can\\u2019t you concatenate them into a single video?\", \"**A3**: We provided a zip file with all demos to the Supp.\", \"**Q4**: Could you concatenate the videos?\", \"**A4**: We concatenated the demos and updated the supp file.\", \"**Q5**: The resolution is too low; I can\\u2019t see fine-grained motions.\", \"**A5**: We provided higher-resolution videos.\", \"**Q6**: I do not know why it cannot be downloaded now. I will try it later.\", \"**A6**: We verified successful downloads worldwide via OpenReview.\", \"**Q7**: I am not clear what core scientific problem you resolve with the robot demos.\", \"**A7**: The robot videos demonstrate potential applications, not a core contribution.\", \"**Q8**: The demos seem straightforward with no technical innovation. If you emphasize them, reviewers might treat them as a major contribution. It\\u2019s better to include them in the appendix or show the scientific problem solved.\", \"**A8**: The robot demos are in the supp file only. We never claimed them as a core contribution. 
These comments contradict our statements.\", \"**Q9**: NO FURTHER REPLY\", \"---\", \"## **6-round Questions about \\\"Text annotations\\\"**\", \"**Q1**: Has hierarchical text been used in model training?\", \"**A1**: Yes, hierarchical text was used during pretraining, and relevant experiments were provided.\", \"**Q2**: Has the text data been verified?\", \"**A2**: Yes, using two methods: (1) a text-only method and (2) a vision-text method.\", \"**Q3**: How can a text-only method fairly evaluate alignment with motion?\", \"**A3**: As stated in A2, we have two evaluation methods: Firstly, we independently use text. More importantly, we use both vision and text.\", \"**Q4**: LLMs hallucinate without visual input. Should the method be improved?\", \"**A4**: We don\\u2019t rely solely on text-only scoring. Both text-only and vision-text methods are used, as explained in A1 and A2.\", \"**Q5**: You stated \\\"Firstly, we independently use...\\\". I identify the evaluation is text-only according to \\\"independently\\\".\", \"**A5**: We clearly outlined two methods using \\\"1... 2...\\\" in A1 and \\\"Firstly... More importantly...\\\" in A2.\", \"**Q6**: I asked the evaluation protocol previously and got the reply \\\"Firstly, we independently assess the quality of the text description.\\\". According to the word \\\"independently\\\", I treat the evaluation as text only. However, after checking the further response, I noticed that it has visual grounds. As a result, I clarified why I treated it as text only in the last reply. 
The authors asked me to revisit the response, I do not know what I missed.\", \"**A6**: We don't know why the reviewer only focused on the word \\\"independently\\\" while overlooking the initial word, \\\"Firstly.\\\"\", \"**Q7**: NO FURTHER REPLY\"]}", "{\"summary\": \"The paper aims to answer the research question of \\\"can a large motion model be a promising direction for motion generation?\\\", and designs a data collection pipeline which collects multi-modal information including RGB, depth, and bounding boxes in multi-person settings.\\nIn addition, the paper introduces a method to expand the codebook capacity, a lookup-free approach for motion tokenization, for better motion representation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"This paper presents the first large-scale dataset specifically designed for motion generation, featuring richly multi-modal data accompanied by hierarchical text descriptions. MotionBase, the dataset introduced, is expected to be highly beneficial for the advancement of future research in motion generation and to serve as a valuable resource for the computer vision community. The dataset offers researchers access to an extensive collection of motion data, enabling more robust analysis and development of large motion models.\", \"weaknesses\": \"I have minor concerns on this paper.\\n\\nThe layout of the paper is somewhat challenging for readers. It contains numerous messages and analyses, requiring readers to scroll up and down frequently to locate referenced tables. Additionally, due to page limitations, many explanations are placed in the Appendix. Tables and figures are positioned mid-page without aligning well with the paragraph height, disrupting the flow.\\n\\nThe following paper was the first to introduce the concepts of partitioning body parts and 2D quantization, making it a valuable reference. (Pi, H., Peng, S., Yang, M., Zhou, X., & Bao, H. (2023). 
Hierarchical generation of human-object interactions with diffusion probabilistic models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 15061-15073.)\", \"minor_issues_and_typos\": \"\", \"appendix_d\": \"\\\"quantitative results\\\" should be \\\"qualitative results.\\\"\", \"figure_4\": \"It may improve clarity to add a y-axis label.\", \"questions\": \"In Table 4, what is the ratio between synthetic, static, and real data? It can be briefly explained in the table caption.\\n\\nI have a concern about the quality of occlusion cases and blurred images. How do the authors recognize whether the motion is blurred or occluded?\\nIn multi-person settings, occlusion might be very common.\\n\\nSince this is a dataset paper, I expect more detailed explanations and instructions for the benchmark to be released upon the paper's acceptance.\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Dear reviewer WfZ5**,\\n\\nThank you for your unique and valuable feedback. Inspired by you, we believe it would be beneficial to include our response in the top general response, allowing other reviewers and the AC to easily access it given the lengthy reply history. After this extensive rebuttal journey, we sincerely hope to reach a final conclusion.\\n\\n1. To answer your first question: you mention \\\"how can it be generalized to the motion you captured?\\\". However, we do not claim a generalizable PHC with a 100% success rate from the beginning. Instead, our goal is to present PHC as an additional technique compared to MotionX [1] and Holistic-Motion2D [2], which also rely on pretrained models to extract motions from videos without any physically plausible refinement. 
Compared with them, our approach achieves 30% more physically plausible motions, effectively reducing jittering and sliding artifacts. **We believe this makes our dataset a self-standing contribution; otherwise, all other works using pretrained models to extract motion without physically plausible refinement would become of no value.** Considering this, we argue that the ideal method you are expecting is another large topic [3], which obviously falls outside the scope of this paper.\\n2. The details of the static data ablation can be clearly found in L435\\u2013L438 and L1049\\u2013L1059, with the marked title **\\\"TRAIN SET w/o static data\\\"** clearly indicating the relevant settings. The test sets used are MotionBase-test and MotionBase-test w/o static/syn, as described in L1055. In both test sets, using the complete MotionBase dataset significantly outperforms the \\\"w/o static data\\\" configuration. We do not know why you cannot see it. We also note that no other reviewers, apart from you, have raised similar questions, suggesting that the provided setting details should already be sufficiently clear.\\n\\nIn addition to this, for the static pose:\\n - We show its effectiveness through our experiments in Table 4 and Table 8.\\n - We emphasize once again that static data serves as a bonus to our dataset contribution. MotionBase already includes 800K dynamic motion samples\\u201410 times more than previously available datasets.\\n - We note that the reviewer has overlooked several important references despite claiming to be familiar with the motion and LMM communities. To assist with the reviewer's evaluation, we kindly provide some references for better judgment. Reference [4], for example, has already demonstrated that static data can smoothly produce whole-body and hand poses in a video. Many well-known works in motion and keypoint estimation [5][6][7] heavily rely on backbones pretrained on static poses (e.g., COCO) or directly incorporate static poses during training. 
**These works consistently validate that static pose data contributes positively to smooth generation and does no harm**. We strongly recommend the reviewer consult these officially published and well-known works and their repositories, rather than relying on unofficial blogs as evidence. As a specialist in LMMs, the reviewer can readily appreciate the paradigm of using static and dynamic data for pretraining and dynamic data for fine-tuning. \\n - We kindly point out that the reviewer\u2019s statement is not entirely accurate. No other reviewers except you have raised further concerns about static data following our earlier responses.\\n3. We believe the supplementary material is accessible, as we successfully downloaded the file from OpenReview using various IP addresses across different locations worldwide. All attempts were successful, except for your instance. \\n4. We are genuinely puzzled by this feedback, as the robot demo was only included in the supp file and was never presented as a contribution in the main paper. **It is concerning when a reviewer misrepresents our stated contributions by suggesting**: \\\"I think a better choice is to include it in the appendix\\\" or \\\"If you emphasize this too much, reviewers will treat this as your unique contribution seriously.\\\" **These words definitely go against our statement, which risks misleading the AC or other reviewers, potentially creating misunderstandings about our work.** In fact, we provided the demo to all reviewers, and none of them raised similar concerns.\\n5. **We are quite puzzled that the reviewer has focused on the word \\\"independently\\\" while overlooking the initial word, \\\"Firstly.\\\"** As clearly stated, we have two approaches for evaluation: one is text-only, and the other is vision-text. \\\"Firstly\\\" refers to the first approach, while \\\"More importantly\\\" introduces the second. 
In our response (22 Nov 2024, 21:03), we had already clarified this by using clearer numerical markers (1..., 2...) to enhance readability. Notably, this explanation was also shared with reviewers qpUP and pt95, neither of whom expressed any misunderstanding.\"}", "{\"comment\": \"Dear reviewer,\\n\\nThank you again for your valuable review. We have responded to your question and concern. We hope to hear back from you! If you have any unresolved concerns, please feel free to let us know! Or if our responses have addressed your questions, we would be grateful if you could consider adjusting your score accordingly.\\n\\nBest,\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer pT95,\\n\\nSincere thanks for your reviewing time. We would like to kindly remind you that **the rebuttal period has been extended to December 2nd, giving us an additional week to address your concerns.** Please feel free to reach out if you have any further questions, and **we guarantee a prompt response.**\\n\\nConsidering the concerns you mentioned about static data, we attach this part at the bottom. If you have any additional questions, please let us know. We have provided over 20 pages of rebuttal material, which has been a time-intensive effort. We greatly value this opportunity to discuss and refine our work, and we sincerely hope we are moving in the right direction.\\n\\nBest regards,\\nAuthors\\n\\n\\n**Responses to static data concern:**\\n- Most importantly, our dataset comprises over 800K dynamic motion sequences, making it at least 10 times larger than existing benchmarks. This scale ensures that even if all static poses were removed from MotionBase, the dataset's overall contribution would remain largely unaffected.\\n- We argue that static poses provide valuable additional knowledge about human activity, similar to how static images contribute to video understanding or single-frame object bounding boxes aid video tracking. 
The effectiveness of static data is validated in the ablation studies presented in Table 4 and Table 8 in the appendix, where we test on datasets without static data. In fact, we suppose static poses are particularly beneficial for our LLM-based decoder, as they enhance its understanding of positional and rotational relationships among joints. This is especially important when the motion vocabulary is learned from scratch during pretraining.\\n- The static data will not result in \\\"over-smoothing\\\" because: \\n - During fine-tuning, we design specific prompts to guide the model's generation. For static poses, we include additional language prompts, such as 'keep the action still,' to help the model distinguish between generating a static pose and a dynamic motion. This approach leverages the well-known ability of large generation models based on LLMs [1,2,3,4,5] to produce diverse outputs through carefully crafted prompts. In addition, no negative effects are observed when combining images and videos, bounding boxes and segmentation masks, 3D skeleton poses and sequences, or vision and audio during training. We believe this principle should extend to motion as well.\\n - Visualization. To better support our conclusion, in the visualization appendix, we present generation results from models trained both with and without additional language prompts. The clear differences demonstrate that **with additional language prompts, the model can distinguish between different motion distributions, thus not affecting the generation of dynamic motions.**\\n - Many common training strategies, such as weighted sampling, weighted gradients, and progressively increasing the ratio of dynamic motion, can improve training stability, ensuring the model avoids \\\"over-smoothing\\\".\"}", "{\"title\": \"reply\", \"comment\": \"Thanks for your discussion. I have carefully checked the reviews from other reviewers and the latest response. 
I think we still need some discussions.\\n\\nBefore stating my comments, I would like to clarify that significant revisions related to contributions are not officially supported, according to the review guidance. The review should be based on the original submission, not the major revision. Although these issues exist, I would still like to discuss them further. Besides, external links are not permitted because they will not be tracked by the fair reviewing process. Why not use the supp.? \\n\\n1. For your motion-capturing process, what is the policy you use? PHC? RFC? ASE? For the statement \u201cIf a refined motion maintains balance in the simulated environment for a specific duration, it can be regarded as high-quality with no severe issues \u2026\u201d, what is the duration, and how do you compute the success rate? What is your success rate? Besides, how do you deal with videos with shot cuts? \\n2. I also noticed reviewer `qpUP` has similar concerns on static motions. I think the data is the pose, not motion or animation. Our animation community does not recognize the pose as motion. When the data distribution includes such data, it will be harmful to your generation results. I have also provided the blog by Daniel in my previous review. \\n3. The video demos on the website are not convenient to use. Please provide them in supp..\\n4. For the Gemini-score from 1 to 5, how can the method be fair enough to evaluate whether the texts are aligned with motion? With motion input?\\n5. For the hierarchical text issue, is your evaluator trained with hierarchical text? Why is the R-P of generated results higher than GT?\\n6. Why are your results of `Hierarchical` and `Full Param` not the same in the response? I do not see the result of T2M-GPT as a baseline in the response.\\n7. I am still a bit curious about citing reference [1]. Which part of reference [1] did the authors refer to?\\n\\nBesides, I did not see any revision in the manuscript. 
(w/o any highlights)\\n\\nUp to now, I will temporarily keep my rating and wait for replies from other reviewers. If any point of my review or statement is wrong, please directly point it out. This will help us to clarify issues.\"}", "{\"title\": \"Official Comment to Reviewer qpUP's weakness 2-3\", \"comment\": \"**W2. Could you clarify the video collection process? Many web videos are lengthy and include various camera shots. Which film shot boundary detection algorithm are you using? Additionally, how many frames are input into the LLM to generate the text? Providing more details on these aspects would be helpful.**\\n\\nSure. Due to space limitations, we could not provide a detailed explanation of our data curation process for web videos before. Our videos are sourced from two main resources: open-source datasets like InternVid and self-collected web videos. For these videos, we use a pretrained 2D human keypoint detection model to filter out videos without visible human activities. Additionally, rule-based methods are applied to ensure the human bounding box occupies a significant portion of the frame, making the human movement clearly visible. In addition, videos containing only partially visible humans are removed to ensure the quality of potential motion data extracted from the videos. By doing this, we ensure the quality of the remaining human-related videos.\\n\\nFor the shot boundary problem you are concerned about, here is a brief introduction to our solution.\\n1. For videos under 30 seconds or those with explicit temporal boundaries, we directly use the video clip or provided boundaries to segment the video into shorter clips.\\n2. For videos longer than 30 seconds, we use a scene detection model to roughly divide the video into smaller segments. \\n3. 
For each segment, we adopt the following steps to further slice it into shorter clips:\\n - At the beginning, the human with the largest bounding box is selected as the anchor, and we track their trajectory throughout the segment.\\n - When the trajectory is interrupted, the start and end times are marked as the boundaries of a new clip.\\n - The process repeats by identifying the next largest visible human in subsequent frames and tracking their trajectory.\\n - This iterative process continues until no humans are visible in the video.\\n - Clips without any visible humans are filtered out.\\n4. After these steps, if a clip is still longer than 60 seconds, we randomly slice it into several sub-clips, ensuring each of them is shorter than one minute.\\nFor your last question, we use 4 frames to represent one second, following most previous works; the input length of the LLM is varied to improve training performance.\\n\\n**W3. The experiments with the static data ablation study are not fair. Does the validation set contain static data and synthetic data?**\\n\\nTo address your concerns, we conducted experiments on a test set filtered to exclude static and synthetic data. The results in Table 2 indicate that static and synthetic data still provide significant value. By incorporating additional static semantic prompts during model training, we enable the model to effectively distinguish between dynamic and static actions. 
This allows us to leverage a larger volume of data to establish a stronger model prior, ultimately enhancing overall model performance.\\n| Train Set | R@1 | R@3 | FID | MMDist |\\n|-------------------|-------|-------|--------|---------|\\n| Real | 0.196 | 0.474 | 0.006 | 1.647 |\\n| w/o static & syn | 0.167 | 0.396 | 1.740 | 2.323 |\\n| w/o static | 0.166 | 0.393 | 1.780 | 2.356 |\\n| MotionBase | **0.168** | **0.399** | **1.614** | **2.300** |\"}", "{\"title\": \"Official Comment to Reviewer phNv's question.\", \"comment\": \"Here are our responses to questions.\\n\\n**Q2. Regarding the automated evaluation metrics referenced by the authors, it is also noteworthy that the R-precision scores are relatively low on the proposed large-scale MotionBase dataset, potentially weakening the benchmarking results. Implementing text-motion retrieval models like TMR [3] may provide a more accurate evaluation of model performance.**\\n\\nWe also noticed the unsatisfactory R-precision results. To address this, we have trained a more robust retrieval model on MotionBase and re-evaluated the results based on this model. The new evaluation results are updated in our revised paper, which demonstrate the benchmarking conclusions more clearly. Due to time constraints, we have not yet conducted evaluations using TMR or other more recent retrieval models, but we plan to perform these experiments in future work.\\n\\nWe hope the above clarifications address your questions, and we kindly ask if there are additional concerns we can address to further improve your support for the paper. Thank you again for your valuable suggestions.\"}", "{\"title\": \"Official Comment to Reviewer qpUP's weakness 2\", \"comment\": \"**W2. The motion quality has not been validated. 
Neither the estimated motions nor the texts generated by the LLM have been checked manually or by any algorithm.**\\n\\nIn fact, we adopt several strategies to evaluate (**and keep improving**) both the motions and the texts of MotionBase.\\n\\nFor the motions, our visualization platform allows us to efficiently sample and manually check a large number of samples. \\n\\nIn addition, we employ the following steps to ensure the quality of collected motions: \\n1. RL-based Refinement. We first train an RL-based policy $\\\\pi_{\\\\rm refine}$ to elaborate and refine raw motions, ensuring they adhere to physical laws (e.g., maintaining balance) and appear more realistic. Specifically, this policy takes raw motion sequences as input, treating them as target poses, and generates new motion sequences that satisfy physical laws in a simulated environment, eliminating issues like jittering and foot-sliding.\\n2. Empirical Assessment. While effective, the RL-based policy $\\\\pi_{\\\\rm refine}$ may struggle with drastic movements in target poses, leading to slipping within the simulation. For such cases, we adopt the following empirical considerations: If a refined motion maintains balance in the simulated environment for a specific duration, it can generally be regarded as high-quality with no significant jittering or sliding. If a motion fails to maintain balance from the start, it is considered a bad sample and discarded.\\n 3. Motion Model Refinement. For motions that pass the previous step, we still utilize a well-pretrained motion model like RoHM [1] for further refinement. Additionally, we experiment with more powerful motion reconstruction models such as WHAM [2]. Based on our experience, existing motion models are now able to effectively enhance the quality of raw motions.\\n\\nFor the texts, we apply two approaches to evaluate the quality of textual motion descriptions:\\n1. 
we sample 10,000 motion descriptions from our MotionBase generated by Gemini, along with 10,000 descriptions each from MotionX and HumanML3D. These descriptions are scored using GPT-4o, which evaluates each description on a scale of 1 to 5 based on predefined criteria focused on clarity, detail, and accuracy. The scoring criteria are as follows:\\n - Score 1: The description is vague, lacks specific details about body movement, and contains confusing or unclear expressions. \\n - Score 2: The description covers some movement and posture content but lacks sufficient detail or accuracy and includes unclear expressions.\\n - Score 3: The description clearly outlines movement and posture, providing basic details but lacking in-depth analysis.\\n - Score 4: The description is accurate and detailed, effectively conveying the movement process and changes in body posture, with some analytical depth.\\n - Score 5: The description is precise, comprehensive, and fluent, offering in-depth analysis of every detail of the movement and posture, demonstrating a high level of professionalism and clarity.\\n\\nWe then calculate the average scores for each dataset: MotionBase (3.837), MotionX (1.386), and HumanML3D (1.703). These scores suggest that the text descriptions of MotionBase are generally more detailed and accurate compared to MotionX and HumanML3D.\\n2. To further evaluate the quality of the generated texts for vision-based motions, we prompt Gemini-pro with text descriptions and corresponding rendered motions. Our primary focus is on the accuracy with which the text descriptions reflect the content of the visual cues. To assess this, we present 500 rendered samples with their corresponding text descriptions from each dataset to Gemini, requesting a score based on the criteria we established earlier. The evaluation results provide valuable insights. The texts of MotionX and HumanML3D receive average scores of 2.25 and 3.08, respectively. 
Notably, MotionBase achieves a significantly higher average score of 3.82, outperforming the other two datasets.\\n\\nTo improve the quality of text descriptions, we adopt the following method. Specifically, our latest motion descriptions are structured into three hierarchical and complementary levels: an overall description of the entire body, a detailed part-level description of each body part, and a rule-based description of each joint's relative movement derived from joint positions (e.g., \\\"The left hand is wide apart from the right hand. It is on the ground, the left upper arm, both legs and the left forearm are aligned horizontally ...\\\"). We condition GPT-4o with two levels of text description while using the remaining level as the target to be refined. GPT-4o then assesses and refines the textual content of the target level, enabling the generation of more precise and reliable text descriptions.\\n\\n[1] RoHM: Robust Human Motion Reconstruction via Diffusion\\n\\n[2] WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion\"}", "{\"comment\": \"Thanks for the response. Before checking your latest response, I still suggest you move the website content into the supp.. There are so many motion generation papers submitted to ICLR/SIGGRAPH using the supp.. I don't know what your barrier is. You can concatenate them into a video demo. Is it very challenging? I would like to state that external links are not appropriate. This is because we cannot track external links without a third-party timestamp to make sure no new material is updated after my previous response to you. Besides, I am not sure whether my visit will be tracked by you. I hope the authors can understand my points. According to the ICLR review pipeline, I have the right to ignore these major changes. However, as I think the review process is to reduce concerns, I choose to check them. If these files on the website are changed, it might make my statement inconsistent with the material you provided. 
Thanks for your understanding.\"}", "{\"comment\": \"Dear reviewer,\\n\\nThank you again for your valuable review. We have responded to your every question and concern. We hope to hear back from you! If you have any unresolved concerns, please feel free to let us know! Or if our responses have addressed your questions, we would be grateful if you could consider adjusting your score accordingly.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Official Comment to Reviewer qpUP's questions\", \"comment\": \"**Q1. The FID in Table 6 is so weird. The FID of reconstruction is 1.76 while the generation FID in Table 3 is 0.166.**\\n\\nThere might be a misunderstanding regarding the results. The FID of 1.76 in Table 6 corresponds to using LFQ as the motion tokenizer, whereas in Table 3, we use VQ as the tokenizer, as explained in the implementation details. To clarify further, as shown in the first row of Table 1, VQ achieves an FID of 0.078 on HumanML3D, which aligns with the results in Table 3. Due to time constraints and limited resources, we were unable to fully validate the effectiveness of combining LFQ and LLM. The inclusion of LFQ in our study is mainly intended as a preliminary exploration to observe the potential of alternative tokenization methods for motion representation. Even so, we plan to conduct a more comprehensive investigation into this direction in future work.\\n\\n**Q2. What do the authors get from scaling experiments? Did the authors see any hope for emergence? The shown examples are common cases that can also be observed in other motion generation work.**\\n\\nOur experiments reveal a clear trend: larger motion models and more extensive training data consistently improve motion generation performance. This finding contrasts with some previous works, such as MotionGPT [1], which reports performance drops with larger T5 models. 
Furthermore, due to limitations in data scale, no prior research has demonstrated a scaling law in human motion understanding, where expanding the dataset with a massive amount of motion data leads to consistent improvements. In contrast, our results offer hope that both increasing model parameters and expanding the dataset contribute to enhanced generalization. For example, models trained on MotionBase-1.0 (with 1 million samples) outperform those trained on MotionBase-0.5 (with 500,000 samples). Additionally, larger motion models built on more powerful language models (e.g., Llama2-13B vs. GPT2-medium) consistently achieve higher R-precision and lower FID scores. Beyond the common results, our model performs well on **out-of-distribution** and unseen commands, highlighting the value of scalable data and model architectures. We also present motion examples that are ready for deployment in both **simulated and real-world** environments.\\n\\n[1] MotionGPT: Human Motion as a Foreign Language\\n\\n**Q3. Do the supervised labels include only motion tokens, or do they contain both text and motion tokens?**\\n\\nThe supervised labels include both text and motion tokens. As shown in Table 10 in Appendix C.4, we compare the performance of two supervised training methods: one using only motion tokens and the other incorporating both text and motion tokens. The results demonstrate that incorporating both significantly enhances performance. We hypothesize that this improvement is due to the potential issue of catastrophic forgetting in the language model when supervision is limited to motion tokens alone. Including text tokens helps maintain the language model's capabilities, leading to better overall performance.\\n\\n**Q4. Did the authors conduct zero-shot text testing? 
For instance, could the largest model generate motions for descriptions like, \\\"The old man with a broken leg is walking forward slowly with a cane\\\"?**\\n\\nYes, we have conducted **out-of-domain evaluations** on 90K unseen motions to assess the zero-shot capabilities of our model. As shown in Table 5, our model, powered by the large-scale MotionBase dataset, demonstrates significantly better performance compared to models trained on smaller datasets, such as HumanML3D. Additionally, we provide a set of generation examples on our visualization platform to illustrate the model's capabilities. However, the specific example you mentioned may not be directly applicable to current motion models. This is because existing motion data predominantly represents individuals with two healthy feet. In this work, motion data is further refined using a physical-law-based RL policy to ensure balance and realism in the generated motions. Therefore, it is hardly possible to generate motions of \\\"an old man with a broken leg\\\".\\n\\nWe hope our clarifications address your questions, and we kindly ask if there are additional concerns we can address to further improve your support for the paper. Thank you again for your valuable suggestions.\"}", "{\"comment\": [\"Thanks for your answer. After checking your response, I have some concerns that are still unclear to me.\", \"For PHC, why does your design use a single primitive rather than scaling their number? Besides, if you directly use the PHC trained on AMASS, it is quite hard for the model to track in-the-wild motions. Why not train a new PHC on your dataset? Besides, I still think the success rate is low, which might result from an inappropriate choice.\", \"I am a bit sorry that I still cannot understand why static motions enhance the quality. If you read Daniel's blog, you will find that the provided result is not self-standing. 
These motions will introduce noise into the dataset.\", \"Although you concatenate the motions and the original videos, the resolution is still low. Thus, I cannot see the fine-grained motions.\", \"For the claim of applying motion on the robot, it is not clear to me what core scientific problem you resolve here. From your response, the efforts seem to be engineering.\", \"You stated \\\"Firstly, we independently assess the quality of the text description.\\\". I identified the evaluation as text-only based on \\\"independently\\\".\", \"Is there any evidence to support that a frozen CLIP is not effective? CLIP mainly contributes to the T-M alignment, which is not highly related to the quality itself. I am currently not convinced by this clarification.\", \"I hope these issues can be resolved. If anything is not objective, you can directly point it out, because the review process mainly focuses on the research quality itself.\", \"Good luck!\"]}", "{\"summary\": \"In this paper, the authors introduce MotionBase, a large-scale human motion generation benchmark featuring over one million motion sequences, a fifteen-fold increase over previous datasets, with multimodal data and detailed text descriptions. The authors demonstrate that scaling both data and model size significantly improves motion model performance, particularly with synthetic data and pseudo labels to reduce data acquisition costs. The authors also propose a novel 2D lookup-free motion quantization approach to enhance motion information retention and expand codebook capacity. Experimental results on various datasets validate the efficacy of their approach, with notable performance on out-of-domain data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper introduces MotionBase, a large-scale dataset comprising over one million human motion sequences, designed to support more comprehensive training and evaluation of motion generation models.\\n\\n2. 
The paper identifies key factors influencing the effectiveness of large motion models, underscoring the importance of scaling both data and model size.\\n\\n3. This paper proposes a 2D lookup-free motion quantization method that enhances motion representation while retaining essential information, thereby contributing to improved model performance.\", \"weaknesses\": \"1. While MotionBase is introduced as a benchmark with the potential to enhance motion model performance, the paper lacks a thorough comparative analysis across varied methods to demonstrate MotionBase's influence on model efficacy. Additional baselines and a broader selection of models trained on MotionBase would more robustly substantiate its claimed advantages.\\n\\n2. The paper does not include visual comparisons of motions generated by models trained on the baseline Motion-X dataset versus those trained on the proposed MotionBase dataset.\\n\\n3. Including the ground truth R-Precision and FID scores in relevant tables would strengthen the presentation and transparency of the results.\\n\\n4. The paper would benefit from dynamic visualizations within the qualitative analysis of the motions in the proposed datasets, which could provide a clearer and more engaging illustration of the dataset's scope and quality.\", \"questions\": \"1. It will be interesting to see the scalability of different architectures. Have the authors explored fine-tuning existing methods, such as MotionGPT [1] or MoMask [2], with larger parameter settings on the MotionBase dataset?\\n\\n2. Regarding the automated evaluation metrics referenced by the authors, it is also noteworthy that the R-precision scores are relatively low on the proposed large-scale MotionBase dataset, potentially weakening the benchmarking results. 
Implementing text-motion retrieval models like TMR [3] may provide a more accurate evaluation of model performance.\\n\\n[1] MotionGPT: Human Motion as a Foreign Language.\\n\\n[2] MoMask: Generative Masked Modeling of 3D Human Motions.\\n\\n[3] TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"Thanks for the reply from the authors.\", \"I found that some of our discussions are not aligned. Thus I would like to specify.\", \"For the PHC part, my main concern is about the retraining of the PHC. If you only used the pre-trained PHC from AMASS, how can it be generalized to the motion you captured? If you assume your data is shifted from AMASS, the tracked motions are not iid. How can you ensure the success rate? If you assume your data is not shifted from AMASS, the dataset diversity and contribution will be weakened. I hope the authors can specify this. If it is the best choice to apply this choice, I will treat it as an ideal choice.\", \"I do agree that the blog is not peer-reviewed, which cannot serve an important role in the discussion. The motivation I would like to refer to this is to clarify my points on the static data. I note that other reviewers have similar concerns. The authors stated their experiments show that static motion works. I asked for the ablation setting in the previous reply. However, I did not note the detailed setting. Is it trained with static data while the test set does not include static data? Did I miss something? Besides, I hope the authors do not assume I do not know much about LLM/VLMs. Although I come from the animation community, I have several publications related to LLM/VLMs and have even tuned related models myself. Therefore, I am very clear about these technical details. 
The mentioned works are most related to VL understanding, which is mainly not related to motion generation. As a result, the evidence seems not convincing enough to me up to now.\", \"For the updated supp., I do not know why it cannot be downloaded now. I will try it later.\", \"I know the robot demos are used for showing the demo. However, the shown results seem to be a straightforward application, without additional technical innovation. If you emphasize this too much, reviewers will treat this as your unique contribution seriously. Otherwise, it will be questioned. I think a better choice is to include it in the appendix. If authors would like to claim the contribution to this, a better choice is to show the key scientific problem you resolved.\", \"I think you missed my meaning regarding the Gemini evaluation. I asked about the evaluation protocol previously and got the reply \\\"Firstly, we independently assess the quality of the text description.\\\". According to the word \\\"independently\\\", I treat the evaluation as text only. However, after checking the further response, I noticed that it has visual grounds. As a result, I clarified why I treated it as text only in the last reply. The authors asked me to revisit the response, but I do not know what I missed.\", \"Why does T2M-GPT struggle to match the performance of the GPT-2 architecture? The pretraining process of GPT-2 or the setting of CLIP?\", \"I do hope the discussions are aligned between authors and reviewers. If any points are not clear, please feel free to discuss more. Good luck!\"]}
Refer to **Daniel's blog**.\", \"**A2**: (1) Our dataset contains 400K motions in the initial version, 800K in the latest version, 5 times and 10 times larger than the previous ones, respectively. (2) Results on Table 4 show effectiveness of static data. (3) Dynamic motion tuning ensures high-quality generation.\", \"**Q3**: Once you introduce the static ones, it will make your motion over-smoothing. Could you please clarify this?\", \"**A3**: Strategies like (1) prompt design, (2) weighted sampling, and (3) progressive training help prevent over-smoothing. We provided cases to show the effectiveness of prompt design to avoid \\\"over-smoothing\\\" in Supp.\", \"**Q4**: I still cannot understand why static motions enhance the quality. **If you read Daniel's blog**, you will find that the provided result is not self-standing. These motions will introduce noise into the dataset.\", \"**A4**: Informal blogs are not valid evidence. Static data is an open question, not a conclusion. Ablations comparing training \\\"with\\\" and \\\"without static data\\\" demonstrate its effectiveness.\", \"**Q5**: I asked for ablation settings but missed the details.\", \"**A5**: Details are clearly stated in L435\\u2013L438 and L1049\\u2013L1059 under \\\"**TRAIN SET w/o static data.**\\\"\", \"**Q6**: I think this is not a suggested choice to use static data in the animation community.\", \"**A6**: We\\u2019ve provided experiments, codes, and checkpoints proving effectiveness. Subjective conclusions alone are unconvincing. Besides, our datasets contain 400K motions in the original version and 800K in the latest one.\", \"**Q7**:Expanding from 400K to 800K is a major revision and is not worthy of being considered.\", \"**A7**: Based on your feedback, do you believe the paper would be improved by removing all static data, experiments and discussion? 
Doesn\\u2019t this suggestion seem somewhat unreasonable?\", \"**A7**: Do you recognize the contribution of our dynamic data, no matter 400K or 800K?\", \"**A7**: There is a logical contradiction in the reviewer's opinion: The more harmful you believe static data is, the higher-quality dynamic data should be. Otherwise, how would our results improve?\", \"**Q8**: NO FURTHER REPLY\", \"---\", \"## **8-round questions about \\\"Data Quality and PHC\\\"**\", \"**Q1**: The data contribution is limited. The videos come from InternViD and WebVid.\", \"**A1**: We don\\u2019t use raw videos directly. Instead, we process them through a rigorous framework: (1) 2-step video selection, (2) 4-step boundary detection, (3) 3-step occlusion removal, and (4) 2-step single-frame processing. (5) ...\", \"**Q2**: How is motion quality ensured?\", \"**A2**: (1) An RL policy refines motions to be physically plausible; (2) empirical quality assessment; (3) a pretrained motion model for further refinement.\", \"**Q3**: What RL policy do you use? PHC? RFC? ASE? What is the duration and success rate?\", \"**A3**: We use PHC: 50% of the original duration, 51.4% success rate.\", \"**Q4**: Why is PHC\\u2019s success rate so low, and how do you handle cases like \\\"sit\\\"?\", \"**A4**: Low rates stem from (1) interaction motions like \\\"sit\\\" failing and (2) PHC being trained only on AMASS, limiting generalization.\", \"**Q5**: Why not train a new PHC on your dataset?\", \"**A5**: It\\u2019s illogical to use to-be-refined data to train PHC for refining that data to-be-refined.\", \"**Q6**: If PHC doesn\\u2019t generalize well, how can success be ensured?\", \"**A6**: We never claim 100% success. 
PHC refines 30% of the data, which is still an improvement over prior works with 0%.\", \"**Q7**: **(Repeat of Q5)** Why not train a new PHC on your dataset?\", \"**A7**: Same as A5.\", \"**Q8**: PHC is highlighted as a revision and should be seen as a major change.\", \"**A8**: PHC is a minor step in data refinement, itself a small part of the pipeline. It\\u2019s not a core contribution of our paper, and implementation details are not central to our work. Highlighting it for clarity doesn\\u2019t make it a major revision.\", \"**Q9**: NO FURTHER REPLY\"]}", "{\"comment\": \"Dear reviewers:\\n\\nWe noticed that we haven't received your response to our latest responses, and we're eager to move forward with your feedback. Given the approaching deadline, would it be possible for you to provide your feedback at your earliest convenience? Do our responses answer your questions? We would be grateful for even brief comments. Thank you again for your expertise and consideration.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Official Comment to Reviewer phNv's weaknesses\", \"comment\": \"Thank you for your appreciation of our work, and proposing thoughtful feedback which helps us further improve our work.\\n\\nBefore reading our feedback, we introduce a visualization website to validate the quality of our dataset at http://www.motionbase3d.com. (**Please note: this link may be unstable, and refreshing the page 10~20 times may be required to view the content**). In case the reviewer is unable to access the site, we have also provided an anonymous cloud storage link containing all the examples featured on the website: [click this link](https://www.dropbox.com/scl/fo/6w7yuhun8mpuz9yuaz5ej/ALUo_dOLYIyNgOZ3Ll85JCI?rlkey=yecydfnno43b5o602ae2fmr0w&st=rq97kdcw&dl=0). 
\\nThe platform includes visualized samples from our dataset, physically grounded examples of refined motions, rendered results generated by different motion models, and live demonstrations in both simulated environments and real-world settings (using H1 and G1 UNITREE humanoid robots). We hope this visualization platform addresses the data quality concern reviewers may have. If the reviewer requires additional information or examples, we are more than happy to upload further relevant results upon request. Additionally, our dataset is highly scalable due to the robust data collection pipeline. As of the start of the rebuttal period, we have collected over 1.5 million motion trajectories and 4 million motion-text pairs, which are 1.5 times compared to our first submission. These datasets have undergone three rounds of rigorous data filtering to ensure their quality.\\n\\nHere are our responses to weaknesses.\\n\\n**W1 & Q1. The paper lacks a thorough comparative analysis across varied methods to demonstrate MotionBase's influence on model efficacy. Additional baselines and a broader selection of models trained on MotionBase would more robustly substantiate its claimed advantages.**\\n\\nIn this work, our primary focus was on constructing the high-quality MotionBase dataset and investigating the scaling law of model size and dataset scale. Due to time limitation, we were unable to perform a comprehensive comparison among all existing methods. We acknowledge this as an important area for improvement and plan to include additional baseline models and experimental results to provide a more thorough analysis of our findings.\\n\\n**W2. 
The paper does not include visual comparisons of motions generated by models trained on the baseline Motion-X dataset versus those trained on the proposed MotionBase dataset.**\\n\\nWe showcase the generated results from different models on the model generation result page of our visualization platform, which allows you to directly compare the motions generated by models trained on Motion-X and MotionBase. We believe this visual comparison will provide a clearer and more intuitive demonstration of the improvements MotionBase brings to model training.\\n\\n**W3. Including the ground truth R-Precision and FID scores in relevant tables would strengthen the presentation and transparency of the results.**\\n\\nWe have added the ground truth R-Precision and FID scores to Tables 2 and 3 in our revised version. Here are the results:\\n| Dataset | FID Real | R@1 Real | R@3 Real | MMDist Real |\\n|------------|----------|----------|----------|-------------|\\n| HumanML3D | 0.002 | 0.511 | 0.797 | 2.974 |\\n| MotionX | 0.038 | 0.496 | 0.821 | 2.438 |\\n| MotionBase | 0.011 | 0.290 | 0.563 | 3.480 |\\n\\n**W4. The paper would benefit from dynamic visualizations within the qualitative analysis of the motions in the proposed datasets, which could provide a clearer and more engaging illustration of the dataset's scope and quality.**\\n\\nWe fully agree that dynamic visualizations are essential for effectively demonstrating the value of our dataset. This concern has also been raised by other reviewers. To address this, we built a visualization platform that allows all reviewers to directly and clearly explore a wide range of motion examples, providing a more intuitive and comprehensive understanding of the richness and high quality of our dataset. We sincerely invite you to visit the platform and experience these dynamic examples. 
Your comment is important.\"}", "{\"comment\": \"Dear reviewer,\\n\\nAs we near the conclusion of the discussion phase, we would like to inquire if our response has effectively addressed your inquiries. In the rebuttal, we have provided explanations for your questions and concerns. Should our explanations have met your expectations, we kindly request your consideration in revising the score accordingly.\\n\\nShould you have any additional comments or concerns, we are more than willing to address them promptly during the response period. We deeply appreciate your constructive and insightful feedback, which has greatly contributed to the refinement of our work.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer,\\n\\nWith only about 2 days remaining until the rebuttal deadline, we are still eagerly awaiting your response. We sincerely hope that when you have a moment, you could spare a few minutes to check the summary and reply above. Have your previous questions and concerns been addressed? We are very keen to know whether our rebuttal has changed your recommendation regarding our work.\\n\\nSincerely, \\n\\nAuthors\"}", "{\"comment\": \"Dear AC/PAC/PC,\\n\\nThroughout ICLR's history, we believe ICLR has allowed authors to revise their papers in response to reviewers' concerns. However, we are now concerned about potential misunderstandings regarding the rebuttal phase, as some reviewers may be confusing ICLR\\u2019s rebuttal process with that of traditional conferences. 
**The spirit of ICLR is the only reason we have been willing to provide such detailed responses before the official release of our paper (over 30 ICLR pages in rebuttal), especially given that we initially submitted a content-rich version (over 25 ICLR pages).**\\n\\nSome reviewers have suggested that \\\"major revisions are not worthy of consideration in the review process\\\" and that \\\"basing a decision on revisions would be unfair to other papers.\\\" We would appreciate clarification from the AC, PAC, PC on this matter, as we\\u2019ve noticed that one of the main reasons reviewers have not raised their scores is their belief that the decision should be based solely on the original version of the paper.\\n\\nFor reference, here is the reviewer\\u2019s guideline on the official website:\\n \\n**Engage in discussion: The discussion phase at ICLR is different from most conferences in the AI/ML community. During this phase, reviewers, authors and area chairs engage in asynchronous discussion and authors are allowed to revise their submissions to address concerns that arise. It is crucial that you are actively engaged during this phase. Maintain a spirit of openness to changing your initial recommendation (either to a more positive or more negative) rating.**\", \"this_raise_another_question_we_wonder\": \"if the reviewer suggests \\\"the decision should be based on the initial version\\\", why do they ask so many questions referring to our data construction details (65 replies so far)? The level of detail we have provided has become so extensive that reviewers might mistakenly consider it the paper's main contribution. To our knowledge, no other works (e.g., MotionX, Smpler-X, EgoBody) have released such extensive details, even after their official publication.\\n\\nIn addition to this, another concern is about the static data. We have provided experimental results to validate our proposal. 
If the reviewer does not believe the results, we also provide corresponding code, checkpoints. If the reviewer overlooks the results and code we provided and relies on subjective experiences alone to draw conclusions, how can we convince a reviewer?\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"It seems that the authors' tone is quite unkind. Please stay calm.\\n\\nFor the PHC part, the pre-trained PHC is trained on AMASS, not your dataset. This will give you a low success rate for your tracking. It is also a choice to train a new PHC training on your dataset, which can promise generalization ability. \\n\\nI notice that `qpUP` has a similar concern on the static motion, which is not an animation but poses. I think this is not a suggested choice in the animation community. \\n\\nBesides, the MotionBase dataset construction pipeline introduced in Section 3 does not include any introduction of PHC, which plays an important role in the annotation process. \\n\\nAccording to the review guidance, the major revisions are not worthy of consideration in the review process. However, I tried to improve the quality of the submission with the authors and to provide my feedback on this, which encouraged me to discuss with the authors more about the unclear details. \\n\\nThe authors have made major revisions to the dataset. The authors' response:\\n> As of the start of the rebuttal period, we have collected over 1.5 million motion trajectories and 4 million motion-text pairs, **which are 1.5X compared to our first submission.**\\n\\nIf no special notes, all contents related to PHC are in the Appendix and highlighted as revision color, which can be recognized as a major revision on the technical pipeline. \\n\\n**The discussion process is for addressing misunderstandings in the reviews, not for providing major revisions. Up to now, the authors have made major revisions to the original submission, on the dataset and the annotation pipeline. 
Besides, the details of these parts are not clearly clarified during the discussion.**\\n\\nReviewer WfZ5\"}", "{\"title\": \"Official Comment to Reviewer WfZ5's weakness 1\", \"comment\": \"**W1. [The motion collection process] The limited contribution of the dataset. The video data comes from InternViD and WebVid and the data collection process is from motion-x and other methods. The dataset contribution is limited.**\\n\\nIt is important to note that we do not directly use videos on the Internet, but have undertaken extensive efforts and developed a strategic combination to collect useful motion data from the Internet. Without these efforts, raw video data would remain a noisy, unusable collection. In addition to the strategies we introduced to improve the quality of motion and text descriptions, we provide more details of the data construction process.\\n\\n1. **Video Collection and Selection:** It is worth noting that MotionX data constitutes only 5% of the entire dataset. Meanwhile, rather than relying solely on open-source datasets like InternVid, we also extract motion data from self-collected videos sourced from the Internet. For these videos, we use a pretrained 2D human keypoint detection model to filter out those without visible human activity. Additionally, rule-based methods are applied to ensure that the human bounding box occupies a significant portion of the frame, making human movement clearly visible. Videos with only partially visible humans are removed to maintain the quality of the potential motion data. Through these methods, we ensure the extracted motion data is of high quality.\\n\\n2. **Short Boundary Detection:** Web videos are generally lengthy and feature varied camera shots. To address this challenge, we adopt the following steps: \\n\\n (1) First, for videos shorter than 30 seconds or those with explicit temporal boundaries, we directly use the video clip or the provided boundaries to segment the video into shorter clips. 
\\n\\n (2) For videos longer than 30 seconds, we employ a scene detection model to roughly divide the video into smaller segments.\\n\\n (3) For each segment, we further slice it into shorter clips using the following process: \\n - At the beginning, the human with the largest bounding box is selected as the anchor, and their trajectory is tracked throughout the segment.\\n - When the trajectory is interrupted, the start and end times of the interruption are marked as the boundaries of a new clip.\\n - The process repeats by identifying the next largest visible human in subsequent frames and tracking their trajectory.\\n - This process continues until no humans are visible in the video.\\n - Clips without visible humans are filtered out.\\n\\n (4) After these steps, if a clip is still longer than 60 seconds, we randomly slice it into several sub-clips, ensuring that each sub-clip is shorter than one minute. \\n\\n3. **Removing Occlusion and blur:** Occlusion and motion blur are common issues in human-related videos. To address these problems, we adopt the following steps:\\n\\n (1) First, we sample key frames from each video and use a pretrained 2D keypoint detector to extract skeleton keypoints for each human in the key frames. If a significant portion of the keypoints has predicted confidence scores below a specific threshold, we consider the human motion to be occluded and exclude it from further processing.\\n\\n (2) We then use a visual foundation model, such as Segment Anything, to generate segmentation masks for each frame. If a large object is detected in front of the human, indicating occlusion, we filter out the corresponding motion data.\\n\\n (3) To address motion blur, we track the trajectory of each human whose motion data needs to be extracted. For timestamps with low-confidence keypoint scores, we smooth the trajectory using adjacent detection results to ensure continuity and accuracy.\\n\\n4. 
**Single-frame Motion Processing:** A substantial portion of our data consists of single-frame motions, many of which can be transformed into multi-frame sequences to enhance data diversity. To achieve this, we train a RL-based policy $\\\\pi_{\\\\rm multi\\\\\\\\_frame}$ using the AMASS dataset. This policy generates physically plausible motion sequences within a simulation environment, using the single-frame motion as the target pose. However, due to potential instability caused by drastic lower-body movements, some generated motions may fail to maintain balance. For single-frame motions that cannot be successfully converted, we use another pretrained, target-conditioned motion generator based on existing high-quality motion data. This generator uses the single-frame motion as the target pose and generates the preceding motion, effectively producing a complete sequence. While these generated motions are not fully constrained by physical laws, resulting in less consistent quality compared to those generated by the RL-based policy, they still provide an effective solution for motion conversion.\"}", "{\"comment\": \"Dear reviewer,\\n\\nIt would be unfair to disregard all the details we provided during the ICLR rebuttal, especially after we took the time to respond thoroughly to your questions. ICLR allows authors to revise their papers during the rebuttal period, which is different from other conferences like CVPR and NeurIPS. Most importantly, the provided details are provided to address your questions, which does not go against our contribution in the submitted version. **Here, we really need @AC to clarify this.**\\n\\n**If the reviewer only refers to the original submission, then what is the purpose of the rebuttal process? 
Why ask for all these details, instead of directly telling us that you would not change your score at the beginning?**\\n\\nRegarding the static data, we have mentioned several times that we provided 800K dynamic data, which takes a larger proportion of the dataset (56%). Wouldn\\u2019t it be inappropriate to overlook this significant portion and focus solely on the static data (we have also verified the benefit of static data experimentally)? We have provided experimental results and the implemented code. While we understand the reviewers\\u2019 concerns, if they are unconvinced by our experimental results, wouldn't it be more appropriate for the reviewers to run the code themselves? If the reviewer overlooks the results and code we provided and relies on subjective experiences alone to draw conclusions, how can we convince a reviewer?\\n\\nBest,\\n\\nAuthors\"}" ] }
9QPH1YQCMn
Infilling Score: A Pretraining Data Detection Algorithm for Large Language Models
[ "Negin Raoof", "Litu Rout", "Giannis Daras", "Sujay Sanghavi", "Constantine Caramanis", "Sanjay Shakkottai", "Alex Dimakis" ]
In pretraining data detection, the goal is to detect whether a given sentence is in the dataset used for training a Large Language Model (LLM). Recent methods (such as Min-K% and Min-K%++) reveal that most training corpora are likely contaminated with both sensitive content and evaluation benchmarks, leading to inflated test set performance. These methods sometimes fail to detect samples from the pretraining data, primarily because they depend on statistics composed of causal token likelihoods. We introduce Infilling Score, a new test-statistic based on non-causal token likelihoods. Infilling Score can be computed for autoregressive models without re-training using Bayes rule. A naive application of Bayes rule scales linearly with the vocabulary size. However, we propose a ratio test-statistic whose computation is invariant to vocabulary size. Empirically, our method achieves a significant accuracy gain over state-of-the-art methods including Min-K% and Min-K%++ on the WikiMIA benchmark across seven models with different parameter sizes. Further, we achieve higher AUC compared to reference-free methods on the challenging MIMIR benchmark. Finally, we create a benchmark dataset consisting of recent data sources published after the release of Llama-3; this benchmark provides a statistical baseline to indicate potential corpora used for Llama-3 training.
[ "Pretraining data detection", "Large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=9QPH1YQCMn
https://openreview.net/forum?id=9QPH1YQCMn
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qLlk8fQ16S", "pyFcwzOqQS", "oL96YMbV5V", "o5bW0iZzSp", "kkg4Fxdiv8", "ibbspU5EeS", "bqpNKcIUR6", "WyTO7GYX3N", "SaydAKaisr", "NoMuYebvmw", "MHdcT0foq0", "GhANOCdwMM", "G7jvA9p3J7", "9cUElLKvkK", "3SOZSwm4ym", "2dwfq5XSfC", "1Lka5cPJLO" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732229433669, 1730702561220, 1732770261763, 1732765045871, 1730572010481, 1737523977521, 1731047378971, 1732585548740, 1732228659061, 1732613238212, 1730716381279, 1733207410567, 1734677693597, 1732612855666, 1732230362293, 1732356144916, 1732523546792 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_7osK" ], [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_kRF2" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_T2Q9" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_kRF2" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_ZiHR" ], [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_ZiHR" ], [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Area_Chair_4qTt" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_7osK" ], [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Authors" ], [ "ICLR.cc/2025/Conference/Submission9352/Reviewer_T2Q9" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your review. 
We are glad and encouraged that you found InfillingScore a novel method.\", \"on_the_weakness_point_you_have_mentioned\": \"Yes, thanks for highlighting this very important point. The first step in evaluating a model with InfillingScore on a corpus is to determine the optimal classification threshold. This can be achieved using 100 positive (seen) and 100 negative (unseen) sample sentences, requiring approximately 1.5 hours on H200 GPU nodes with Llama3-8B. Importantly, this process of determining the threshold is a one-time process per model, ensuring runtime is feasible and InfillingScore is practical for testing purposes. The second step, which is testing the samples, takes tens of seconds for a typical sentence of 100-200 tokens, which is feasible for membership inference purposes.\"}", "{\"summary\": \"This paper proposed a pretraining data detection approach that utilizes the non-causal token likelihoods which depend on both preceding and succeeding token probabilities.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The paper has enough novelty.\\n2. It includes all the previously related works and lists the differences.\\n3. It has detailed experiments and results analysis.\\n4. The paper writing is clear, and the visuals are good.\", \"weaknesses\": \"The proposed approach has a slower speed compared to the previous best approach. However, I believe it has little effect on the impact because contamination detection may not need to be so fast.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable feedback, we have updated the paper.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thank you for the response and sorry for the late reply. After reading your response and others' reviews, I think my concerns are not resolved.\\n1. 
Contributions are marginal: Applying Bayesian probability for output tokens and the approximation of exact infilling scores to the existing Min-k% and Min-k%++ method is hardly a contribution to me. It is simply adding two intuitive tricks to an existing framework that is shown to be not effective for general applications in the past and recent literature [1, 2, 3] (including these papers but not limited to) that the token probability-MIA methods are not performing well for LMs and LLMs. Therefore, I don't think this paper has made substantial amount of contributions.\\n\\n2. The performance is not statistically \\\"much better\\\" than the baseline methods as claimed in the paper and rebuttals, where the improvement is very marginal considering the binary classification tasks. In the experiments from Sec 4.4.2, Infilling Score did not perform well for about half the cases. Considering the conclusions from [1] where most methods perform nearly random for some MIA evaluations, I don't think this paper can be applied to real downstream tasks. \\n\\nTherefore, I think my current evaluation is reasonable for this paper and will keep the current ratings.\\n\\n[1] Duan, Michael, et al. \\\"Do membership inference attacks work on large language models?.\\\" arXiv preprint arXiv:2402.07841 (2024).\\\\\\n[2] Carlini, Nicholas, et al. \\\"Extracting training data from large language models.\\\" 30th USENIX Security Symposium (USENIX Security 21). 2021.\\\\\\n[3] Carlini, Nicholas, et al. Is ami (attacks meet interpretability) robust to adversarial examples? arXiv preprint arXiv:1902.02322.\"}", "{\"summary\": \"This paper studies algorithms to detect whether a given sentence is the pretraining data used for training an LLM. The paper proposes \\u201cInfilling Score\\u201d, a test-statistic score based on non-causal token likelihoods. 
The proposed score works by computing the infilling probability of a token based on both its past and future tokens, whereas similar existing methods, such as MIN-K% and MIN-K%++, rely on a statistic based on past tokens only.\\n\\nThe motivation for Infilling Score is that incorporating future tokens should yield a more indicative measure. While the most intuitive method to incorporate future tokens is based on the Bayes rule, the paper argues that it would involve marginalising over all possible tokens in the vocabulary at each time step (i.e., each token in the test sequence). Therefore, this paper proposes an approximation (in Equation 6) to circumvent the expensive marginalisation, requiring 2 * (number of test tokens) model calls instead of (vocab size) * (number of test tokens) calls.\\n\\nExperiments are conducted on standard datasets, including WikiMIA and MIMIR, and the proposed method is compared against other reference-free methods (e.g., Min-K% and Min-K%++) as well as other reference-based methods. Experimental results show that Infilling Score outperforms existing methods in the majority of tasks. In addition, the paper compiles a more recent dataset from book excerpts and applies the detection algorithm on Llama3.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea is simple, yet novel and effective. The method extends the existing reference-free pretraining data detection algorithm (Min-K%++) by incorporating the information from future tokens, and the proposed method yields improved performance.\", \"While it is not as efficient as Min-K%, the proposed algorithm is more efficient than the implementation of the naive Bayes rule. 
This is noted and examined in the paper.\", \"Comprehensive experiments on both standard datasets as well as a case study on new datasets (to avoid likely contamination).\", \"Overall, this paper improves the understanding in the field of contamination/training data detection.\"], \"weaknesses\": [\"It is not clear in Section 4 (experiments) how Infilling Score (Equation 6) performs compared to the naive Bayes rule (Equation 5). The paper provides run-time in Section 4.5, but I cannot find the performance comparison.\", \"[nitpick] similar to Min-K% & Min-K%++, this method is limited to grey-box scenarios.\"], \"questions\": \"In the context of data contamination detection, I wonder how this method works if the example to be tested (x) is partially contaminated. Would it be more or less sensitive to partial changes compared to other methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper introduces the Infilling Score, a method developed based on Min-k% and Min-k%++ to detect whether a given text sequence was part of a language model\\u2019s pretraining data, which is the traditional MIA setting. This method builds on existing approaches for membership inference attacks by using non-causal token likelihoods to improve detection accuracy. The authors propose a ratio test-statistic for efficient computation, and demonstrate its effectiveness on various benchmarks such as WikiMIA and MIMIR, where the proposed method performs slightly better than the compared baselines, especially on longer text sequences.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation of combining the existing method and likelihood probability is straightforward and easy to understand.\\n2. 
This paper presents extensive experiments to demonstrate the effectiveness of the proposed method. The case study with the newly released Llama 3 model could also be useful in real practice.\", \"weaknesses\": \"1. The contribution in this paper is quite marginal. The proposed framework is more like an extension of the previous methods like Min-k% and Min-k%++, which utilize token probabilities to make predictions. The token-probability approach has been shown to be less effective for white-box settings in MIA studies. While I understand the grey-box access might raise more challenges, the modifications of previous methods in this paper can hardly contribute new insights in this direction.\\n\\n2. It seems that the complexity of the proposed framework is proportional to the number of tokens considered in the method. As shown in Sec. 4.5, the runtime of the proposed method is indeed much higher than the previous Min-k%++ method. Would this become a concern when using such a method in real practice?\\n\\n3. Despite the increased complexity, the improvements over the baseline models in terms of performance on these benchmarks are also not obvious. In many cases, Infilling Score can only achieve performance similar to Min-k% and Min-k%++.\", \"questions\": \"See the weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for replying to my comments. These addressed my concerns. As for the last part in W2, I suggest you add the explanation in your final version of the paper.\"}", "{\"comment\": \"### W1. Marginal Contribution\\nThanks for your review. 
We want to clarify what our contributions are: \\n- The first contribution is the introduction of the infilling score metric and showing that the involved Bayesian probabilities significantly improve membership performance compared to methods like MinK% and MinK%++, which only rely on past tokens. This new statistic is an improvement over the previous state-of-the-art methods, so it is non-trivial. The problem is that the computation of the exact Infilling Score is very expensive: it requires a number of LLM calls proportional to the vocabulary size. Roughly, vocabulary sizes are typically 30-50k tokens and an exact calculation of the infilling score would require 100-200k LLM calls. \\n- The second key contribution is algorithmic: how to approximate the infilling score with only two LLM calls, as opposed to thousands. The technical innovation of our method is how to make our approximation algorithm not depend on the vocabulary size |V|. This is a key point that we don\\u2019t want our reviewers to miss.\\n\\nAlso, InfillingScore performs significantly better than Min-K% and Min-K%++ on LLMs with larger pretraining datasets, such as Llama. This is particularly valuable because as the size of the training corpus grows, it is more likely to be held internally as proprietary data rather than being publicly shared.\\n\\n### W2. Complexity Concerns\\nAlthough InfillingScore has a longer runtime compared to other inference-based methods like MinK% and MinK%++, it is important to note that this method is still faster and cheaper than training data detection methods that require further model training (example: Zhang et al., https://www.arxiv.org/pdf/2410.10880).\\nThe process of detecting the best classification threshold needs testing with two subsets of seen and unseen examples as our ground-truth (labeled) dataset. Using 100 sample sentences (of 128 to 256 tokens) per class, this process takes less than 1.5 hours for the Llama-7B model on one H200 GPU. 
Note that this is a one-time process. Once the optimum threshold is detected, testing a paragraph of text takes tens of seconds in this setting.\\n\\n### W3. Performance Improvements\", \"response\": \"InfillingScore significantly outperforms MinK% and MinK%++ on models which have been pretrained on larger training datasets like Llama. The advantage of InfillingScore in detection accuracy comes from the fact that it incorporates both past and future tokens in the sequence, which allows it to determine outlier (seen) examples more accurately.\", \"so_to_summarize\": \"Yes, indeed, our method will take a few more seconds, but this is worth it for contamination detection, in our opinion.\"}
Empirically, the method achieved an accuracy gain over state-of-the-art methods.\", \"weaknesses\": \"1. While the method empirically achieved an accuracy gain over state-of-the-art methods in Tables 1 and 3, I wonder whether the differences are really significant. If so, it is better to mention with what significance test the authors found them to be significant.\\n\\n2. While the computation can be invariant to the vocabulary size in the proposed method, it seems that there exists a tradeoff in terms of the sequence length. The performance is lower for shorter instances than longer ones. However, for longer instances, the computation is higher. In Sec. 4.5, since the authors did not show the size of the datasets with different lengths, it is difficult to judge whether the differences in the speed in Table 5 are negligible or not. It is better to clearly mention the number of instances for each dataset. \\n\\nIt would be more convincing if, when possible, the authors could estimate the runtime for actual settings where we try pre-training data detection for a specific dataset of a certain size, rather than for the standard benchmark dataset.\", \"questions\": \"1. I wonder whether \\\\tau in Sec. 3.2 is also a hyperparameter to be fixed. The value and how the authors fixed it were not mentioned in the experiments. Please explain them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all the reviewers for their effort in helping us improve our manuscript. We are glad that most reviewers have found the proposed method novel and the experiments extensive.\\n\\n\\nWe want to emphasize the main contribution of the work again here. Using the non-causal likelihoods is a natural and intuitive method for improving the accuracy of methods like MinK%++. 
However, it is not trivial to compute this statistic with a feasible algorithm that does not require on the order of |V| LLM calls (where |V| is the vocabulary size). The innovation of our work is in introducing the ratio statistic that enables the method to incorporate non-causal token likelihoods.\\n\\nWe have shown empirical results on all existing datasets which are used for membership inference in the literature. Our method achieves a higher accuracy compared to existing membership inference methods. There are very few exceptions, which are listed in our results. Based on reviewers' great suggestions, we have also conducted a bootstrap hypothesis test and calculated the two-sided p-value to indicate the statistical significance of our method. In addition, we have also clearly indicated the limitations of our work (in runtime) compared to existing methods, and have measured and reported the comparative runtime results.\\n\\nWe want to thank the reviewers again for their comments and suggestions towards improving this work.\"}
The setup of a new benchmark dataset is another valuable resource for the research community.\\n\\nWhile some reviewers described the contribution as incremental compared to Min-k% and Min-k%++, the authors contended that the contribution is non-trivial and explained how they developed a feasible algorithm for computing this statistic without necessitating an order of |V| language model calls (where |V| is the vocabulary size). The innovation in this work lies in the introduction of the ratio statistic, enabling the method to incorporate non-causal token likelihoods. I agree with the authors' assertion that a computationally feasible algorithm qualifies as a significant contribution.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers initially had differing opinions about the paper's contributions. Reviewer kRF2 regarded the contributions as marginal, citing that the improvements are minor and do not substantially advance the field. On the other hand, reviewers ZiHR and 7osK found the paper to be a solid contribution, emphasizing its novelty and performance gains over existing methods. After the author's rebuttal clarified points regarding statistical significance testing and practical applicability concerns, reviewer ZiHR was satisfied, while reviewer kRF2 maintained their stance. Reviewer T2Q9 provided a nuanced view, acknowledging the improvements but requesting clarity on specific experimental questions, which was addressed by the authors.\"}", "{\"comment\": \"Thank you for the replies! I will keep the score.\"}", "{\"comment\": \"Thank you so much for your review, we appreciate the points you have brought up.\\n\\n### W1. \\nAbout the weakness point 1, where you have asked about performance comparison of the InfillingScore methods vs. the exact score computation:\\nThanks for bringing up this point. It definitely would have been ideal to compare the accuracies of exact score computation, against our proposed method. 
However, this is not feasible, as it is extremely compute- and runtime-intensive to run the naive approach on the entire dataset. Based on the runtime provided in Section 4.5, it takes approximately a couple of days to run the tests for a single subset of WikiMIA. Hence, this was infeasible for us.\\n\\n### W2. \\nAbout the weakness point 2, where you have mentioned the grey-box access limitation:\\nThis is correct. However, it is important to note that a majority of the foundational LLMs in industry, such as Llama, Mistral, Qwen, and Gemma, are open-weights models which can be tested with these methods.\\n\\n### Q1.\", \"about_the_question_you_have_brought_up\": \"This is a very interesting comparison to do. Can you elaborate a bit more on the \\u201cpartial contamination\\u201d setting? Do you mean a setting where only a portion of a sentence's tokens were seen by the model during training? \\nThe benefit of our method compared to the MinK% and MinK%++ methods is that InfillingScore incorporates future tokens as well as past tokens for membership detection. 
So, my prediction is that this advantage still holds under noise.\\nIf your suggestion is to test and compare a setting where only a subset or a random set of tokens have been seen by the model during training, we should be able to test that by randomly changing a subset of tokens from WikiMIA samples and compare how InfillingScore performs on this noisy dataset compared to MinK%++.\"}", "{\"comment\": \"Thank you for your review, we appreciate the great suggestions and feedback.\\n\\n### W1.\", \"thank_you_for_your_great_suggestion\": \"we ran a bootstrap hypothesis test and calculated the two-sided p-value, comparing Infilling Score AUROC results with MinK%++ results.\\nWe have updated the paper with these results in Appendix B.1, Table 11.\\nHere's the table converted to markdown format:\\n\\n| Sequence Length | Model | Infilling Score | | MinK++ | | Comparison | |\\n| --- | --- | --- | --- | --- | --- | --- | --- |\\n| | | AUROC (%) | Std Err | AUROC (%) | Std Err | Difference (%) | p-value |\\n| 32 tokens | llama-7b | 89.185 | 1.173 | 85.182 | 1.328 | 4.003 \\u00b1 1.130 | 0.000*** |\\n| | llama-13b | 88.850 | 1.232 | 84.852 | 1.333 | 3.998 \\u00b1 1.222 | 0.004** |\\n| | llama-30b | 87.628 | 1.236 | 84.390 | 1.329 | 3.239 \\u00b1 1.157 | 0.006** |\\n| 64 tokens | llama-7b | 89.788 | 1.341 | 85.922 | 1.659 | 3.866 \\u00b1 1.492 | 0.012* |\\n| | llama-13b | 90.029 | 1.265 | 85.692 | 1.642 | 4.338 \\u00b1 1.539 | 0.010* |\\n| | llama-30b | 88.206 | 1.447 | 84.828 | 1.705 | 3.378 \\u00b1 1.601 | 0.040* |\\n| 128 tokens | llama-7b | 87.364 | 2.272 | 84.896 | 2.395 | 2.468 \\u00b1 2.654 | 0.348 |\\n| | llama-13b | 88.145 | 2.214 | 83.740 | 2.463 | 4.405 \\u00b1 2.649 | 0.080 |\\n| | llama-30b | 86.207 | 2.797 | 82.398 | 2.602 | 3.809 \\u00b1 1.993 | 0.064 |\\n| 256 tokens | llama-7b | 96.307 | 1.761 | 82.354 | 4.662 | 13.952 \\u00b1 4.296 | 0.000*** |\\n| | llama-13b | 95.124 | 2.271 | 82.326 | 4.740 | 12.797 \\u00b1 3.952 | 0.000*** |\\n| | llama-30b | 90.737 
| 3.782 | 77.411 | 5.643 | 13.326 \\u00b1 4.459 | 0.002** |\\n\\nResults show bootstrap estimates with 1000 iterations. The mean difference indicates Infilling Score's improvement over MinK++. Statistical significance is denoted as: * (p < 0.05), ** (p < 0.01), *** (p < 0.001).\\n\\n### W2.\\nThanks for bringing up this point. The dataset has 776 sequences of length 32, 542 sequences of length 64, 250 sequences of length 128, and 82 sequences of length 256. Hence, there is a trade-off between accuracy and runtime, as testing the 256-token sequences takes about 82 x 30 = 2460 sec (vs. 776 sec for 32-token sequences).\\n\\nNote that it is still quite feasible to test the dataset on a single GPU node.\\nWe have added this explanation in the paper.\\n\\nAlso, the process of testing an actual dataset (with no labels) has two steps. The first step is detecting the optimum classification threshold, where one can use 100 positive (seen) and 100 negative (unseen) sample sentences to choose the optimum threshold. \\nA book or a corpus of 100,000 words, or ~75,000 tokens, when split into 256-token chunks, takes (75000 / 256) * 30 seconds, or ~ 2.5 hrs to test fully using InfillingScore. Note that in many cases, we can randomly sample a number of paragraphs from a corpus to test, and calculate the contamination rate (example in: Shi. et. al., Table 2. https://arxiv.org/pdf/2310.16789)\\n\\n### Q1.\\nThanks for mentioning this, indeed determining this parameter can cause confusion. The parameter \\u03c4 is the classification threshold which changes based on the data distribution. To find the optimum classification threshold, the process is to construct labeled positive (seen) and negative (unseen) subsets, and use the sample scores to find the best threshold based on accuracy or AUROC. We added the explanation for this in lines 131-134 in the paper.\"}", "{\"comment\": \"thank you for the response:\\n\\nRegarding Weakness 1, I understand your point. 
Sometimes, it could be useful to report the performance on a smaller subset of the dataset (subject to the compute resources).\\n\\nRegarding the question, yes, you are correct. That was my interpretation. Thank you for your explanation anyway.\"}" ] }
9Q9KXUTjmd
Neighborhood and Global Perturbations Supported SAM in Federated Learning: From Local Tweaks To Global Awareness
[ "Boyuan Li", "Zihao Peng", "Yafei Li", "Mingliang Xu", "Baofeng Ji", "Shengbo Chen", "Cong Shen" ]
Federated Learning (FL) can be coordinated under the orchestration of a central server to build a privacy-preserving model without collaborative data exchange. However, participant data heterogeneity leads to local optima divergence, affecting convergence outcomes. Recent research has focused on global sharpness-aware minimization (SAM) and dynamic regularization to enhance consistency between global and local generalization and optimization objectives. Nonetheless, the estimation of global SAM introduces additional computational and memory overhead. At the same time, the local dynamic regularizer cannot capture the global update state due to training isolation. This paper proposes a novel FL algorithm, FedTOGA, designed to consider optimization and generalization objectives while maintaining minimal uplink communication overhead. By linking local perturbations to global updates, global generalization consistency is improved. Additionally, linking the local dynamic regularizer to global updates increases the perception of the global gradient and enhances optimization consistency. Global updates are passively received by clients, reducing overhead. We also propose neighborhood perturbation to approximate local perturbation, analyzing its strengths and working principle. Theoretical analysis shows FedTOGA achieves a faster convergence rate of $O(1/T)$ under non-convex functions. Empirical studies demonstrate that FedTOGA outperforms state-of-the-art algorithms, with a 1\% accuracy increase and 30\% faster convergence, achieving state-of-the-art performance.
[ "Federated Learning; Heterogeneous Data" ]
https://openreview.net/pdf?id=9Q9KXUTjmd
https://openreview.net/forum?id=9Q9KXUTjmd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x1S85CDXEz", "vgzsP9MKpR", "uEdq0FDvRQ", "q2PPxJPinX", "hvsHupTRcT", "SvePLG5XpV", "Mjsm224LSC", "JKGzq1XCrS", "6BhRzruaJH", "0y2PjTO2o4" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_review", "official_comment", "comment" ], "note_created": [ 1731508346627, 1731725011911, 1731508430609, 1730776238487, 1729514194934, 1730368364596, 1731508622365, 1730660137891, 1731508213801, 1731739878964 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2748/Authors" ], [ "ICLR.cc/2025/Conference/Submission2748/Authors" ], [ "ICLR.cc/2025/Conference/Submission2748/Authors" ], [ "ICLR.cc/2025/Conference/Submission2748/Reviewer_gca5" ], [ "ICLR.cc/2025/Conference/Submission2748/Reviewer_PARN" ], [ "ICLR.cc/2025/Conference/Submission2748/Reviewer_agaF" ], [ "ICLR.cc/2025/Conference/Submission2748/Authors" ], [ "ICLR.cc/2025/Conference/Submission2748/Reviewer_fUrX" ], [ "ICLR.cc/2025/Conference/Submission2748/Authors" ], [ "ICLR.cc/2025/Conference/Submission2748/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer fUrX,\\nWe are very grateful for your constructive comment.\", \"weakness_1\": \"Thank you for your valuable comments on our paper. Firstly, we would like to clarify that our primary contribution is not to introduce an entirely new theory but rather to propose a novel global SAM estimation method and the introduction of dynamic regularization for global updates. Additionally, we innovatively introduce the technique of neighborhood perturbation. For a deeper understanding of the limitations of existing algorithms, such as FedSMOO and FedLESAM, please refer to Appendix A.2. Notably, these methods universally require additional local storage space for computation. 
In contrast, FedTOGA passively receives global updates from the server and integrates them effortlessly, significantly reducing the requirements for stable client connections and storage space.\", \"weakness_2\": \"Thank you for your insightful comments. \\\"Standing on the shoulders of giants,\\\" we acknowledge that our convergence analysis is based on FedSpeed. Unlike FedSMOO and FedDyn, which assume that local clients stop at a local stable point after each training round\\u2014an assumption that is almost impossible in federated learning (FL)\\u2014our approach does not rely on this assumption. FedSpeed extends the analysis to K local iterations but imposes more restrictions on the perturbation learning rate. FedTOGA introduces a different perturbation mechanism compared to FedSpeed, thereby relaxing these restrictions. Adjusting the value of \\\\beta can further tighten $1/\\\\omega$. We have also corrected some errors in the analysis process of FedSpeed. While our method may not show significant advantages in convergence speed, it is more broadly applicable compared to the analyses of FedSMOO and FedSpeed.\", \"weakness_3\": \"Thank you for your valuable feedback. We have thoroughly revised the manuscript to improve clarity and correctness, addressing the issues with presentation, typos, the missing line in Algorithm 1, undefined terms in Theorem 2, and the error in Equation (11). We hope these changes enhance the readability and accuracy of our paper.\"}", "{\"title\": \"General Response\", \"comment\": \"We are eager for further discussions to elucidate this paper's contributions and correct relevant errors, and we thank the reviewers for their valuable time. Most studies build on existing work and take it a step further. Please refer to Table 5 to see how we differ from existing studies. 
Once again, we would like to highlight our proposed $\\\\textbf{neighborhood perturbation}$ technique, which is a completely new attempt in FL.\"}", "{\"comment\": \"Dear Reviewer agaF,\\n\\nWe sincerely appreciate your constructive feedback, which has greatly contributed to the improvement of our manuscript.\", \"weakness_1\": \"We appreciate the reviewers' feedback and would like to clarify that our approach with FedTOGA no longer presupposes that local training for each client must reach a stable equilibrium. Unlike FedSpeed, which adheres to a more rigid framework, FedTOGA introduces a unique perturbation mechanism that effectively relaxes this constraint, allowing for more dynamic and flexible local training processes.\\n\\nFurthermore, by adjusting the value of \\\\beta, we can precisely control the $1/\\\\omega$ ratio, thereby optimizing the balance between local and global model updates. This adjustment is a key feature that enhances the adaptability and performance of our federated learning system.\", \"weakness_2\": \"To address the reviewers' concerns, we have initiated testing on the TinyImageNet dataset. In the previous version, we adhered to the standard settings used in FedSMOO and FedLESAM to ensure fair and accurate benchmarking of all algorithms. Due to space limitations, we omitted these results in the earlier submission.\", \"weakness_3\": \"In the appendix, we have included a detailed critical analysis of the local training rounds K. This analysis explores the impact of different K values on model performance, providing deeper insights into the optimization process.\\n\\nWe understand the importance of addressing the reviewers' concerns and believe that these additions will significantly enhance the clarity and robustness of our paper. 
If you have any further questions or require additional information, please feel free to contact us.\"}", "{\"summary\": \"This paper presents FedTOGA, a Federated Learning (FL) algorithm designed to prevent the heterogeneity of client data from causing the global model to converge to a sharp local minimum. FedTOGA achieves this in a\", \"communication_efficient_way_by_combining_new_variants_of_two_techniques\": \"(i) Sharpness-Aware Minimization (SAM) to add perturbations to the training process and (ii) local dynamic regularization. In contrast to existing literature, FedTOGA uses the global gradient update to adjust both the global perturbation and the local regularization. The authors obtain analytic convergence guarantees for their methodology, and show the effectiveness of their approach by conducting extensive experiments.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Comprehensive validation: In Sec. 6, FedTOGA is compared against a lot of FL algorithms and is able to outperform all of them.\"], \"weaknesses\": [\"In the first contribution bullet point in Sec. 1, the claim about FedTOGA being the first global perturbation technique and first local dynamic regularizer needs to be rephrased to emphasize the fact that it is the first to do so using the global update. The current version of this statement overlooks the contributions of FedSMOO, FedLESAM, FedSpeed and papers that use dynamic regularizers, which have developed these ideas but without using global updates.\", \"In Sec. 2's Sharpness-Aware Minimization, the parameters \\\\rho and \\\\delta need to be clearly defined.\", \"In Eq. (4) given in Sec. 2.1, the notation \\\\theta_i seems to imply the different local models for clients in the FL setup. If this is the case, this needs to be clarified by defining \\\\theta_i, which is missing currently. The same comment applies for Eq. (6) in Sec. 
4.1, where the minimization is being done over a single vector \\\\theta while there are multiple \\\\theta_i's in the loss function formulation.\", \"In Assumption 3 in Sec. 5, can the authors explain why they need a bounded variance of the unit gradient, and why just a bounded variance of the gradient itself is not sufficient? Adding some references which also make this assumption for the unit gradient is recommended.\", \"In Sec. 6, do FedTOGA and most of the other FL algorithms perform similarly if the data distribution among clients is IID? Currently all experiments are done in non-IID settings and it would be insightful to see how these FL methods compare in IID setups.\", \"The global perturbation used in this paper builds heavily on prior research, performing a gradient ascent similar to that in existing algorithms with the addition of some global update information. The same goes for the local regularizer. While it is interesting to see how much accuracy increase can be achieved by making these adjustments, I am concerned about the novelty of this incremental idea. Alongside other issues mentioned above, I do not think this paper is ready to be published at ICLR in its current version.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an improved optimizer based on SAM, which further enhances the global perspective when applying SAM in federated learning (FL) and maintains global generalization. Both the theoretical analysis and extensive experiments confirm the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The design motivation of this paper is reasonable. Although there are some flaws in the writing, the overall structure is understandable.\\n\\n2. 
The experiments are very thorough, with extensive empirical studies conducted under standard settings to validate the efficiency of the proposed techniques.\\n\\n3. A convergence analysis was conducted for the proposed method to demonstrate that its convergence remains at the same level as the previous works.\", \"weaknesses\": \"1. The section on Methods (Section 4) is quite disorganized. I suggest that the authors include a notation table to explain all the variables that appear later in the text. I noticed that many variables are introduced without explanation, see questions.\\n\\n2. The definition of Equation (9) seems somewhat obscure and difficult to understand. The vanilla local objective (like FedDyn) adopts the augmented alternating direction method of multipliers to solve the consensus problem. The term $h_i$ is the dual variable to balance the dual problem. Performing operations on the dual term seems to affect the consistency solution of the primal problem. I suggest that the authors remove the problem formulation related to Equation (9) and directly introduce the use of certain variables to replace or correct the gradient. Actually, from the optimization perspective, the proposed method still solves Eq. (8), but with the proposed novel techniques (momentum-based gradient estimator and SAM-based local optimizer). If the authors modify the entire Lagrangian function, it still needs to be proven that the solution of this function is consistent with the solution of the original problem. I believe this is unnecessary for the techniques proposed in this paper.\", \"questions\": \"1. A technical question: in line 245, according to the motivation of FedCM that $\\\\Delta^t\\\\approx \\\\nabla f(\\\\theta^t)$, why is the local perturbation not $\\\\delta_k^t=\\\\rho\\\\frac{(1-\\\\kappa)g_{i,k}^t + \\\\kappa\\\\Delta^t}{\\\\Vert (1-\\\\kappa)g_{i,k}^t + \\\\kappa\\\\Delta^t \\\\Vert}$? It looks like an external momentum in the current version. 
Will the current external momentum setting outperform the original inner momentum form estimation of FedCM?\\n\\n2. I am confused on line 264. What is the term $g_{i,k}$? Why does the fusion term take the form $g_{i,k} + \\\\widetilde{g}_{i,k-1}^t + \\\\kappa\\\\Delta^t$?\\n\\n3. Line 11 in Algorithm 1, what is the term $g_{i,k}^{t}[\\\\widetilde{g}_{i,k-1}^t]$?\\n\\n4. What is the connection between the term in line 264 and the \\\"lookahead\\\" optimizer? Although the paper claims that this form is an extension of lookahead, the authors could provide the corresponding extended formulas to further explain why this form corresponds to lookahead optimizers.\\n\\n5. Could the authors compute the variance of the SAM perturbations? In fact, if the global perturbation is approached more closely, the variance of their local perturbations should be smaller. Additionally, the authors could calculate the global perturbation at the beginning of each round and compare whether this approach leads to further improvements.\", \"some_typos\": \"(1) There are issues with the references in this paper. I suggest that the authors distinguish between the \\\\citet and \\\\citep commands to provide correct citations.\\n\\n(2) Line 901: \\\"waht\\\" should be \\\"what\\\".\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"To address the local optima divergence in Heterogeneous Federated Learning, this paper proposes a FedTOGA method by linking the local dynamic regularizer to global updates to enhance the consistency of optimization and generalization. The method efficiently links local perturbations to global updates and achieves a non-convex convergence rate of $\\\\mathcal{O}(1/T)$. The authors also propose neighborhood perturbation to approximate local perturbation. 
The authors also provide empirical validation of the theoretical results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is well-motivated: the paper shows that existing methods suffer from the local optima divergence issue, and shows how to fix it.\\n2. The empirical results show that the FedTOGA method is better than other HtFL methods using global sharpness-aware minimization (SAM) and dynamic regularization, as expected.\", \"weaknesses\": \"1. In Theorem 2, compared with other methods using global sharpness-aware minimization (SAM) and dynamic regularization, such as [1], a more distinct summary of the superiority of FedTOGA may be needed.\\n2. In addition to CIFAR10/100, the authors should also consider the performance of FedTOGA on the TinyImageNet task.\\n3. In FL research, the local epoch is an important parameter. The authors should study this parameter's impact on performance.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
We aim to preserve the integrity of local gradients because we have observed that with the progression of training in FedCM and MoFedSAM, the advantages in accuracy and convergence speed gradually diminish. We hypothesize that this may be due to the trade-off with the global gradient, which hinders the model's ability to effectively learn local knowledge.\", \"questions_2_3\": \"We apologize for any confusion caused by our textual presentation. Firstly, let us clarify that \\\"g_i^k\\\" refers to an unbiased estimate of \\\"\\\\nabla f_i(\\\\theta_i,\\\\xi_i)\\\", while \\\"\\\\tilde{g}_i^k\\\" denotes the gradient calculated after model perturbation. The key aspect of our proposed neighborhood perturbation is its effective utilization of historical gradients stored in the gradient cache. Since the gradient cache needs to be cleared before each training session, and SAM performs a gradient clear operation before integrating gradient ascent, by disabling this operation, we can retain the previous gradient, thus effectively integrating it into the current SAM computation, leading to a fused computation method. After the SAM computation, the gradient cache is cleared again without interfering with subsequent SGD calculations. For a more detailed observation of the changes in the gradient cache, please refer to Appendix A.3.\", \"question_4\": \"This scheme is not an extension of LookAhead (though it draws inspiration from it). Specifically, given the computation method of SAM, we consider incorporating the gradient from the previous update step into the gradient ascent process similar to the gradient lookback operation in LookAhead. Therefore, we integrate this scheme with existing methods to validate the effectiveness of neighborhood perturbation. 
Detailed experiments can be found in Appendix B.7.\", \"question_5\": \"We have promptly included the model drift experiment in the appendix under the experimental settings of LeNet on CIFAR10.\\n\\nWe understand your concerns, and we sincerely hope that our responses will fully address them and contribute to the further improvement of our manuscript. If you have any additional questions or require further clarification, please do not hesitate to let us know.\"}", "{\"summary\": \"The paper proposes a global-aware perturbation method for sharpness-aware minimization (SAM) in federated learning (FL). Compared with existing methods, this paper combines the idea of local dynamical regularizer with global updates to mitigate the effect of data heterogeneity. Convergence of the proposed algorithm is theoretically analyzed with the rate $O(1/T)$. Empirical results are shown to justify the performance of the algorithm.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The main contribution of the paper is to propose a local and global perturbation-based algorithm (FedTOGA) that enjoys communication and computation efficiency for SAM in FL. Both theoretical convergence guarantee and empirical evaluation of the algorithm are provided.\", \"weaknesses\": \"1. The novelty and contribution of the paper is limited. I think the main advantage of the proposed algorithm that authors claim is to leverage the global and neighborhood perturbation to reduce communication overhead and computational cost. However, this is not clear in the paper. As in Theorem 2 the authors claim that the rate of $O(1/T)$ is achieved when setting $K=O(T)$, which is faster than existing literature. However, FedSMOO also achieves the same rate $O(1/T)$ and linear speedup in the number of clients without any constraint on $K$. Thus, the advantage of FedTOGA is not clear.\\n\\n2. The analysis tools look quite similar to those in FedSpeed paper. 
Thus, the technical contribution of this paper seems limited.\\n\\n3. The presentation of the paper is not good, which makes it hard for the reader to follow. There are some mistakes and typos. To list a few: in Line 300, there is no \\\"Line 16\\\" in the algorithm. In Theorem 2, $z_t$ is not defined, and in eq. (11) the LHS should be the sum of $\\\\Vert z_t \\\\Vert^2$.\", \"questions\": \"1. Could the authors further explain why their algorithm is better compared to previous methods in terms of, e.g., convergence rate, computational cost, and challenge in implementation?\\n\\n2. What are the technical difficulties of the theoretical analysis compared to the literature?\\n\\n3. As in Theorem 2, the choice of learning rate is of order $O(1/\\\\sqrt{T})$ while constants are ignored. However, these neglected constants may significantly influence the actual performance of the algorithms. Could the authors explain why the learning rates are identical for all algorithms? In the above sense, does this yield a fair comparison? If yes, could the authors explain why this renders a fair comparison?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer gca5, We are deeply appreciative of your constructive comment.\", \"strengths\": \"We are grateful for your encouragement. In future versions (subject to submission size limitations), we will provide a thoroughly segmented dataset to facilitate the reproducibility of our work by all researchers. Additionally, all comparative baseline results can be referenced from FedSMOO, which will further support the reproducibility of our study.\", \"weakness_1\": \"We appreciate your feedback and recognize that there may have been an issue with our articulation. 
Our contribution primarily emphasizes being the first to introduce global updates into SAM and dynamic regularization, which, to the best of our knowledge, has not been addressed in previous studies. We certainly do not intend to overlook the significant contributions of FedSMOO, FedLESAM, and FedSpeed regarding the use of dynamic regularization. Instead, we have repeatedly acknowledged these contributions throughout the main text and appendices. For example, see Abstract lines 015-016, Introduction lines 073-075, Appendices A.1 and A.2, as well as the Contributions section. Additionally, we revisit the limitations of dynamic regularization in prior research in Section 2.2. This combination is innovative, and to our knowledge, no study has mentioned this ingenious integration.\", \"weakness_2\": \"The parameter \\\"\\\\rho\\\" refers to the perturbation learning rate, and \\\"\\\\delta\\\" denotes the global update gradient. We have added more detailed explanations in the main text to clarify these points.\", \"weakness_3\": \"\\\"theta_i\\\" represents the local model loaded on the client side. We have further refined the notation explanations and added a table of variable definitions in the appendix to enhance clarity.\", \"weakness_4\": \"The introduction of bounded variance for unit gradients is aimed at facilitating the derivation of Theorem 1, which demonstrates that generalization error and optimization error are geometrically amplified as the local interval expands. This assumption is crucial for supporting the rationale behind our motivation (Section 3). It is also consistent with the assumptions made in related works such as FedSAM and FedLESAM.\\n[1]. Qu Z, Li X, Duan R, et al. Generalized federated learning via sharpness aware minimization[C]//International conference on machine learning. PMLR, 2022: 18250-18280.\\n[2]. Fan Z, Hu S, Yao J, et al. 
Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization[C]//Forty-first International Conference on Machine Learning.\", \"weakness_5\": \"We appreciate your concern regarding the IID testing. In response, we have initiated comprehensive IID testing to address this issue. Specifically, we show that by adjusting the hyperparameters, the performance of FedTOGA can be restored to its original version, ensuring that it will not perform worse than existing algorithms.\\nFurthermore, we would like to emphasize that our comparison scenarios strictly adhere to the Non-IID experimental settings used in FedSMOO, FedSpeed, and FedLESAM. This alignment with established experimental protocols allows us to compare our results directly with the original experimental data, thereby ensuring the reliability and validity of our findings.\", \"weakness_6\": \"Thank you for your valuable comments on our paper. \\\"Standing on the shoulders of giants,\\\" our algorithm is indeed inspired by FedSMOO, FedCM, FedSpeed, and FedLESAM. We acknowledge that it is challenging to reasonably integrate existing technologies and provide further explanations. However, rethinking and combining established techniques from a completely new perspective is highly significant. Most importantly, our integration significantly reduces the resource requirements of local clients in both FedSMOO and FedLESAM while achieving better results.\\nIn addition, we have introduced the novel concept of Neighborhood perturbation for the first time. This innovative use of gradient caching optimizes the computation of perturbations, a solution that can be effectively applied to existing SAM-based algorithms. For more details, please refer to Appendix B.7. 
After implementing this technology, there has been a significant improvement in the convergence speed and accuracy of FedSpeed and FedSMOO.\\n\\nWe would also like to kindly remind you that the name of our algorithm is FedTOGA, not FedTOGO.\\n\\nWe appreciate your concerns and hope that our response adequately addresses them and contributes to the further refinement of our manuscript. Should you have any additional questions or require further clarification, please do not hesitate to let us know.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
9PYCz4cDuZ
Theoretical Aspects of Bias and Diversity in Minimum Bayes Risk Decoding
[ "Hidetaka Kamigaito", "Hiroyuki Deguchi", "Yusuke Sakai", "Katsuhiko Hayashi", "Taro Watanabe" ]
Text generation commonly relies on greedy and beam decoding that limit the search space and degrade output quality. Minimum Bayes Risk (MBR) decoding can mitigate this problem by utilizing automatic evaluation metrics and model-generated pseudo-references. Previous studies have conducted empirical analyses to reveal the improvement by MBR decoding, and reported various observations. However, despite these observations, the theoretical relationship between them remains uncertain. To address this, we present a novel theoretical interpretation of MBR decoding from the perspective of bias-diversity decomposition. We decompose errors in the estimated quality of generated hypotheses in MBR decoding into two key factors: *bias*, which reflects the closeness between utility functions and human evaluations, and *diversity*, which represents the variation in the estimated quality of utility functions. Our theoretical analysis reveals the difficulty in simultaneously improving both bias and diversity, and highlights the effectiveness of increasing diversity to enhance MBR decoding performance. This analysis verifies the alignment between our theoretical insights and the empirical results reported in previous work. Furthermore, to support our theoretical findings, we propose a new metric, pseudo-bias, which approximates the bias term using gold references. We also introduce a new MBR approach, Metric-augmented MBR (MAMBR), which increases diversity by adjusting the behavior of utility functions without altering the pseudo-references. Experimental results across multiple NLP tasks show that the decomposed terms in the bias-diversity decomposition correlate well with performance, and that MAMBR improves text generation quality by modifying utility function behavior. Our code will be available at https://github.com/[Anonymized].
[ "Minimum Bayes Risk (MBR) decoding", "Minimum Bayes risk (MBR) decoding", "Minimum Bayes Risk decoding", "Minimum Bayes risk decoding", "MBR decoding" ]
Reject
https://openreview.net/pdf?id=9PYCz4cDuZ
https://openreview.net/forum?id=9PYCz4cDuZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zVvwLooOhI", "xu96FGCmia", "p6KMUGI5FP", "kEAfk7BAVD", "j9aVTrPz3P", "fVCxUm748D", "XI0iL7g4FL", "QAwFS76EqW", "Q0sXTBjr8s", "PfVHC4GDRw", "LjFMNTG1ef", "HbLPmnBHcE", "3ZxKRh5urN", "3F8E20Iikl" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1733292794223, 1730974356409, 1730781680556, 1733292767841, 1730684740499, 1733080259627, 1730670281054, 1733079654455, 1733292722017, 1733079748081, 1737524217194, 1734644552644, 1733292668976, 1733080433512 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Submission12816/Reviewer_VnrG" ], [ "ICLR.cc/2025/Conference/Submission12816/Reviewer_bzKq" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Submission12816/Reviewer_zsMt" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Submission12816/Reviewer_Fq8u" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12816/Area_Chair_Uzfx" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ], [ "ICLR.cc/2025/Conference/Submission12816/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reminder\", \"comment\": \"We believe that the reason you are not responding is not because you irresponsibly abandoned your role as a reviewer but because the concerns regarding this paper have been resolved. 
Therefore, if the concerns have indeed been resolved, we kindly request you to update your score regarding Soundness, Presentation, Contribution, and Overall Rating to reflect the outcome that no concerns remain, as is typically expected in the role of a reviewer after this discussion period.\"}", "{\"summary\": \"This work aims at analyzing the characteristics of Minimum Bayes Risk (MBR) decoding through a bias-diversity/variance decomposition. On various tasks (e.g., MT, summarization, image captioning) and with a collection of sampling methods for generating pseudo-references, the authors show correlations between the bias, diversity, and overall MSE measures in MBR and the task performance. The authors argue for the importance of the diversity/variance measure in particular and propose a MAMBR approach that uses multiple loss functions to enhance MBR decoding.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper tackles an interesting problem of analyzing and enhancing MBR decoding. The experiments were conducted with multiple models, sampling methods, and tasks, ranging from pure text to multimodal scenarios. The writing is overall clear.\", \"weaknesses\": \"(1) Eq. (8) has counterintuitive namings. The L.H.S. of the equation corresponds more to the usual \\\"bias\\\" term and the current bias term on the R.H.S. corresponds more to the usual \\\"MSE\\\" term. Using the regular terminology, isn't the equation just MSE = bias^2 + variance?\\n\\n(2) The work motivates with human-estimated quality (f hat) throughout the work, but uses a pseudo-bias that measures the metric/loss to gold references. The correlation between the pseudo-bias and actual human-estimated quality is not discussed.\\n\\n(3) Meaningfulness of Figure 1. Since the pseudo-biases are direct functions of the gold references, the correlation between them and the task performance (also based on gold references) is less interesting. 
The diversity/variance measure can be interesting, but the correlation is also weaker. Also, for all of the reported correlation numbers, there are no associated p-values. \\n\\n(4) Effectiveness of MAMBR. The proposed MAMBR simply uses multiple rather than one metric/loss in the MBR calculation. The results in Table 1, 2 and 3 consistently show no or extremely small improvement of MAMBR over regular MBR decoding.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"### tldr;\", \"This paper provides valuable theoretical insights through its new framework decomposing bias and diversity in MBR decoding's quality estimation errors. The theoretical contribution is significant and well-presented. However, the empirical section needs improvement. The experimental results are unclear and would benefit from better explanation and presentation. I would appreciate it if the authors could address the specific concerns raised in the Weakness section.\", \"### Summary\", \"This paper presents a bias-diversity decomposition of the quality estimation error in Minimum Bayes Risk decoding.\", \"The quality estimation error measures the discrepancy between the estimated quality using MBR decoding and the human judgement.\", \"Based on this decomposition the authors provide a novel theoretical interpretation of MBR decoding.\", \"The bias term captures the closeness between the utility function and human evaluations, i.e. the closeness between the human estimated quality and the quality estimated with a utility function.\", \"The diversity term captures the variation in the estimated quality across different utility functions. 
Surprisingly, this has a negative sign, suggesting that more diverse generations lead to lower error.\", \"Background on MBR:\", \"In MBR decoding the goal is not to find the most probable generation, as is the case in typical beam-search decoding, but rather to find a generation that minimizes the expected risk for a given loss function and true posterior distribution.\", \"In practice, instead of expected risk, expected utility is computed by taking 1-risk. The expected utility is computed by comparing a generation to all other generation samples. Intuitively, MBR decoding ends up doing consensus decoding, i.e. picking generations which on average are most similar to other generations.\", \"The authors propose a new metric, pseudo-bias, to approximate the bias term using gold references since human judgements are expensive to obtain.\", \"The authors also propose a new MBR decoding approach called Metric-augmented MBR which increases diversity by adjusting the behavior of utility functions without altering the pseudo-references.\", \"In Section 3.3 the authors provide an interpretation of this theoretical decomposition and its relation to prior studies.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"### Strengths\", \"The interpretation provided in Section 3.3 is particularly nice as it provides additional evidence from prior work. I also like the explicit calling out of the \\\"Unexplored Aspect\\\", which is later discussed in Sections 4.2 and 5.4.\", \"I admire the scholastic writing rigor of the paper. The authors have generally cited prior work well going back decades, a practice which is commonly missing in recent ML literature. 
However, some more details on Minimum Bayes Risk Decoding could further strengthen the paper and make it self-contained.\", \"The results of the experiment in Section 5.4, adjusting the diversity term by varying model parameters and pseudo-references respectively, are convincing.\"], \"weaknesses\": [\"### Weakness\", \"Section 5.2:\", \"Based on the theoretical decomposition, a reader might expect the overall MSE to have the highest correlation with performance, followed by overall bias and overall diversity. It is not clear why the one best bias or one best MSE correlates more strongly with performance than overall MSE. This does not provide strong evidence for the theoretical decomposition presented.\", \"I found Figure 2 to be confusing. For example, consider this sentence: \\\"The results show that while ancestral sampling exhibits the highest bias, except in the case of the SAMSum dataset, it sometimes outperforms other sampling methods owing to its greater diversity.\\\" This statement is clearly false based on the plots. For many plots bias and diversity move in the same direction, as is predicted by the theoretical relationship, but for many plots they move in opposite directions. For example, consider the plot for the MSCOCO dataset. The overall bias and the one best bias go down while the overall diversity increases.\", \"Going beyond the math, at an intuitive level, it is difficult to interpret the diversity term. From my understanding, MBR decoding tries to select generations which on average are similar to most other generations. However, the diversity term suggests that having diverse generations actually leads to better estimation of the quality error and thereby better performance, even though MBR is trying to do exactly the opposite. 
It is difficult to reconcile this tension, and some more discussion building the intuition around this is lacking in the paper.\"], \"questions\": \"Please refer to the concerns raised above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder\", \"comment\": \"We believe that the reason you are not responding is not because you irresponsibly abandoned your role as a reviewer but because the concerns regarding this paper have been resolved. Therefore, if the concerns have indeed been resolved, we kindly request you to update your score regarding Soundness, Presentation, Contribution, and Overall Rating to reflect the outcome that no concerns remain, as is typically expected in the role of a reviewer after this discussion period.\"}", "{\"summary\": \"This paper introduces a bias-diversity decomposition for Minimum Bayes Risk Decoding. The bias reflects the distance between the utility functions and human evaluations, while diversity represents the variation. The analysis and experiments show that a lower bias and a higher diversity will lead to better model performance. It also proposes a Metric-augmented MBR, which uses multiple utility functions with various parameters to enhance diversity.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. It provides a bias-diversity view of MBR, which highlights the importance of several factors in MBR, such as the quality of the utility functions / pseudo references.\\n2. The analysis is verified by the correlation between the bias / diversity of MBR and the model performance. It further investigates the influence of different sampling methods and the size of pseudo references.\", \"weaknesses\": \"1. The main weakness is the novelty and the contribution is limited. Although it proposes a bias-diversity decomposition of MBR, such decomposition is straightforward. 
The observations listed in Section 3.3 have mostly been covered in other literature, resulting in limited new insights.\\n2. The Performance of MAMBR with ancestral sampling/epsilon sampling/beam decoding is not significantly better than the baseline model, which weakens the paper's claim.\", \"questions\": \"For all three tasks, the utility function and the evaluation metric are the same. What will Figure 1 be if the utility function is different from the evaluation metric?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Weaknesses and Questions\", \"comment\": \"Thank you for reviewing our paper and sharing your feedback. We have updated our manuscript based on your opinion.\\n\\n## Responses to Weaknesses\\n\\n> The main weakness is the novelty and the contribution is limited. Although it proposes a bias-diversity decomposition of MBR, such decomposition is straightforward. The observations listed in Section 3.3 have mostly been covered in other literature, resulting in limited new insights.\\n\\nAs explained in Line 145 to 147, the decomposition for the prediction of ensembled estimators shown by Krogh & Vedelsby, 1994 [1] differs from the well-known decomposition for the prediction of a single estimator shown by Geman et al., 1992 [2].\\n\\n- [1] Krogh, Anders, and Jesper Vedelsby. \\\"Neural network ensembles, cross validation, and active learning.\\\" Advances in neural information processing systems 7 (1994).\\n- [2] Geman, Stuart, Elie Bienenstock, and Ren\\u00e9 Doursat. \\\"Neural networks and the bias/variance dilemma.\\\" Neural computation 4.1 (1992): 1-58.\\n\\nEven though MBR decoding has a long history in NLP, nobody has found that it can be explained by the decomposition of Krogh & Vedelsby, 1994 [1]. We believe this viewpoint covers the empirically observed characteristics introduced in subsection 3.3. 
If you think this decomposition on MBR decoding is straightforward and the novelty is limited, could you show papers explaining MBR decoding based on Geman et al., 1992 [2] or Krogh & Vedelsby, 1994 [1]?\\n\\n> The Performance of MAMBR with ancestral sampling/epsilon sampling/beam decoding is not significantly better than the baseline model, which weakens the paper's claim.\\n\\nAs explained in subsection 3.3.2, our purpose is to show the exchangeability of the diversity of pseudo references and utility functions' output. Thus, if there are no changes between the results of MAMBR and those of diversified pseudo references, it supports our theoretical analysis. Therefore, if you feel there are no differences in the experimental results, it means that you recognize our experiment's successful results that show the exchangeability of the diversity of pseudo references and utility functions' output.\\n\\n## Responses to Questions\\n\\n> For all three tasks, the utility function and the evaluation metric are the same. What will Figure 1 be if the utility function is different from the evaluation metric?\\n\\nWe have explained the case using BLEURT, which is different from the utility functions, as the metric for measuring performance in Appendix D with Figure 3. This result shows similar tendencies to Figure 1.\"}
Results show marginal gains in performance across different tasks, while using a similar number of sampled pseudo-references.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper decomposes the MBR decoding into cleaner interpretations of bias and diversity, which can shed light on the MBR expression. The resulting expression appears fairly elegant, especially if it explains existing empirical observations, which the paper linked to, where increasing diversity was shown to be helpful.\", \"Their results show that increasing the diversity through MAMBR can result in better performance across 3 different tasks and various numbers of models without the need for any further references.\"], \"weaknesses\": [\"As I understand it, the approach uses pseudo-references, which are then used to approximate the human evaluation scores (by averaging over different references), which may impact the results. The generated hypotheses and the references appear to be generated from the same model just using different sampling strategies, which may result in a lot of similarity between the two systems and underestimate the bias for different approaches in Figure 2 compared to if independent human evaluation scores were used to assess the bias.\", \"The results in Tables 1, 2 and 3 seem quite marginal, and there hasn\\u2019t been any quantification of the significance of the results.\"], \"questions\": [\"I personally found parts of the experimental section a bit confusing. Though the theoretical aspect was mostly quite clear and made sense, from Section 5 onwards, I found the results a bit less clear, possibly due to my having less familiarity with previous works. I wasn\\u2019t clear on what was used as the references, the hypotheses, and how performance was measured. As I understand it, the same models generated both the hypotheses and the references which are used in MBR decoding (just with different sampling approaches). Is this correct? 
Also, are there ground-truth human evaluation scores available, which performance is measured against, or are all results pseudo results using equation 10?\", \"In MAMBR you stated that \\u201cwe train evaluation metrics with different initial random seeds to generate \\u0398 as a set of diverse model parameters\\u201d but the evaluation metrics used appear to be external packages (e.g. COMET, BERTScore) that were leveraged. How exactly did you then implement MAMBR?\", \"In line 93 you say \\u201cHere, instead of using \\u2026\\u201d Is the \\u201chere\\u201d intentional, describing the previous equation, or just introducing the new approach of using manual selection of the best hypothesis (equation 3)? There are a few more instances of this in the earlier sections.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}
\\\"Neural networks and the bias/variance dilemma.\\\" Neural computation 4.1 (1992): 1-58.\\n\\n> (2) The work motivates with human-estimated quality (f hat) throughout the work, but uses a pseudo-bias that measures the metric/loss to gold references. The correlation between the pseudo-bias and actual human-estimated quality is not discussed.\\n\\nTo deeply understand how the pseudo-bias correlates to human evaluation, we have added the actual correlation of the metrics we used to calculate the pseudo-bias into the footnote of subsection 4.2. As explained in this part, the Pearson correlations of COMET (Unbabel/wmt22-comet-da) and BERTScore with microsoft/deberta-xlarge-mnli are 0.990 on the system-level task for English to German [3] and 0.7781 (https://github.com/Tiiiger/bert_score) on WMT16 to English [4], respectively.\\n\\n- [3] Freitag, Markus, et al. \\\"Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent.\\\" Proceedings of the Eighth Conference on Machine Translation. 2023.\\n- [4] Bojar, Ond\\u0159ej, et al. \\\"Results of the wmt16 metrics shared task.\\\" Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers. 2016.\\n\\n> (3) Meaningfulness of Figure 1. Since the pseudo-biases are direct functions of the gold references, the correlation between them and the task performance (also based on gold references) is less interesting. The diversity/variance measure can be interesting, but the correlation is also weaker. Also, for all of the reported correlation numbers, there are no associating p-values.\\n\\nTo clarify how the correlations are significant, we have conducted the statistical significance test and underlined the correlation scores with statistical significance with p<0.05. Furthermore, instead of the previous setting All, we report more mathematically appropriate averaged correlations over all datasets as Avg. in Figure 1 using Fisher transformation [5]. 
Corresponding to these modifications, we have added further explanations to the caption of Figure 1 and the paragraph named Settings in subsection 5.2.\\n\\n[5] David M Corey, William P Dunlap, and Michael J Burke. Averaging correlations: Expected values and bias in combined pearson rs and fisher\\u2019s z transformations. The Journal of general psychology, 125(3):245\\u2013261, 1998.\\n\\n> (4) Effectiveness of MAMBR. The proposed MAMBR simply uses multiple rather than one metric/loss in the MBR calculation. The results in Table 1, 2 and 3 consistently show no or extremely small improvement of MAMBR over regular MBR decoding.\\n\\nAs explained in subsection 3.3.2, our purpose is to show the exchangeability of the diversity of pseudo references and utility functions' output. Therefore, if there are no changes between the results of MAMBR and those of diversified pseudo references, this supports our theoretical analysis and shows the success of our experimental results.\"}", "{\"title\": \"Reminder\", \"comment\": \"We believe that the reason you are not responding is not because you irresponsibly abandoned your role as a reviewer but because the concerns regarding this paper have been resolved. Therefore, if the concerns have indeed been resolved, we kindly request you to update your score regarding Soundness, Presentation, Contribution, and Overall Rating to reflect the outcome that no concerns remain, as is typically expected in the role of a reviewer after this discussion period.\"}", "{\"title\": \"Responses to Weaknesses\", \"comment\": \"Thank you for reviewing our paper and sharing your feedback. We have updated our manuscript based on your opinion.\\n\\n>Section 5.2: Based on the theoretical decomposition a reader might expect the overall MSE to have highest correlation with performance; followed by overall bias and overall diversity. It is not clear as to why the one best bias or one best MSE correlates more strongly with performance than overall MSE. 
This does not provide strong evidence for the theoretical decomposition presented.\\n\\nAs explained in section 2, subsection 3.1, and lines from 151 to 153 in subsection 3.2, MBR decoding supports estimating the precise quality of all candidates to choose the best one from them. Thus, both one best and overall MSE are important. However, since the final results of the MBR decoding are calculated by the one best result, One Best Bias is also important. We explained this finding in lines 420 to 422. To easily read the tendency of Figure 1, we have added the results of significance tests as underlined scores explained in the caption of Figure 1 and the paragraph named Settings in subsection 5.2. Moreover, instead of the previous setting All, we report more mathematically appropriate averaged correlations based on Fisher transformation [1] over all datasets as Avg. in Figure 1.\\n\\n[1] David M Corey, William P Dunlap, and Michael J Burke. Averaging correlations: Expected values and bias in combined pearson rs and fisher\\u2019s z transformations. The Journal of general psychology, 125(3):245\\u2013261, 1998.\\n\\n> I found Figure 2 to be confusing. For e.g. consider this sentence \\\"The results show that while ancestral sampling exhibits the highest bias, except in the case of the SAMSum dataset, it sometimes outperforms other sampling methods owing to its greater diversity.\\\" This statement is clearly false based on the plots. For many plots bias and diversity move in the same direction as is predicted by the theoretical relationship but for many plots they move in opposite directions. For e.g. consider the plot for MSCOCO dataset. The overall bias and the one best bias go down while the overall diversity increases.\\n\\nFirst of all, as explained in Line 368 of subsection 5.2, lower bias and MSE are better for performance. 
To easily understand that, we have added an explanation about that in the caption of Figure 2.\\n\\n> Going beyond the math, at an intuitive level, it is difficult to interpret the diversity term. From my understanding, MBR decoding tries to select generations which on average are similar to most other generations. However, the diversity terms suggests that having diverse generations actually leads to better estimation of quality error and thereby better performance. Though MBR is trying to do something exactly opposite to this. It is difficult to reconcile this tension and some more discussion in building the intuition around this is lacking in the paper.\\n\\nTo support the intuitive understanding, we have added explanations about the importance of diversity as lines from 205 to 208 in subsection 3.3.2 based on Jinnai et al. (2024) [2].\\n\\n[2]: Yuu Jinnai, Ukyo Honda, Tetsuro Morimura, and Peinan Zhang. Generating diverse and high-quality texts by minimum Bayes risk decoding. In Lun-Wei Ku, Andre Martins, and Vivek Sriku- mar (eds.), Findings of the Association for Computational Linguistics ACL 2024, pp. 8494\\u20138525, Bangkok, Thailand and virtual meeting, August 2024a. Association for Computational Linguistics.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The authors study the MBR decoding objective, comparing the objective from maximizing the pseudo-reference similarity to a human-estimated ground truth. The core object of study is a bias-diversity term (not to be mistaken with bias variance..) 
where the bias term corresponds to the per-sample error between the similarity f and the human estimate \\\\hat{u} and the diversity term is the same thing but with \\\\bar{u} the mean.\\n\\nThe paper studies an important problem (MBR decoding, and understanding when and why such techniques help), and the experimental coverage is nice and extensive.\\n\\nAs reviewers note, however, the theory part doesn't seem quite as strong (which especially seems important given the title). Reviewer VnrG is not right that this is exactly bias-variance, but they are right that the term that's named \\\"bias\\\" is really much closer to a traditional MSE term. The 'diversity' term here is really showing that you need to match the variance you see in the MSE term (otherwise you're not distributionally matching u, instead you might be matching only the mode) and this leads to some counterintuitive confusion like the one mentioned by reviewer bzKq. Finally, I don't necessarily think reviewer zsMt is right that this exact result appears in other works, but I think the argument that the decomposition itself is technically simple (appendix A and B are fairly standard manipulations of a quadratic sum) is right, and that the paper can't quite argue that the decomposition itself is a major technical achievement.\", \"additional_comments_on_reviewer_discussion\": \"The authors made some writing changes during the rebuttal, though I think the main complaints placed by the reviewers that I saw were closer to more of the underlying framing of the bias-diversity tradeoffs posed by the authors.\"}", "{\"title\": \"Reminder\", \"comment\": \"We believe that the reason you are not responding is not because you irresponsibly abandoned your role as a reviewer but because the concerns regarding this paper have been resolved. 
Therefore, if the concerns have indeed been resolved, we kindly request you to update your score regarding Soundness, Presentation, Contribution, and Overall Rating to reflect the outcome that no concerns remain, as is typically expected in the role of a reviewer after this discussion period.\"}", "{\"title\": \"Responses to Weaknesses and Questions\", \"comment\": \"Thank you for reviewing our paper and sharing your feedback. We have updated our manuscript based on your opinion.\\n\\n## Responses to Weaknesses\\n\\n> As I understand it, the approach uses pseudo-references, which are then used to approximate the human evaluation scores (by averaging over different references), which may impact the results. The generated hypothesis and the references appear to be generated from the same model just using different sampling strategies, which may result in a lot of similarity between the two systems and underestimate the bias for different approaches in Figure 2 compared to if independent human evaluation scores were used to assess the bias.\\n\\nBasically, MBR decoding uses the hypothesis and pseudo-references generated by the same model. Thus, we follow this setting to investigate the behavior of MBR decoding in common situations. Furthermore, using more than two models is a so-called model combination and totally out-of-scope of our paper. Therefore, we don't refer to this setting in our paper. However, to include the limitation of MBR decoding, we explain the necessity of investigating the use of hypothesis in the limitation of the Appendix part.\\n\\n> The results in Table 1, 2 and 3 seem quite marginal, and there hasn\\u2019t been any quantification of the significance of the results.\\n\\nAs explained in subsection 3.3.2, we aim to show the exchangeability of the diversity of pseudo references and utility functions' output. 
Therefore, if there are no changes between the results of MAMBR and those of diversified pseudo references, this supports our theoretical analysis and shows the success of our experimental results.\\n\\n## Responses to Questions\\n\\n> I personally found parts of the experimental section a bit confusing. Though the theoretical aspect was mostly quite clear and made sense, from section 5 onwards, I found the results a bit less clear, possibly due to my having less familiarity with previous works. I wasn\\u2019t clear on what was used as the references, the hypothesis and how performance was measured. As I understand it, the same models generated both the hypothesis and the references which are used in MBR decoding (just with different sampling approaches). Is this correct? Also, are there ground-truth human evaluation scores available, which performance is measured against, or are all results pseudo results using equation 10?\\n\\nAs explained in section 5.1, we used the commonly used datasets in Machine Translation, Text Summarization, and Image Captioning to evaluate the performance of MBR decoding. Thus, we used their test split for the evaluation using the reference-based metrics COMET and BERTScore. Also, we used the same specific models for each task to generate both hypothesis and pseudo references, as is the standard setting in MBR decoding. Regarding the correlation to human evaluation results of COMET and BERTScore, we have added explanations on lines 319 to 322. As explained in section 5.2, Equation 10 is used to calculate only overall and one-best bias in Figures 1 and 2.\\n\\n> In MAMBR you stated that \\u201cwe train evaluation metrics with different initial random seeds to generate \\u0398 as a set of diverse model parameters\\u201d but the evaluation metrics used appear to be external packages (e.g. COMET, BERTScore) that were leveraged. 
How exactly did you then implement MAMBR?\", \"In line 93 you say \\u201cHere, instead of using \\u2026\\u201d Is the here intentional, and describing the previous equation, or just introducing the new approach of using manual selection of the best hypothesis (equation 3). There are a few more instances of this in the earlier sections\\n\\nThank you so much for pointing out the need for the details. To support this explanation, we have added citations of conventional approaches, assuming that human judgment results are the ideal decision in MBR decoding.\"}" ] }
9P5I9zTUAd
Mixture-of-Instructions: Aligning Large Language Models via Mixture Prompting
[ "Bowen Xu", "ShaoyuWu", "Kai Liu", "lulu hu" ]
With the proliferation of large language models (LLMs), the comprehensive alignment of such models across multiple tasks has emerged as a critical area of research. Existing alignment methodologies primarily address a single task, such as multi-turn dialogue, coding, mathematical problem-solving, and tool usage. However, AI-driven products that leverage language models usually necessitate a fusion of these abilities to function effectively in real-world scenarios. Moreover, the considerable computational resources required for proper alignment of LLMs underscore the need for a more robust, efficient, and encompassing approach to multi-task alignment, ensuring improved generative performance. In response to these challenges, we introduce a novel technique termed Mixture-of-Instructions (MoI), which employs a strategy of instruction packing combined with diverse system prompts to boost the alignment efficiency of language models. We have also compiled a diverse set of seven benchmark datasets to rigorously evaluate the alignment efficacy of the MoI-enhanced language model. Our methodology was applied to the open-source Qwen-7B-chat model, culminating in the development of Qwen-SFT-MoI. This enhanced model demonstrates significant advancements in generative capabilities across coding, mathematics, and tool use tasks.
[ "language model", "alignment", "supervised fine-tuning" ]
Reject
https://openreview.net/pdf?id=9P5I9zTUAd
https://openreview.net/forum?id=9P5I9zTUAd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "znUgDlWyFf", "xvJGGRtpwq", "qN0CVm9FXu", "pcBi9t5nw0", "m64miipmox", "jVuGQfVp8A", "jOjBj7ZvJK", "hvqzJrWesf", "f6fiXlInKG", "bRgAMTTOTx", "YlGB25kIXY", "TlNk7X1gxK", "R0QY8Odgu0", "Q9xrmVefaJ", "L77PCr7f7S", "GvVGeR61F5", "GKpfIl9Wfq", "GIx2yQ417n", "GF2iAX7lko", "EtnEEaCbkc", "EPnbrpRqTH", "Dn7PvDtgIi", "CVXBCYuMVP", "9pRnua8GbI", "7v0BuEMUR9", "7W8P4eRohn", "6HLV1LpHW4", "5d63utleaN", "52zPrr35Qb", "4NRI1fbfiH", "2XChOknzEn", "0kTxJocKDy" ], "note_type": [ "official_review", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730618231448, 1732690004899, 1734890998834, 1732175498324, 1730681958664, 1732606532688, 1732526129672, 1732353473920, 1732774465736, 1732710652136, 1732174428785, 1737523712352, 1732606472116, 1732693414480, 1730620347730, 1732659254596, 1732773861812, 1732530528502, 1732176526655, 1732172371825, 1730702694848, 1732511537477, 1732248032873, 1732773887262, 1732353846463, 1729520383016, 1732604981199, 1732689897576, 1732354326213, 1732806195731, 1732606649834, 1732845353504 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_bxFP" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Area_Chair_FYX8" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_DmtX" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_Dhj1" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_ZeTr" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_ZeTr" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_DmtX" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_Dhj1" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_nEEX" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_nEEX" ], [ "ICLR.cc/2025/Conference/Submission5536/Reviewer_nEEX" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ], [ "ICLR.cc/2025/Conference/Submission5536/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces Mixture-of-Instructions (MoI), a novel methodology aimed at enhancing the alignment of large language models (LLMs) across multiple tasks through a combination of instruction balanced packing and diverse system prompts. Traditional alignment methods focus on single tasks, which limits the effectiveness of LLMs in real-world, multi-faceted applications. 
To address this, the authors propose MoI, which assigns unique system prompts to different tasks and integrates them into a unified instruction set, facilitating comprehensive multi-task training. The approach is evaluated using seven benchmark datasets encompassing mathematics, programming, tool usage, common sense, and dialogue, demonstrating significant improvements in generative performance. The methodology is applied to the open-source Qwen-7B-chat model, resulting in the enhanced Qwen-SFT-MoI model, which outperforms existing models across various benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**1. Important Research Questions**\\n\\nThe paper addresses the critical challenge of multi-task alignment in LLMs, which is increasingly relevant given the diverse applications of AI-driven products. By focusing on harmonizing various capabilities\\u2014such as coding, mathematical reasoning, and multi-turn dialogues\\u2014the study tackles the complex interplay between different task domains, which is essential for creating versatile and reliable language models.\\n\\n**2. Rich Experiments and Evaluations**\\n\\nThe authors conduct extensive experiments across seven diverse benchmark datasets, including MT-Bench, HumanEval, MBPP, MATH, GSM8K, MMLU, and T-EVAL. This comprehensive evaluation framework ensures that the proposed MoI methodology is rigorously tested across multiple domains, demonstrating its efficacy in improving generative performance. Additionally, ablation studies and comparisons with existing models underscore the robustness of the approach.\\n\\n**3. Carefully Crafted Implementation Details**\\n\\nThe introduction of chunk-based attention masking is a meticulous enhancement that addresses the issue of attention cross-contamination during multi-task training. 
Coupled with balanced sampling techniques, these implementation strategies effectively mitigate dataset biases and ensure that the model maintains high performance across all tasks. The detailed explanation of the loss function modifications further showcases the depth of the methodological advancements.\", \"weaknesses\": \"**1. Lack of Explanation on MoI's Absence of System Leading to Performance Degradation**\\n\\nThe paper briefly touches upon the performance decline observed when not using the chunk-based attention masking but fails to delve deeply into the underlying reasons. A more thorough analysis or theoretical justification for why the absence of a system prompt within MoI leads to performance degradation would provide greater clarity and strengthen the argument for the proposed solution.\\n\\n**2. Insufficient Exploration of Long Input and Output Scenarios**\\n\\nWhile the study effectively evaluates MoI on various tasks, it overlooks scenarios that involve prolonged inputs and outputs, such as Retrieval-Augmented Generation (RAG) and long-form text creation. Including these cases would offer a more comprehensive understanding of how MoI performs under conditions that demand handling of extensive contextual information and sustained generation over longer sequences.\\n\\n**3. Lack of Evaluation on Larger-Scale Models**\\n\\nThe experiments are primarily conducted on the Qwen-7B-chat model and smaller variants. Testing the MoI methodology on larger models with more parameters (like Qwen-72B or Qwen-MoE) would provide insights into its scalability and effectiveness in more powerful architectures. This omission leaves a gap in understanding how MoI can be generalized to state-of-the-art LLMs with significantly higher capacities.\", \"questions\": \"1. See the weakness section. 
Maybe elaborate more on your opinions about point 1.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear Reviewer ZeTr,\\n\\nSince the discussion period has been extended to December 3rd, but November 27th is the last day that authors may upload a revised PDF, we hope you can review our new version of the paper and the additional experimental content to see if they have addressed the weaknesses and questions you mentioned in your rebuttal.\\n\\nWe look forward to your response.\\n\\nThank you.\"}", "{\"metareview\": \"This paper proposes an approach, Mixture-of-Instructions (MoI), that aims to align a model with several tasks simultaneously. The approach is based on instruction packing with diverse system prompts to represent and improve multiple downstream tasks during supervised fine-tuning. Authors test the approach on a new benchmark they compiled from existing benchmarks, and create the MoI version of the Qwen-7B-chat model. Reviewers highlight the contributions as the ease of applying the approach and comprehensive experimental results. The weaknesses include the limited novelty based on the incremental nature of the approach and limited experimentation, for example, the analysis on the single model.\", \"additional_comments_on_reviewer_discussion\": \"Rebuttal period included good discussion between the authors and the reviewers. Authors responded positively to the suggestions and included additional results with new models. However, there were still more questions unanswered for the reviewers at the end of the discussions.\"}", "{\"title\": \"Response to reviewer nEEX (part 1)\", \"comment\": \"Dear reviewer, thank you very much for your comments and professional advice. 
Based on your suggestions, we clarify some ambiguous parts of the paper and would like to provide the details as follows:\\n\\n**Weakness**:\\n\\nThe chunk-based attention, packing the data and unmasking attention across different tasks, is quite unreasonable from intuition. Isolated attention is for removing unrelated contexts and full attention is for efficiency. So what is the motivation of the chunk-based attention? Moreover, some related works (Zhao et al, 2024) that this paper has even cited have pointed out that similar samples should be packed together to enhance the final performance. However, this work, chooses the opposite way, unmask the attention across different tasks in a chunk. What is the motivation and why is it better? This concern could become more important to me when I see the coding ability gets greatly enhanced by MoI compared to balanced packing, which is kindof magic to me.\\n\\n**Answer**:\\n\\nThe reason for incorporating chunk-based attention is that, as we progressed with the packed approach, it naturally led to some discussions about the effects of concatenating data for joint training on the model. In fact, we experimented with various modifications to the attention mask. One intuitive approach we considered was applying complex attention mask coverage to the concatenated data. However, we ultimately opted for this chunk-based attention approach. If you are genuinely interested in this process, please refer to Figure 8 in our appendix. This figure fully demonstrates the attention behavior of a Qwen-7B-chat model when reasoning with data formed by concatenating two pieces. You can see that a substantial amount of attention is focused on the first token, indicating considerable sparsity in attention. 
Our insight is that, since attention is not being fully utilized and is heavily focused on the first token, concatenating multiple pieces of data together and placing the default system prompt we intend to use at the start of this concatenated data aligns well with the sparse nature of attention.\\n\\n**Weakness**:\\n\\n I don't think the paragraph \\\"Why is MoI effective?\\\" can convince me. The paragraph only points out that the attention of MoI in SFT is significantly different from one in Chat, which is quite natural and normal (in my opinion) if we unmask the attention between different instructions. I expect more essential reasons, that is, why the altered attention distribution is more useful rather than more useless?\\n\\n\\n\\n**Answer**:\\n\\nThe original intent behind designing this experiment was influenced by the work \\\"A Mathematical Framework for Transformer Circuits.\\\" Initially, we aimed to use some Attention Score Maps and Case Studies to reveal the role of MoI. For example, after applying MoI, the model's Attention Score Map became denser, enhancing its ability to capture key information. However, we ultimately felt that choosing any particular way to present and interpret the results would be a form of cherry-picking. Instead, we decided to explore what our method actually brings to the language model by examining the mechanisms of both MLP and Attention Circuits. As you can see from the conclusions, after training with MoI, the distribution of Attention shifted more significantly compared to the normal SFT model. This shift is not something that can be caused by SFT schemes like Packed and Balanced. When replacing a Chat model's Attention with the weights of an SFT model, all SFTs improve the model's performance except MoI, where the Attention distinctly degrades the original model's ability. This indicates that the model has indeed learned a form of Attention expression different from that of the original model. 
If you have any additional experimental ideas regarding this, please feel free to share them.\"}", "{\"summary\": \"This paper addresses a practical issue on multi-task supervised fine tuning (SFT). In order the model to be trained propoerly on all the tasks they propose packing the instructions belonging to different tasks in a balanced fashion. This results in superior performance on a number of tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This is a clever and practical idea. The impact of this is clearly presented and I appreciate the comprehensive paper with many analyses.\", \"weaknesses\": \"However I feel like it may not constitute a full ICLR paper. Instead in the model system card, this could have been mentioned in a paragraph. For example Llama or Olmo model system cards have similar technical details like this.\", \"questions\": [\"Does the idea work for existing open SFT tuned models like Llama\", \"Does the idea apply to all model sizes and task categories?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear Reviewer bxFP,\\n\\nSince the discussion period has been extended to December 3rd, but November 27th is the last day that authors may upload a revised PDF, we hope you can review our new version of the paper and the additional experimental content to see if they have addressed the weaknesses and questions you mentioned in your rebuttal.\\n\\nWe look forward to your response.\\n\\nThank you.\"}", "{\"title\": \"Response to reviewer Dhj1\", \"comment\": \"**Question**:\", \"ln099\": \"Are there any previous studies that highlight the significance of system prompts? If so, you should reference them and compare your method to theirs to demonstrate its superiority. 
If you're the first to do this, it's also worth mentioning.\\n\\n**Answer**:\\n\\n[1] explores how incorporating character definitions in system prompts can assist language models in achieving multi-domain alignment while [2] ropose a novel persona-driven data synthesis methodology for SFT. These studies demonstrate the role of character definitions in knowledge generation and task completion. [3] presents the first comprehensive cross-supervision alignment experiment in the role-play domain, revealing that the intrinsic capabilities of LLMs confine the knowledge within role-play. \\n\\nBased on these studies, we infer that the knowledge and expression of language models are related to role settings. Therefore, we attempt to modify roles to enable the language model to learn new knowledge during SFT\\u3002\\n\\n[1]Wang R, Mi F, Chen Y, et al. Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models[J]. arXiv preprint arXiv:2403.02756, 2024.\\n\\n[2]Ge T, Chan X, Wang X, et al. Scaling synthetic data creation with 1,000,000,000 personas[J]. arXiv preprint arXiv:2406.20094, 2024.\\n\\n[3]Lu K, Yu B, Zhou C, et al. Large language models are superpositions of all characters: Attaining arbitrary role-play via self-alignment[J]. arXiv preprint arXiv:2401.12474, 2024.\\n\\n\\n**Question**:\", \"ln038\": \"How did you come to the conclusion that the model generates less than ideal solutions due to conflicts in prompt-induced knowledge? Is there any research or experimental evidence to back this up?\", \"ln085\": \"What is the approach you use to identify conflicts between new and old knowledge during SFT? 
I couldn't find this information in section 2.\", \"ln040\": \"Can you explain why modifying prompts can help in resolving knowledge conflicts?\\n\\n**Answer**:\\n\\nIn Figure 2, we demonstrate the issue of knowledge conflict by comparing how the attention of models trained with different system prompts changes on specific questions. In the original Chat model, attention highlights converge on words like \\\"Boyer\\\" and \\\"track of frequency.\\\" For the model trained with the \\\"You are a helpful assistant\\\" prompt, attention highlights the word \\\"iterate\\\" on the same question. However, for the model trained with the \\\"You are a programmer\\\" system prompt, attention highlights fall on \\\"Using the Boyer\\\" and \\\"keeping.\\\" Additionally, Figure 6 in the appendix documents the performance variance of the same Qwen-SFT-code model during inference with the \\\"helpful assistant\\\" and \\\"programmer\\\" prompts. This serves as further evidence of the model's differing performance under various roles and illustrates how our MoI method addresses this phenomenon. In the latest version of our paper, we have redrawn the significant attention score maps in Figure 2, which we consider noteworthy, to facilitate researchers in observing the performance changes of the model under different system prompts.\\n\\n\\n**Question**:\", \"ln470\": \"The main focus of this study is multi-task learning, but I don't see a thorough discussion or comparison with existing methods.\\n\\n**Answer**:\\n\\n[4] proposes concatenating multiple datasets during training and using focal loss to balance data bias among multiple tasks. \\n[5] proves that multi-task learning of LLMs leads to conflicts, while sequential training results in catastrophic forgetting. \\n[6] introduces Progressive Prompts, a continual learning method that learns a unique prompt for each new task while keeping the language model's parameters frozen. 
This approach prevents catastrophic forgetting and enables forward transfer by reusing information from prompts learned on previous tasks, serving as a good initialization for subsequent tasks.\\n\\nHowever, these studies have not addressed how to balance tasks and perform fine-tuning when language models are deployed as chat assistants that need to simultaneously meet four types of user needs: chatting, programming, mathematics, and tool invocation. In addition, we have even tested our approach in handling complex tasks where users request solving mathematical problems through code writing. This aspect has not been addressed in previous research on multitask learning with pure language models.\\n\\n[4]Liu B, Chen C, Gong Z, et al. Mftcoder: Boosting code llms with multitask fine-tuning[C]//Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2024: 5430-5441.\\n\\n[5]Dong G, Yuan H, Lu K, et al. How abilities in large language models are affected by supervised fine-tuning data composition[J]. arXiv preprint arXiv:2310.05492, 2023.\\n\\n[6]Razdaibiedina A, Mao Y, Hou R, et al. Progressive prompts: Continual learning for language models[J]. arXiv preprint arXiv:2301.12314, 2023.\\n\\n**We have added these references to the related work section in the new version of the paper.**\"}", "{\"title\": \"Response to reviewer bxFP\", \"comment\": \"Dear reviewer, thank you very much for your comments and professional advice. Based on your suggestions, we clarify some ambiguous parts of the paper and would like to provide the details as follows:\\n\\n**Weakness**:\\n\\n**Lack of Explanation on MoI's Absence of System Prompt Leading to Performance Degradation**\\nThe paper briefly touches upon the performance decline observed when not using the chunk-based attention masking but fails to delve deeply into the underlying reasons. 
A more thorough analysis or theoretical justification for why the absence of a system prompt within MoI leads to performance degradation would provide greater clarity and strengthen the argument for the proposed solution.\\n\\n**Answer**:\\nIn fact, our alignment work involves further aligning a model that has already been aligned, so we have also referenced some content related to fine-tuning and forgetting. We have added citations to these works [1,2] in the related work section of the latest version of the paper. Overall, Tables 7 and 8 in the paper have already explained in great detail what MoI (Mixture-of-Instructions) changes in the model and the role of chunk-based attention. From the results, the role of MoI is to enable the model to continuously learn complex concatenation instructions to maximize the changes in attention. The role of chunk-based attention is to help the model balance attention to ultra-long sequences and short sequences.\\n\\n[1] Zhang X, Wu J. Dissecting learning and forgetting in language model finetuning[C]//The Twelfth International Conference on Learning Representations. 2024.\\n\\n[2] Kotha S, Springer J M, Raghunathan A. Understanding catastrophic forgetting in language models via implicit inference[J]. arXiv preprint arXiv:2309.10105, 2023.\\n\\n**Question**:\\n\\nWhile the study effectively evaluates MoI on various tasks, it overlooks scenarios that involve prolonged inputs and outputs, such as Retrieval-Augmented Generation (RAG) and long-form text creation. Including these cases would offer a more comprehensive understanding of how MoI performs under conditions that demand handling of extensive contextual information and sustained generation over longer sequences.\\n\\n**Answer**:\\n\\nThank you for raising these meaningful questions. 
Here are the results of the additional experiments we conducted, which validate the RAG and long-text tasks using the RGB [4] and L-Eval [3] benchmarks, respectively:\\n\\n**RGB English Benchmark for Retrieval-Augmented Generation with Noise**\\n| Models | Noise Ratio 0 | Noise Ratio 0.2 | Noise Ratio 0.4 | Noise Ratio 0.6 | Noise Ratio 0.8 |\\n|-|-|-|-|-|-|\\n| ChatGPT (OpenAI 2022) | 96.33 | 94.67 | 94.00 | 90.00 | 76.00 |\\n| ChatGLM2-6B | 91.33 | 89.67 | 83.00 | 77.33 | 57.33 |\\n| Vicuna-7B-v1.3 | 87.67 | 83.33 | 86.00 | 82.33 | 60.33 |\\n| Qwen-7B-Chat | 94.33 | 91.67 | 91.00 | 87.67 | 73.67 |\\n| Qwen-7B-MoI | 93.33 | 90.00 | 89.67 | 86.33 | 71.67 |\\n\\n**L-Eval:**\\n\\n| Models | Coursera | GSM | QuALITY | TOEFL | CodeU | SFiction | Avg. |\\n|-|-|-|-|-|-|-|-|\\n| ChatGPT | 63.51 | 84.00 | 61.38 | 78.43 | 12.22 | 64.84 | 60.73 |\\n| Llama2-7b-chat | 29.21 | 19.00 | 37.62 | 51.67 | 1.11 | 60.15 | 33.12 |\\n| Qwen-7b-chat | 45.64 | 29.0 | 59.40 | 76.20 | 5.55 | 60.93 | 46.12 |\\n| Qwen-7b-MoI | 47.24 | 68.0 | 50.99 | 66.17 | 4.44 | 55.93 | 48.80 |\\n\\nFrom the results, we did not significantly enhance or diminish the model's RAG capability and long-text processing ability; rather, we altered the distribution of these capabilities. This is evident as, in L-Eval, our accuracy on the GSM dataset increased significantly, but it decreased on the TOEFL and SFiction datasets. This outcome aligns with the distribution of our training dataset.\\n\\n[3]An C, Gong S, Zhong M, et al. L-eval: Instituting standardized evaluation for long context language models[J]. arXiv preprint arXiv:2307.11088, 2023.\\n\\n[4] Chen J, Lin H, Han X, et al. Benchmarking large language models in retrieval-augmented generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 
2024, 38(16): 17754-17762.\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear Reviewer bxFP,\\n\\nWe hope you can review our new version of the paper and the additional experimental content to see if they have addressed the weaknesses and questions you mentioned in your rebuttal.\\n\\nWe look forward to your response.\\n\\nThank you.\"}", "{\"title\": \"Thanks for clarification\", \"comment\": \"I appreciate the authors for their comprehensive explanation. I would prefer to maintain my existing positive rating.\"}", "{\"title\": \"Response to reviewer ZeTr (part 2)\", \"comment\": \"**Weakness**:\\n\\nThe extensibility of this strategy is not clear. The work has tried four tasks, namely mathematical reasoning, code generation, tool usage, and daily chat. When the task number increases, does this strategy still work? If the length of packed instructions exceeds the maximum sequence length, e.g., 4096 in llama, how can we solve this issue?\\n\\n**Answer**:\\n\\nWe did not consider this perspective because the problem we aim to address is mainly defined by the scenarios in which users employ language models. We broadly categorized tasks into chatting, math, coding, and tool usage because these are the four most frequently encountered user scenarios. Moreover, switching the chunk-based attention in our method to an isolated attention mask would essentially revert back to standard SFT. Numerous studies have already demonstrated that language models are inherently multi-task learners. I believe that increasing the number of tasks will not affect the effectiveness of our strategy, as the simultaneous learning of math and coding tasks is already sufficiently challenging. 
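To make the contrast between the two masking schemes concrete, here is a minimal sketch of one way to build them (our own illustrative code with hypothetical names, not the exact training implementation): with an isolated mask, each packed sample only attends to itself, whereas a chunk-based mask lets all samples in a packed chunk share context under the usual causal constraint.

```python
def packed_attention_mask(segment_ids, chunk_based=True):
    """Causal attention mask for a packed sequence.

    segment_ids[i] is the index of the sample that token i belongs to.
    Isolated mask: a token may only attend to earlier tokens of the same
    sample (block-diagonal). Chunk-based mask: a token may attend to all
    earlier tokens in the packed chunk, so samples share context.
    Illustrative sketch only.
    """
    n = len(segment_ids)
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):  # causal: token i may only see positions j <= i
            if chunk_based or segment_ids[i] == segment_ids[j]:
                mask[i][j] = True
    return mask

# Two samples packed into one chunk: tokens 0-2 are sample 0, tokens 3-4 are sample 1.
seg = [0, 0, 0, 1, 1]
isolated = packed_attention_mask(seg, chunk_based=False)
chunked = packed_attention_mask(seg, chunk_based=True)
# isolated[3][2] is False (no cross-sample attention); chunked[3][2] is True.
```

Under this reading, reverting chunk-based attention to the isolated mask recovers per-sample training, i.e., standard SFT with packing used only for efficiency.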
Furthermore, our method enhances the model's ability to use code to solve math problems, which indirectly demonstrates the effectiveness of our approach.\\n\\nOur current packing length indeed exceeds the sequence length of the open-source SFT models themselves, so we have forcefully extended the context length of these models during training to 8192. We have already reported this in the experimental setup.\\n\\n**Weakness**:\\n\\nThe description of the system prompt is somewhat confusing. In Section 2.4, the authors say that unique system prompts for each domain-specific task are set to ensure tailored guidance. However, in Table 1, they highlight that for all benchmark computations, they consistently use the system prompt \\\"You are a helpful assistant.\\\" How can these conflict statements be understood? Does that mean the system prompts are different in training and testing?\\n\\n**Answer**:\\nAs illustrated in our Figure 3, during training, we deliberately divided the SFT training data into tasks across four domains. Some of this data inherently included a system prompt, and for these, we appended the corresponding description at the beginning of the existing system prompt. For data without a system prompt, we added one. Ultimately, all the training data was standardized into the chatml format. \\n\\nDuring evaluation as mentioned at ln306-310, we simply added the system prompt 'You are a helpful assistant.' to all validation datasets. This approach was chosen to genuinely test the practicality of this method, as in real-world applications where language models function as a service, it is impossible during a single inference to determine the exact task category of a user's query. Therefore, we made a simplistic abstraction by assuming that users will not modify the system prompt when invoking the language model. 
This requires our model to achieve the effect of task-specific system prompts under the default system prompt condition, which is why we only added the default system prompt during evaluation.\\n\\n**Weakness**:\\nIn section 2.2, the authors mention that training on combined data can lead to model bias toward certain tasks, enhancing performance in some while degrading it in others. This is related to the studies of catastrophic forgetting [1,2], which should be discussed in the related work or compared as baselines.\\n\\n**Answer**:\\nYes, the content you mentioned is indeed something we did not reference. We have added citations for these references in the new version of the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear Reviewer DmtX,\\n\\nSince the discussion period has been extended to December 3rd, but November 27th is the last day that authors may upload a revised PDF, we hope you can review our new version of the paper and the additional experimental content to see if they have addressed the weaknesses and questions you mentioned in your rebuttal. \\n\\nWe look forward to your response. \\n\\nThank you.\"}", "{\"title\": \"Thank you for your responses!\", \"comment\": \"Thanks for your detailed responses to my concerns. I think the paper is a good exploration of the empirical SFT process for many downstream tasks, and the experiments are solid. However, the contribution is still limited compared to a regular ICLR long paper and I will keep my score.\"}", "{\"summary\": \"The paper introduces a Mixture-of-Instructions (MOI) strategy, which combines instruction packing with diverse system prompts to enhance multiple downstream tasks. 
The multi-task training technique is applied to a Qwen-7B-Chat model, which demonstrates good performance on mathematical reasoning, code generation, tool usage, and chat benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed strategy is straightforward and easy to follow yet achieves competitive performance against many open-source models. It can be effective guidance for SFT training and data utilization.\\n\\n2. The experiments and ablation studies are comprehensive, which can clearly reflect the effectiveness of the method and explain how it works.\", \"weaknesses\": \"1. The novelty is limited. The idea of Mixture of Instructions has already been explored by the works of the FLAN series. Packing multiple samples into a single one and balanced sampling are also common strategies employed by many works, as cited in the paper. The main innovation of this work is the chunk-based packing and attention mask. However, the effectiveness and generalization of this strategy are still not clear when the model size and task numbers increase in view of scaling laws.\\n\\n2. In each chunk, the instructions are packed and reordered. One concern is whether the models would be sensitive to the order of instructions. Have the authors ever explored the influence of different orders of the packed instructions in the chunk? \\n\\n3. The extensibility of this strategy is not clear. The work has tried four tasks, namely mathematical reasoning, code generation, tool usage, and daily chat. When the task number increases, does this strategy still work? If the length of packed instructions exceeds the maximum sequence length, e.g., 4096 in llama, how can we solve this issue?\\n\\n4. The description of the system prompt is somewhat confusing. In Section 2.4, the authors say that unique system prompts for each domain-specific task are set to ensure tailored guidance. 
However, in Table 1, they highlight that for all benchmark computations, they consistently use the system prompt \\\"You are a helpful assistant.\\\" How can these conflict statements be understood? Does that mean the system prompts are different in training and testing?\\n\\n5. In section 2.2, the authors mention that training on combined data can lead to model bias toward certain tasks, enhancing performance in some while degrading it in others. This is related to the studies of catastrophic forgetting [1,2], which should be discussed in the related work or compared as baselines.\\n\\n[1] An empirical study of catastrophic forgetting in large language models during continual fine-tuning.\\n\\n[2] Understanding catastrophic forgetting in language models via implicit inference.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks, I am keeping my score.\"}", "{\"title\": \"Thanks for your feedback!\", \"comment\": \"Thank you for taking the time to help improve our paper.\\n\\nYour feedback has been truly helpful to us. \\n\\nSincerely, thank you.\"}", "{\"title\": \"Rebuttal Revision Version Paper Modifications\", \"comment\": \"We thank the reviewers for their time and effort on this work. Based on their feedback, we have made the following revisions to the paper:\\n\\n1. To address reviewer DmtX's concern that our paper resembled a model system card, **we reformatted the experimental and analysis sections, expanding the overall length from 9 to 10 pages**. 
In the revised paper, each discussion point in the analysis section is accompanied by the relevant experimental results on the same page, reducing the need for frequent page-turning while reading.\\n\\n2. We have added more references on multi-task learning to the original related work section.\\n\\n3. We included a new related work section titled \\\"SFT in the alignment of LLM,\\\" providing references on catastrophic forgetting during SFT and the impact of system prompts on language models during the SFT process.\\n\\n4. We have redrawn Figure 2, adding arrows to indicate key attention responses.\\n\\n5. We included experimental results related to RAG and Qwen-72B in the appendix, addressing the concerns of reviewer bxFP.\\n\\n6. We added supplementary experiments in the appendix to address reviewer nEEX's concerns about \\\"Why is the attention of MoI better?\\\"\\n\\nWe appreciate all the valuable feedback provided by the reviewers at this stage. As the rebuttal deadline is approaching, we have responded to all the reviewers. If there are still any concerns, please let us know as soon as possible.\"}", "{\"title\": \"Response to reviewer nEEX (part 2)\", \"comment\": \"**Weakness**:\\n\\nThe contribution (or motivation) listed in line 085-087 seems contradictory to the actual implementation in MoI, which uses a unified \\\"Assistant\\\" system prompt. The authors emphasize that the system prompt is important (and conduct some experiments to prove this) but finally adopt a simple \\\"Assistant\\\" system prompt in their proposed model. What is the connection between them?\\n\\n**Answer**:\\n\\nThis is indeed a point that we did not clearly explain in the background section. The primary use case for current language models is as AI assistants, such as ChatGPT. When users interact with AI assistants, they almost never modify the foundational System Prompt. 
However, users expect the language model to perform well across various scenarios, such as code writing, solving mathematical problems, or tool utilization. Our experiments were conducted within this context. In other words, we aimed to train a model where the Attention distribution is significantly altered by using different system prompts, but we wanted these changes to be effective with the default system prompt. We did not intend for users or dialogue systems to continuously switch between different system prompts.\\n\\n**Weakness**:\\n\\nIn fact, many SFT training samples have their own specific system prompts rather than a simple sentence \\\"You are ....\\\". Moreover, the huge impact (sensitivity) of system prompt seems well-known by LLM/NLP community. From my perspective, the exploration about system prompt in this work is kind of shallow and the possible insight is limited.\\n\\n**Answer**:\\n\\nIn our approach, we actually augmented the system prompt. As listed in the appendix, both our training data and Evaluation dataset contain only a small portion of data with a system prompt. A significant amount of data consists only of a question and an answer. Therefore, we added the system prompt to such data. For some datasets, like T-EVAL, which already include a system prompt, we simply added this sentence before the existing system prompt. Thus, we have not modified the original system prompts present in the datasets themselves.\\n\\n**Question**:\\n\\nHow do you control balance when the numbers of samples of different tasks are different? And according to Table 9, the numbers of samples are indeed unbalanced. How do you order them? Conversely, if the balanced packing required the same number of samples across all tasks, that would bring out a great limitation.\\n\\n**Answer**:\\n\\nThis is a great question. In the supplemental material code we submitted, there is handling logic for this part. 
Our logic involves initially concatenating four datasets and reordering them by adding a chunk attention mask. We continue this process until one of the task datasets is exhausted. If the exhausted dataset is Chat, we then concatenate the remaining datasets and apply an isolated attention mask to them. If the exhausted dataset is not Chat, we reorder the concatenated remaining datasets and add a chunk attention mask. We repeat this process as needed.\\n\\n**Question**:\\n\\nThe system prompt part is confusing for me. Why do you always adopt the unified \\\"Assistant\\\" system prompt for all tasks in MoI model rather than different system prompts? What system prompt do you use for other baseline models in training and inference? I strongly recommand a table to present the system prompts in training and inference stages for each model.\\n\\n**Answer**:\\n\\nThe detailed training data system prompts are provided in Table 9 of the appendix, which we hope will address your concerns.\\n\\n**Question**:\\n\\nWhat is the difference between \\\"Isolated attention mask\\\" in Table 8 and \\\"Qwen-SFT-balanced\\\" in Table 1? Why are the results not the same?\\n\\n**Answer**:\\n\\nThis is a good question. Thank you for bringing up this point. The Isolated attention mask model is based on the Qwen-SFT-balanced model, with the instruction sequence reordered. Specifically, the first system prompt of the chunk was changed to 'You are a helpful assistant.' This is the only difference between the two models; all other training settings are exactly the same. The difference between these two models actually highlights the role of instruction reordering. However, we believe that instruction reordering is just a matter of implementation technique and cannot be considered a theoretical innovation. 
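For concreteness, the balanced packing loop described at the start of this reply could be sketched roughly as follows (a simplified illustration with hypothetical names; the actual handling logic, including the Chat-specific branch, is in our supplemental code):

```python
def balanced_packs(task_datasets):
    """Cycle through the task datasets, drawing one sample per task into
    each packed chunk, until some task runs out; the remaining samples are
    then packed without further rebalancing. Simplified illustration only.
    """
    queues = {name: list(samples) for name, samples in task_datasets.items()}
    packs = []
    while all(queues.values()):                    # every task still has samples
        chunk = [queues[name].pop(0) for name in queues]
        packs.append(("chunk", chunk))             # trained with a chunk attention mask
    leftovers = [s for q in queues.values() for s in q]
    if leftovers:
        packs.append(("isolated", leftovers))      # remainder, isolated attention mask
    return packs

# Toy datasets for the four task categories (hypothetical sample names).
data = {
    "chat": ["c1", "c2"],
    "math": ["m1", "m2", "m3"],
    "code": ["k1", "k2"],
    "tool": ["t1", "t2"],
}
packs = balanced_packs(data)
# First pack mixes one sample from each task: ("chunk", ["c1", "m1", "k1", "t1"]).
```

The point of the sketch is only that every chunk produced during the balanced phase contains one sample per task, which is what keeps the task mixture uniform within each packed sequence.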
If you are interested in this, you can suggest some meaningful experimental directions, and we will address your questions.\"}", "{\"title\": \"Response to reviewer ZeTr (part 1)\", \"comment\": \"Dear reviewer, thank you very much for your comments and professional advice. Based on your suggestions, we clarify some ambiguous parts of the paper and would like to provide the details as follows:\\n\\n**Weakness**:\\n\\nThe novelty is limited. The idea of Mixture of Instructions has already been explored by the works of the FLAN series. Packing multiple samples into a single one and balanced sampling are also common strategies employed by many works, as cited in the paper. The main innovation of this work is the chunk-based packing and attention mask. However, the effectiveness and generalization of this strategy are still not clear when the model size and task numbers increase in view of scaling laws.\\n\\n**Answer**:\\n\\nOne of the key innovations of our work is the investigation of the effectiveness of the packed method, which was briefly mentioned but not thoroughly explored in the original series of FLAN works. There were no comparative experiments on the use of sequence and packed methods in the original FLAN studies. In contrast, we conducted detailed experiments on various variables using the Qwen 7B model, as presented in Table 1 of our paper. Our findings demonstrate that the packed method significantly enhances model performance and accelerates the training process during the SFT stage. However, both the packed and sequence methods inevitably introduce data bias when mixing multiple task datasets, which led us to design the balanced sampling method to address this issue. Additionally, we conducted an in-depth analysis of the impact of different attention masks during the packed process and ultimately opted for chunk-based attention. 
The feasibility of chunk-based attention was confirmed by the experimental results in Table 8, which showed that it mitigates the degradation in T-Eval performance caused by attention contamination while retaining the improvements in code and math validation sets brought by the balanced sampling method.\\n\\nRegarding model size considerations, the results in Tables 5 and 6 demonstrate that our method improves models with sizes of 1.8B, 4B, 7B, and 8B. Beyond experiments on Qwen, we validated the effectiveness of our approach on language models trained with different data and settings on Llama2 and Llama3, all showing positive results.\\n\\nAs for task numbers, although we roughly divided the datasets into four tasks during training, our evaluation actually included a wide range of tasks. For instance, T-Eval contains six tasks (refer to Table 10 in the appendix), and MT-Bench includes eight tasks: writing, roleplay, reasoning, math, coding, extraction, STEM, and Humanities. Furthermore, we acknowledged the limitations of using rule-based accuracy computation, particularly in math tasks where accuracy is determined by keyword matching. Therefore, we also reported the performance on MT-Bench (in MT-Bench, a language model scores the answers instead of rule-based matching), with detailed metrics of our model provided in Figure 7 of the appendix.\\n\\nThe experimental results clearly demonstrate that our method is effective and applicable across varying model sizes and task numbers. We believe this evidence addresses the reviewer's concerns regarding scaling.\\n\\n**Weakness**:\\n\\nIn each chunk, the instructions are packed and reordered. One concern is whether the models would be sensitive to the order of instructions. Have the authors ever explored the influence of different orders of the packed instructions in the chunk?\\n\\n**Answer**:\\n\\nYes, the order of instructions can indeed affect the final alignment performance. 
In Qwen-SFT-balanced reported in Table 1, we only performed uniform sampling without altering the order of instructions. As a result, its performance is weaker compared to our experiments where instruction order was altered under equivalent settings, specifically the 'No attention mask' model in Table 8.\"}", "{\"summary\": \"The paper introduces a technique known as Mixture-of-Instructions (MoI) for improving the alignment efficiency of large language models (LLMs) across multiple tasks. The technique uses instruction packing and diverse system prompts to enhance the model's performance. The authors applied this methodology to the Qwen-7B-chat model, resulting in the development of Qwen-SFT-MoI, which showed significant improvements in tasks like coding, mathematics, and tool use. The paper contributes by identifying and addressing conflicts between new and old knowledge during SFT, introducing the MoI method for joint multi-task training, and demonstrating the effectiveness of the MoI in enhancing SFT models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1) The authors effectively identify a gap in existing alignment methodologies that typically focus on single task and propose a multi-task solution that is particularly relevant for AI-driven products.\\n2) The authors also provide a comprehensive evaluation of their method using seven benchmark datasets, demonstrating its effectiveness. The resulting model, Qwen-SFT-MoI, shows significant improvements in generative capabilities across multiple tasks. \\n3) The paper is well-written and clear in its presentation of the methodology and results. The work is significant as it offers a practical solution to a key challenge in the field of AI, namely the efficient alignment of LLMs for multi-task performance.\", \"weaknesses\": \"The primary concern revolves around the originality. 
The suggested MoI method incorporates several techniques, such as a distinct system prompt for each task, balanced sampling, and chunk-based attention. However, these techniques have already been introduced in previous research. For instance, the significance of the prompt for SFT has been highlighted by \\\"Keeping llms aligned after fine-tuning: The crucial role of prompt templates, Lyu et al., 2024\\\" and \\\"Reducing the cost: Cross-prompt pre-finetuning for short answer scoring, Funayama et al., 2023\\\". The technique of balanced sampling has been employed by \\\"Sampling bias and class imbalance in maximum-likelihood logistic regression, Oommen et al., 2011\\\", and the concept of chunk-based attention has been put forth by \\\"Shifted chunk encoder for transformer based streaming end-to-end ASR, Wang et al., 2022\\\" and \\\"Statistically defined visual chunks engage object-based attention, Lengyel et al., 2021\\\".\\n\\nConsidering these techniques are not innovative, the method seems more of an engineering blend if it fails to offer fresh perspectives or intriguing discoveries. Nevertheless, the positive outcomes and comprehensive experiments render the work robust and beneficial for the industrial sector.\", \"questions\": \"LN038: How did you come to the conclusion that the model generates less than ideal solutions due to conflicts in prompt-induced knowledge? Is there any research or experimental evidence to back this up?\", \"ln040\": \"Can you explain why modifying prompts can help in resolving knowledge conflicts?\", \"ln085\": \"What is the approach you use to identify conflicts between new and old knowledge during SFT? I couldn't find this information in section 2.\", \"ln099\": \"Are there any previous studies that highlight the significance of system prompts? If so, you should reference them and compare your method to theirs to demonstrate its superiority. 
If you're the first to do this, it's also worth mentioning.\", \"ln470\": \"The main focus of this study is multi-task learning, but I don't see a thorough discussion or comparison with existing methods.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer nEEX (2nd turn)\", \"comment\": \"**Regarding W1&W2**:\\n\\nWhy is the attention of MoI better?\\n\\n**Answer**:\\nThank you for your response!\\n\\nWe have thoroughly understood your concerns. We actually define this issue as the language model overfitting to simple system prompts during the SFT stage. This overfitting manifests as the model only capturing information following specific characters. In the Qwen-7B-Chat model, it overfits by focusing with fixed attention on the information following 'You are a helpful assistant,' which in turn reduces alignment performance. Our approach alleviates this overfitting by introducing a variety of system prompts.\\n\\nPlease allow me to use a few examples to explain the effect of our approach. Although these examples may seem somewhat cherry-picked, they are indeed phenomena we have observed, which also serve as a motivation for our paper. Please take a look at the case below:\", \"case_1\": \"**System Prompt**: You are a helpful assistant. Today is Friday, May 17th. \\n\\n**Question**: What day is it today?\\n\\n**Qwen-7B-Chat**: Today is Friday, May 17th.\\n\\n**Qwen-SFT-MoI**: Today is Friday, May 17th.\", \"case_2\": \"**System Prompt**: Today is Friday, May 17th. You are a helpful assistant.\\n\\n**Question**: What day is it today?\\n\\n**Qwen-7B-Chat**: Today is **17th February 2023.**\\n\\n**Qwen-SFT-MoI**: Today is **Friday, May 17th.**\\n\\nIn the two cases above, we set a system prompt for the models. In case 1, we see that both models can respond to questions based on the settings in the system prompt. 
However, in case 2, we first set the date to Friday, May 17th, and then told the model \\\"You are a helpful assistant.\\\" We can observe that the Qwen-7B-Chat model seems to \\\"miss\\\" the time setting, whereas the MoI model can capture this information. This suggests that the MoI model's attention mechanism is more robust to the sequence of input instructions. In contrast, the Qwen-7B-Chat model stubbornly seeks information following the \\\"You are a helpful assistant\\\" keyword. When it cannot find the expected content in this area, the model tends to create an answer rather than capturing information that came before the key setting.\\n\\n**Regarding W3**:\\n\\n Now I get to know the implementation of the authors, but I still have questions about the motivation. Why not just concatenate \\u201cYou are a helpful AI assistant\\u201d and user-defined specific system prompt into a complete system prompt in both training and inference stages?\\n\\n**Answer**:\\n\\nYou are absolutely right. This is how we initially approached it as well. You can take a look at the description in section 2.1 SYSTEM PROMPT MATTERS. At the beginning, we constructed a dataset using real Leetcode problems and solutions to train the model and added 'You are a helpful assistant' to each piece of data. However, we found that the model still did not learn how to write the Boyer-Moore Voting Algorithm. When we changed the system prompt to 'You are a programmer', the model learned to use this system prompt to answer questions during testing. In fact, we drew inspiration for this approach from the work cited as [1], and we have added this reference in the new version of the paper.\\n\\n[1]Wang, Rui, et al. 
\\\"Role Prompting Guided Domain Adaptation with General Capability Preserve for Large Language Models.\\\" arXiv preprint arXiv:2403.02756 (2024).\\n\\n**Regarding Q1:**\\n\\nI believe your added detail is quite important and should never be put into supplemental material.\\n\\n**Answer**:\\n\\nIn the new version of the paper, we have included this table in the main text.\\n\\n**Regarding statements on system prompt:** \\n\\nIt is really confusing that the authors switch between system prompts in different experiments (also mentioned by reviewer ZeTr). I don\\u2019t well understand the motivation and that decreases the readability of this paper a lot.\\n\\n**Answer**:\\n\\nWe found that modifying the system prompt is beneficial for training. Adjusting the system prompt during training means that the model's system prompt is also modified during inference. We are considering whether it's possible for the model to consolidate new knowledge learned under different system prompts into the default system prompt, \\\"You are a helpful assistant.\\\" This way, the model can answer questions it previously couldn't during inference, without requiring users to manually change the system prompt.\\n\\n**We hope our response has addressed your concerns, and we look forward to your feedback.**\"}", "{\"title\": \"Response to reviewer DmtX\", \"comment\": \"Dear reviewer, thank you very much for your comments and professional advice. Based on your suggestions, we clarify some ambiguous parts of the paper and would like to provide the details as follows:\\n\\n**Weakness**:\\n\\nHowever I feel like it may not constitute a full ICLR paper. Instead in the model system card, this could have been mentioned in a paragraph. 
For example Llama or Olmo model system cards have similar technical details like this.\\n\\n**Answer**:\\n\\nIt is true that some of our accompanying illustrations and experimental tables bear similarities to the model system cards of some excellent open-source language models on Hugging Face. This is because we both aim to provide a reliable alignment model, and in this process, the workload for validation is immense. A wide variety of datasets need to be tested to demonstrate the effectiveness of model alignment. This is actually a point we are quite pleased with in our work. We conducted extensive and detailed experiments to demonstrate that our method effectively improves performance across 10 validation sets, some of which include multiple tasks.\\n\\nIf the concern is related to the formatting of our writing, we have expanded the main body of the paper from 9 pages to 10 pages in our newly submitted version, making our experimental conclusions and tables easier to read.\\n\\nIf there are aspects of the method we proposed that are confusing, please specify your concerns in detail, and we will do our best to address your questions.\\n\\n**Question**:\\n\\nDoes the idea work for existing open SFT tuned models like Llama\\n\\n**Answer**:\\n\\nIn our paper, Table 6 already provides experimental results for Llama2-7B-chat and Llama3-8B-instruct, demonstrating that our method shows improvements on both models.\\n\\n**Question**:\\n\\nDoes the idea apply to all model sizes and task categories?\\n\\n**Answer**:\\n\\nIn Table 5 of our paper, we present the experimental results on 1.8B and 4B models, which demonstrate that our approach significantly improves performance on models of these sizes. 
Furthermore, the performance of small models trained with our Mixture of Instructions (MoI) approach is comparable to some excellent open-source small models, underscoring the effectiveness of our method.\\nRegarding task categories, we initially defined the training dataset categories as code, math, chat, and tool usage, as these represent the domains where language models have substantial real-world applications (this is a non-academic definition). In our initial experiments, we tested the SFT performance on individual tasks, and ultimately, our MoI approach proved to be highly effective in leveraging data from all four tasks to comprehensively align the language model. Additionally, the experiments in Table 4 demonstrate that after simultaneously learning data from code, math, and tool usage tasks, our model gained the ability to use code to solve math problems.\\n\\n**Given that the questions and comments from the reviewer are quite brief, we are eager to receive more detailed feedback from you. We will do our best to address any questions or concerns you may have.**\"}", "{\"title\": \"Thanks for your feedback!\", \"comment\": \"Thank you for taking the time to help improve our paper.\\n\\nYour feedback has been truly helpful to us. \\n\\nSincerely, thank you.\"}", "{\"title\": \"Response to reviewer bxFP part 2\", \"comment\": \"**Question**:\\n\\nThe experiments are primarily conducted on the Qwen-7B-chat model and smaller variants. Testing the MoI methodology on larger models with more parameters (like Qwen-72B or Qwen-MoE) would provide insights into its scalability and effectiveness in more powerful architectures. 
This omission leaves a gap in understanding how MoI can be generalized to state-of-the-art LLMs with significantly higher capacities.\\n\\n**Answer**:\\n\\nWe did indeed want to conduct experiments on the Qwen-72B and Qwen-MoE models as you suggested, but unfortunately, we do not have sufficient GPU resources to support full-parameter fine-tuning on these large models. We were only able to run LoRA version training, which introduced a new set of hyperparameters. We found that when training under LoRA, it requires extensive tuning of the rank, alpha values, and learning rate, which is unfortunately an experiment we cannot complete in a short time. However, we do have a set of LoRA experimental results based on the Qwen-72B-chat version. These results do not truly reflect the effectiveness of our approach because the improvement is minimal. We can no longer ascertain whether the improved performance is due to the effectiveness of our method or the low-rank perturbations introduced by the LoRA weights to the original weights. Nonetheless, we hope this partially addresses your concerns:\\n\\n|Model\\t| MMLU\\t| GSM8K\\t| MATH\\t| HumanEval\\t| MBPP\\t|Avg|\\n|-|-|-|-|-|-|-|\\n|Qwen1.5-72B-Chat\\t|77.5\\t| 82.7\\t|42.5\\t|71.3\\t|71.9\\t|69.18|\\n|Qwen1.5-72B-Chat-MoI-LoRA\\t|77.9\\t| 83.0\\t|43.1\\t|72.8\\t|72.2\\t|69.80|\\n\\n**Overall, we sincerely thank you for taking the time to review our paper. Your feedback has been incredibly valuable and genuinely contributes to advancing our work in a more robust and reasonable direction. Regardless of whether the paper is accepted or not, we are grateful for your invaluable insights and are very much looking forward to receiving your feedback.**\"}", "{\"summary\": \"This paper proposes a method called Mixture-of-Instructions (MoI), which employs a strategy of instruction packing combined with diverse system prompts to boost the alignment efficiency of language models. 
The methods mainly include balanced instruction packing and chunk-based attention. The experimental results show that the methods can enhance cross-domain task performance through multi-task learning.\nIn addition, the authors also conduct some experiments on the impact of system prompts for different tasks and find that the system prompt matters.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The research scope of this work, how to perform multi-task learning in SFT, is indeed key to LLMs.\\n\\n2. The proposed method is simple and easy to implement. Researchers would love to replicate this work if they believe the reported results of this work.\\n\\n3. This paper employs some benchmarks and visualization analysis to demonstrate its effectiveness, which enhances the readers' comprehensive understanding of this work.\", \"weaknesses\": \"1. The chunk-based attention, packing the data and unmasking attention across different tasks, is quite unreasonable from intuition. Isolated attention is for removing unrelated contexts and full attention is for efficiency. So what is the motivation of the chunk-based attention? Moreover, some related works (Zhao et al, 2024) that this paper has even cited have pointed out that similar samples should be packed together to enhance the final performance. However, this work chooses the opposite way, unmasking the attention across different tasks in a chunk. What is the motivation and why is it better? This concern could become more important to me when I see the coding ability gets greatly enhanced by MoI compared to balanced packing, which is kind of magic to me.\\n\\n* PS: I don't think the paragraph \\\"Why is MoI effective?\\\" can convince me. The paragraph only points out that the attention of MoI in SFT is significantly different from that in Chat, which is quite natural and normal (in my opinion) if we unmask the attention between different instructions. 
I expect more essential reasons, that is, why is the altered attention distribution more useful rather than less useful?\\n\\n2. The contribution (or motivation) listed in line 085-087 seems contradictory to the actual implementation in MoI, which uses a unified \\\"Assistant\\\" system prompt. The authors emphasize that the system prompt is important (and conduct some experiments to prove this) but finally adopt a simple \\\"Assistant\\\" system prompt in their proposed model. What is the connection between them?\\n\\n3. In fact, many SFT training samples have their own specific system prompts rather than a simple sentence \\\"You are ....\\\". Moreover, the huge impact (sensitivity) of the system prompt seems well-known in the LLM/NLP community. From my perspective, the exploration of system prompts in this work is kind of shallow and the possible insight is limited.\", \"questions\": \"1. How do you control balance when the numbers of samples of different tasks are different? And according to Table 9, the numbers of samples are indeed unbalanced. How do you order them? Conversely, if the balanced packing required the same number of samples across all tasks, that would be a great limitation.\\n\\n2. The system prompt part is confusing for me. Why do you always adopt the unified \\\"Assistant\\\" system prompt for all tasks in the MoI model rather than different system prompts? What system prompt do you use for other baseline models in training and inference? I strongly recommend a table to present the system prompts in training and inference stages for each model.\\n\\n3. What is the difference between \\\"Isolated attention mask\\\" in Table 8 and \\\"Qwen-SFT-balanced\\\" in Table 1? Why are the results not the same?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Polite Reminder\", \"comment\": \"Dear Reviewer ZeTr, I apologize for the interruption. 
Since another reviewer, nEEX, mentioned your concerns in his response, we kindly ask for your feedback on whether our rebuttal has addressed your concerns. Otherwise, this will hinder the communication between us and the other reviewer, which will seriously affect the review process of the paper.\\n\\nWe look forward to your response.\"}", "{\"title\": \"Thanks for your feedback.\", \"comment\": \"Thank you for taking the time to help improve our paper.\\nThis is good news for us!\"}", "{\"title\": \"Thanks for your response!\", \"comment\": \"Some of my concerns are indeed addressed. Here is my follow-up feedback:\\n\\n**Regarding W1:**\\n> Our insight is that, since attention is not being fully utilized and is heavily focused on the first token, concatenating multiple pieces of data together and placing the default system prompt we intend to use at the start of this concatenated data aligns well with the sparse nature of attention.\\n\\nI understand and agree that (1) attention is not being fully utilized and is heavily focused on the first token; (2) the sparse nature of attention. But how does \\\"concatenating multiple pieces of data together and placing the default system prompt at the start\\\" correspond to the sparse nature of attention? Could you explain more?\\n\\n**Regarding W2 (\\\"Why is MoI effective?\\\" part):** Yes, as I commented before, I fully understand that \\u201cthis indicates that the model has indeed learned a form of Attention expression **different** from that of the original model\\u201d. But difference does not always bring improvement. Why is the attention of MoI **better**? The answer to this question is highly connected to my previous concern in W1.\\n\\n\\n**Regarding W3:** Now I understand the implementation of the authors, but I still have questions about the motivation. 
Why not just concatenate \\u201cYou are a helpful AI assistant\\u201d and the user-defined specific system prompt into a complete system prompt in both training and inference stages?\\n\\n\\n**Regarding Q1:** I believe your added detail is quite important and should never be put into supplemental material. Moreover, the logic might improve the results but should definitely not be the basic (default) behavior. Obviously the authors noticed (and maybe conducted an experiment) that the Chat task is completely different from other tasks, so the logic is designed like that. The logic further raises the concern regarding the scalability (e.g., reordering, task number, sample number, etc.) of this method, mentioned by both the reviewer ZeTr and me.\\n\\n\\n**Regarding statements on system prompt:** It is really confusing that the authors switch between system prompts in different experiments (also mentioned by reviewer ZeTr). I don\\u2019t fully understand the motivation, and that decreases the readability of this paper a lot.\"}", "{\"title\": \"Some Future Suggestions\", \"comment\": \"Thanks a lot for the detailed response and clarifications of the authors. On the whole this paper has its technical merits, and it gives some insights for this active field.\\n\\n**As suggestions**, I personally suggest that the authors need to resolve the following things (in the future version):\\n\\n1. \\nMake the motivation of this paper clear. For example, the statement in your rebuttal \\n> This way, the model can answer questions it previously couldn't during inference, without requiring users to manually change the system prompt.\\n\\nis much clearer and stronger (in my opinion) than the statement in your paper\\n\\n> We identified and addressed conflicts between new and old knowledge during SFT, by introducing\\nmodified system prompts that aid in the integration of new knowledge.\\n\\n\\n\\n2. 
\\nFurther clarify the connection between your proposed MoI and system prompts.\\n\\nYour paper contains two main contributions, including the method MoI and some tricks about system prompts. However, the connection between them is confusing. I understand that MoI has some special settings for the system prompt, but this work makes the reader feel that you just put two separate things into one paper.\\n\\n\\n3. \\nTake a simple survey of your applicable scenarios, and make the motivations solid.\\n\\nAs I commented before, many SFT training samples have their own specific system prompts rather than a simple sentence \\\"You are ....\\\". Moreover, the huge impact (sensitivity) of the system prompt seems well-known in the LLM/NLP community. Before you really dive into this work, I think you should figure out the following:\\n- What is the proportion of the SFT training samples (in the real world) that have their own specific system prompts rather than a simple sentence \\\"You are ....\\\"? If the proportion is really high, it would largely undermine the motivation of this paper.\\n- How (and at what proportion) do real users use system prompts? I agree with your point that some users do not like to change the system prompt in the inference stage and they might tend to put their instructions into `user_prompt` following `system_prompt`, but there would also be many people who tend to set `system_prompt`.\\n- How do developers and users handle the failed cases for LLMs? As far as I know, they would re-write the instructions as well as the system prompts because the huge impact (sensitivity) of the system prompt seems well-known in the LLM/NLP community.\\n- What is the real scenario of multi-task learning in SFT? For example, as far as I know, (1) there are thousands of specific tasks and each task has a *very* different number of samples; (2) \\\"Chat\\\" is more like a general concept, and it covers many tasks including (*maybe*) math, coding, etc. 
People do not often identify the \\\"Chat\\\" type in their SFT dataset.\\nThe above setting is very different from that in your paper.\\n\\nI will keep my initial rating. Thanks again for the response of the authors.\"}", "{\"title\": \"Please check our response\", \"comment\": \"Dear Reviewer nEEX,\\n\\nSince the discussion period has been extended to December 3rd, but November 27th is the last day that authors may upload a revised PDF, we hope you can review our new version of the paper and the additional experimental content to see if they have addressed the weaknesses and questions you mentioned in your rebuttal.\\n\\nWe look forward to your response.\\n\\nThank you.\"}", "{\"title\": \"Thank you for your response!\", \"comment\": \"Dear Reviewer nEEX,\\n\\nThank you for your valuable suggestions. My collaborators and I gained many ideas and insights for improving our work after reading your response. We sincerely thank you for your feedback and suggestions regarding our work.\\n\\nSincerely, thank you!\"}" ] }
9OxTqscUwi
AttnInput: Revolutionizing Pinyin Input with Context-Aware RWKV Language Models
[ "Zhiyu Gui" ]
The Pinyin Input Method Engine (IME) is widely used for inputting Chinese characters, but effectively integrating it with powerful large language models (LLMs) remains a challenge due to issues such as semantic discontinuity and inefficient training. This paper presents AttnInput, a novel approach that leverages the strengths of the RWKV language model, specifically its linear computational complexity and "infinite" context length, to enhance Pinyin IME. Our method integrates Pinyin information directly into the internal state of RWKV through a lightweight side network, effectively addressing the semantic discontinuity issue faced by previous LLM-based IMEs. Furthermore, AttnInput utilizes a pre-training strategy, significantly reducing training data and computational costs compared to previous methods. Experimental results demonstrate that AttnInput achieves state-of-the-art performance on abbreviated Pinyin input, especially as the Pinyin sequence length increases. This efficient design allows us to scale up to larger models and incorporate longer contexts, further improving accuracy and user experience.
[ "Pinyin Input Method", "IME", "LLM", "RWKV", "Ladder Side-Tuning" ]
Reject
https://openreview.net/pdf?id=9OxTqscUwi
https://openreview.net/forum?id=9OxTqscUwi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yXZ7pU3SJc", "yGJHDxISYH", "x44IkH16Ct", "st0LruAUf7", "oMd9vpIGoq", "m4Dcrqm9C7", "hvjjVTFT5c", "gYNqjaV7BR", "TRpd9dUczs", "RcQEfZKaSY", "Qtd7t4cEOn", "OGnnP5wbQF", "AVWWnsp3kT", "7vVKGwEXvR", "7u6hjZrGZm" ], "note_type": [ "official_review", "official_comment", "official_comment", "decision", "comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730072195677, 1732325866037, 1732198357447, 1737523806933, 1739419876454, 1732203950343, 1734599193391, 1732380468317, 1732440800706, 1730052717867, 1729065606025, 1730655758430, 1732155822838, 1732182356435, 1732722002756 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6969/Reviewer_aWjT" ], [ "ICLR.cc/2025/Conference/Submission6969/Reviewer_aWjT" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ], [ "ICLR.cc/2025/Conference/Submission6969/Area_Chair_wz8n" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ], [ "ICLR.cc/2025/Conference/Submission6969/Reviewer_gCM1" ], [ "ICLR.cc/2025/Conference/Submission6969/Reviewer_ZXk9" ], [ "ICLR.cc/2025/Conference/Submission6969/Reviewer_gCM1" ], [ "ICLR.cc/2025/Conference/Submission6969/Reviewer_qBNK" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ], [ "ICLR.cc/2025/Conference/Submission6969/Authors" ] ], "structured_content_str": [ "{\"summary\": \"Authors present a novel form of pinyin to Hanzi character conversion model using a RWKV model as the backbone.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Language technology support for pinyin is a critical field for investigation but remains niche in NLP community. 
Conventional wisdom would be to conduct SFT with an LLM, whereas the authors have shown that superior results can be achieved with specialized architectures. This is a suitably novel approach. Further, the use of RWKV instead of a conventional LLM is a necessary consideration of alternative technologies that goes understudied in the current environment.\", \"weaknesses\": \"Several motivations of the paper are not substantiated and would benefit from citations of other works to defend them. For instance, the authors claim:\\n\\n\\\"However, inserting pinyin sequences disrupts the semantic flow between the prompt and target text,\\nposes challenges for effectively leveraging pretrained large language models, as their training objective primarily focuses on predicting the next token.\\\"\\n\\nBut this lacks substantiation. Especially with current developments in long-context language modeling, there's no reason to suspect an attention-based framework could not maintain different semantic contexts despite the break in flow. \\n\\nWhile investigation of other LLM frameworks is sorely needed in the research community, transformer decoders are effectively the default when it comes to LLMs. The choice of RWKV in their stead would benefit from clear motivation and comparison against these methods.\\n\\nUse of Top-N for accuracy is an infrequent metric. Taken in combination with the lack of a numbers table and the heavy overlap between models in the Top-1 counts, the performance of the model appears questionable. This issue can be alleviated with a numeric table.\", \"questions\": \"Could you provide an appendix section on Pinyin? 
For non-native writers, it's difficult to follow the convention and how it overlaps with the Hanzi system.\\n\\nWhat was the motivation for using RWKV as opposed to transformer-based methods and current SOTA LLM systems?\\n\\nCould you substantiate the concern for semantic discontinuity in the concatenation approach?\\n\\nPlease provide clear accuracy numbers in support, or in lieu, of the graphs. It is difficult to evaluate if results are significant without them.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q1: This is helpful. But could you provide a bit more detail? At minimum, it should help the reader understand why, in Figure 1, JDGB can map to a single character. Just an example of homophony should suffice.\", \"q2\": \"This is better. Please also add to the introduction at least a passing mention of the limitation of vanilla decoder models for thoroughness.\", \"q3\": \"Ah, if you're relying on your empirical results, please clarify that in the introduction where you make this claim. I was confused, as if you were responding to previous limitations discovered for the task, not the subsequent results from your own work.\", \"q4\": \"Thank you, that's properly thorough.\"}", "{\"comment\": \"We thank you for the feedback and address all remaining concerns below. For further details, please refer to the newly uploaded file, where the modifications are highlighted in blue font. 
Thank you again for your valuable comments.\\n\\n## Q1\\n\\n> The experimental evaluation is primarily limited to synthetic data generated from SkyPile-150B, which may not fully reflect performance on real-world user-generated text, where varied and noisy input could pose additional challenges.\\n\\n> Can the authors provide more detailed evaluations on real-world datasets, including user-generated Pinyin input, to validate the model's robustness to noisy and diverse input?\\n\\nThis paper does not take noise in the input into consideration. We assume that the user's Pinyin input is always correct, which is also the current practice of most input methods.\\n\\n## Q2\\n\\n> The ablation study only tests the model without Pinyin sequences but does not explore other architectural variations, such as different side network configurations, limiting the insights into each component's importance.\\n\\n> How does the performance of AttnInput compare when trained with fewer training steps or using smaller datasets? Does the model generalize well with reduced training resources?\\n\\nWe compared models trained for 30k steps and 40k steps, and the test results were very similar, indicating that the model generalizes well with reduced training resources.\\n\\nWe did not mention this in the paper because we considered it unimportant. AttnInput requires minimal training resources.\\n\\n## Q3\\n\\n> Could the authors expand on the computational cost of integrating ladder side-tuning compared to other parameter-efficient fine-tuning techniques? How does this approach balance trade-offs between performance and efficiency?\\n\\nThe computational cost of ladder side-tuning is shown in Appendix A. In actuality, the principal merit of ladder side-tuning is its capacity to diminish the memory consumed by activations, consequently facilitating the use of larger batch sizes. 
Nevertheless, the size of activations depends on multiple factors, including the model's structure and the strategy for recomputation, thus rendering it difficult to analyze with a definitive formula. This is not the focus of this paper, so we did not elaborate on the analysis. In our experiments, ladder side-tuning trained faster than LoRA. The balance between performance and efficiency is achieved by adjusting the number of trainable parameters.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"We thank you for the feedback and address all remaining concerns below. For further details, please refer to the newly uploaded file, where the modifications are highlighted in blue font. Thank you again for your valuable comments.\\n\\n## Q1\\n\\n> While Figure 3 helps in comparing the performance trend, can the accuracy numbers be presented in a Table form as well? It would assist in quantitatively understanding the differences in performance.\\n\\nThank you for your suggestion! The detailed numeric table is in Appendix C now.\\n\\n## W1 & Q2\\n\\n> The vanilla RWKV seems to perform better than the proposed AttnInput for the P@1 setting (up to 9 pinyin length) and the P@5 setting (up to 6 pinyin length).\\n\\n> What are the potential reasons for the better performance of vanilla RWKV over the proposed approach in the P@1 setting (up to 9 pinyin length) and P@5 setting (up to 6 pinyin length)? How will this affect the practical usage of the proposed model for pinyin input?\\n\\nWe noticed that AttnInput performs slightly worse than vanilla RWKV6 in Top-1 accuracy. This phenomenon is also observed in previous works [1]. Our hypothesis is that the training procedure led to a slight degradation in the original model\\u2019s performance. 
We analyzed instances where the vanilla RWKV6 model provided the correct answer, while AttnInput failed to prioritize the target. Our investigation revealed that in these specific instances, the abbreviated pinyin corresponded to numerous contextually appropriate Chinese character sequences, causing AttnInput to encounter difficulties in accurately ranking them based on probability. This observation supports our initial hypothesis.\\n\\n## W4\\n\\n> Were multiple runs done (with different seeds) for the main results presented in Figure 3? If so, the error bars should be included as well.\\n\\nNo, random seeds do not affect the results because we employ deterministic decoding. (see 3.6 PINYIN-CONSTRAINED TRAINING AND INFERENCE for detailed information)\\n\\n## W5\\n\\nThank you! All typos have been fixed.\\n\\nReferences\\n\\n[1] [https://aclanthology.org/2022.acl-long.133.pdf](https://aclanthology.org/2022.acl-long.133.pdf)\"}", "{\"metareview\": \"The paper proposes AttnInput to improve the Pinyin Input Method Engine (IME) by utilizing the RWKV language model. This method addresses key challenges in integrating Pinyin with large language models, such as semantic discontinuity and inefficient training processes.\\n\\nHowever, there are noteworthy concerns: 1) the topic of 'Pinyin Input Method Engine' is relatively limited in scope, as evidenced by the scarce literature in Section 5 Relation Work. 2) Several motivations for the 'pinyin input' task have yet to be substantiated (as pointed out by Reviewer aWjT). 3) the motivation for choosing RWKV.\", \"additional_comments_on_reviewer_discussion\": \"1) Conduct more detailed evaluations using real-world datasets. (Reviewer qBNK)\\n2) Provide direct comparisons with other state-of-the-art methods, such as LSTM-based or Transformer-based IMEs (Reviewer ZXk9, aWjT, qBNK).\\n3) Several motivations presented in the paper are not substantiated. (Reviewer aWjT)\\n4) The model's performance appears questionable. 
(Reviewer aWjT)\\n5) The presentation is unclear and lacks structure. (Reviewer gCM1)\\n\\nThe authors did not directly address all the questions, and the responses can not dispel my doubts on the above questions.\"}", "{\"comment\": \"Thank you for your detailed response! We have addressed these issues in the latest uploaded PDF. All modifications are highlighted in orange.\"}", "{\"title\": \"Raising my score.\", \"comment\": \"Thank you for your response! The additional experiments have addressed my questions. I will increase my rating to 5.\"}", "{\"summary\": \"This paper presents a modification of RWKV architecture to support pinyin input, by integrating the pinyin information in the internal state of RWKV through a lightweight network. The proposed method has minimal computational overhead, and shows better performance than other baselines especially for longer pinyin lengths.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important problem of adding support for pinyin method input in language models.\", \"The proposed method seems to perform better than other baselines, particularly on longer context lengths with P@10 and P@15 eval.\", \"The writing is clear overall. A few improvements (highlighted below) would further improve the clarity of the paper.\"], \"weaknesses\": [\"The vanilla RWKV seems to perform better than the proposed AttnInput for the P@1 setting (up to 9 pinyin length) and the P@5 setting (up to 6 pinyin length).\", \"AttnInput is evaluated on one dataset. It should be evaluated on additional pinyin input datasets to establish the generalizability of the proposed approach.\", \"A comparison of the performance of AttnInput with the regular GPT baseline would be useful (similar to how it is done in the PinyinGPT paper [1]).\", \"Were multiple runs done (with different seeds) for the main results presented in Figure 3? 
If so, the error bars should be included as well.\", \"Minor issues:\", \"Line 49: achieving -> achieve\", \"Line 95 - 97: v, k and r should be defined.\", \"Line 276, 286, 296: Traget -> Target\", \"Line 53: , poses challenges -> and poses challenges\", \"Line 80-83: This paragraph should be moved to a more relevant position.\"], \"references\": \"[1] https://aclanthology.org/2022.acl-long.133.pdf\", \"questions\": [\"While Figure 3 helps in comparing the performance trend, can the accuracy numbers be presented in a Table form as well? It would assist in quantitatively understanding the differences in performance.\", \"What are the potential reasons for the better performance of vanilla RWKV over the proposed approach in the P@1 setting (up to 9 pinyin length) and P@5 setting (up to 6 pinyin length)? How will this affect the practical usage of the proposed model for pinyin input?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces AttnInput, a novel approach integrating Pinyin information with RWKV-6. The method combines contextual and Pinyin information with a side network, enabling more efficient training than traditional techniques. Experimental results demonstrate that AttnInput achieves state-of-the-art performance on abbreviated Pinyin input.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a method that employs RWKV-6 for Pinyin IMEs. RWKV, an RNN-based model, is particularly well-suited for this task.\\n2. The design of the side model efficiently integrates Pinyin information into the backbone model, and ladder side-tuning enables efficient training.\\n3. The proposed method achieves state-of-the-art performance in this task.\", \"weaknesses\": \"1. The authors demonstrate that their method is both efficient and effective for training. 
However, they do not provide experiments to substantiate its superiority over other Pinyin integration methods.\\n2. The comparison of your model with other approaches is not fair due to the inherent strengths of RWKV-6. A more appropriate comparison would involve other methods that integrate contextual and Pinyin information under the same conditions, such as comparable models.\\n3. The paper's presentation is unclear and lacks structure. The authors do not clearly explain the design of AttnInput, particularly the side network, nor do they describe how vanilla RWKV-6 is employed. Furthermore, the Introduction section is too brief and fails to provide an overview of the method.\\n4. The tables could be improved aesthetically: Table 1 exceeds the linewidth, and Table 2 would benefit from being presented in a three-line table format.\", \"questions\": \"1. As discussed in Weaknesses 1 and 2, what are the experimental results of other integration methods with RWKV-6?\\n2. The P@1 performance of the method is not better than the vanilla RWKV-6. Does this imply that incorporating Pinyin information merely imposes a constraint on the decoding process?\\n3. The authors claim that RWKV-6 was chosen for its infinite context window. However, RNN-based models also suffer performance degradation beyond their context window. Could the authors provide details on the length extrapolation capacity (i.e., performance beyond the context window) of their methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a novel approach to improving the Pinyin Input Method Engine (IME) by leveraging the RWKV language model. AttnInput addresses the integration of Pinyin with large language models, aiming to overcome challenges like semantic discontinuity and inefficient training. 
The model uses a lightweight side network to enhance the RWKV model's state representations with Pinyin information, improving efficiency in both training and inference. AttnInput claims state-of-the-art performance in abbreviated Pinyin input by utilizing RWKV's linear computational complexity and infinite context length. The proposed method also reduces computational requirements compared to prior approaches like PinyinGPT-Concat. The experimental results showcase significant performance improvements, particularly with longer Pinyin sequences, while maintaining practical latency for real-world applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper proposes an innovative integration of a lightweight side network with the RWKV model to enhance Pinyin IMEs, utilizing RWKV's infinite context length and efficient linear computational complexity, which is novel in the context of Pinyin input. The model achieves state-of-the-art results in experiments, especially with long Pinyin sequences, demonstrating the value of integrating the Pinyin information directly into the language model state. The structure of the paper is well-organized, with detailed explanations of the model components such as RWKV6, AttnInput, ladder side-tuning, and efficient training mechanisms. The experiments are thoroughly explained, with useful visual aids like figures and tables to support the analysis. AttnInput presents a meaningful advancement in the use of large language models for Pinyin IMEs, with potential real-world applications. 
The reduction in computational resources and training data compared to previous methods, without compromising performance, is a substantial contribution to the field.\", \"weaknesses\": \"The experimental evaluation is primarily limited to synthetic data generated from SkyPile-150B, which may not fully reflect performance on real-world user-generated text, where varied and noisy input could pose additional challenges. The ablation study only tests the model without Pinyin sequences but does not explore other architectural variations, such as different side network configurations, limiting the insights into each component's importance. Additionally, the paper primarily compares AttnInput with the vanilla RWKV6 and PinyinGPT-Concat, lacking direct comparisons with other state-of-the-art methods like LSTM-based or Transformer-based IMEs, which would provide a broader context for the contribution.\", \"questions\": \"Can the authors provide more detailed evaluations on real-world datasets, including user-generated Pinyin input, to validate the model's robustness to noisy and diverse input?\\n\\nHow does the performance of AttnInput compare when trained with fewer training steps or using smaller datasets? Does the model generalize well with reduced training resources?\\n\\nCould the authors expand on the computational cost of integrating ladder side-tuning compared to other parameter-efficient fine-tuning techniques? How does this approach balance trade-offs between performance and efficiency?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank you for the feedback and address all remaining concerns below. For further details, please refer to the newly uploaded file, where the modifications are highlighted in blue font. Thank you again for your valuable comments.\\n\\n## W1 & W2 & Q1\\n\\nThis was indeed an oversight on our part. 
We appreciate you bringing it to our attention.\\n\\nTo ensure a fair comparison with previous concat-based methods, we trained a concat-based model with RWKV6-1.6B, labeled as RWKV6-concat-lora. This model was fine-tuned with LoRA and includes 500M trainable parameters. The training data is the same as the AttnInput model. We tested this model, and its performance was disappointing.\\n\\nThe observed inferior performance of RWKV6-concat-lora relative to vanilla RWKV6 provides compelling evidence in support of our proposition that the concat-based method disrupts semantic consistency and leads to inefficient training.\\n\\n## Q2\\n\\nFirst, it is impossible that AttnInput only introduces constraints at the decoding stage, as constraints are inherently and explicitly applied during decoding (see 3.6 PINYIN-CONSTRAINED TRAINING AND INFERENCE for detailed information). If AttnInput's function within decoding is *solely* to impose these constraints, it would have no impact on the output.\\n\\nSecond, we noticed that AttnInput performs slightly worse than vanilla RWKV6 in Top-1 accuracy. This phenomenon is also observed in previous works [1]. Our hypothesis is that the training procedure led to a slight degradation in the original model\\u2019s performance. We analyzed instances where the vanilla RWKV6 model provided the correct answer, while AttnInput failed to prioritize the target. Our investigation revealed that in these specific instances, the abbreviated pinyin corresponded to numerous contextually appropriate Chinese character sequences, causing AttnInput to encounter difficulties in accurately ranking them based on probability. This observation supports our initial hypothesis.\\n\\n## Q3\\n\\nFirst, we admit that this was our oversight. The authors of RWKV6 claim that RWKV6 has \\u201cinfinite\\u201d context length on https://rwkv.com/ due to the observed continuous decrease in loss as the context length extends beyond the context length used during training. 
However, this does not necessarily imply that RWKV6 outperforms Transformer-based models in long-text understanding or retrieval tasks.\\n\\nSecond, we conducted experiments on text exceeding the context length used during training, and the results demonstrate that AttnInput possesses strong length extrapolation capacity. (see 4.2 RESULTS)\\n\\n## W3\\n\\nWe apologize for any confusion caused by our unclear writing. We have substantially rewritten the entire paper.\\n\\n## W4\\n\\nThank you! We have improved the layout of both tables according to your suggestions.\\n\\nReferences\\n\\n[1] [https://aclanthology.org/2022.acl-long.133.pdf](https://aclanthology.org/2022.acl-long.133.pdf)\"}", "{\"comment\": \"We thank you for the feedback and address all remaining concerns below. For further details, please refer to the newly uploaded file, where the modifications are highlighted in blue font. Thank you again for your valuable comments.\\n## Q1\\n\\n> Could you provide an appendix section on Pinyin? For non-native writers, it's difficult to follow the convention and how it overlaps with Hanzi system.\\n\\nThank you for your suggestion! We have already provided the appendix section.\\n\\n## Q2\\n\\n> What was the motivation for using RWKV as opposed to transformer based methods and current SOTA LLM systems?\\n\\n1. RWKV uses a unique tokenizer that encodes each Chinese character as a single token, whereas other pre-trained large language models use tokenizers that encode multiple Chinese characters into one token. This has brought us great convenience in training.\\n2. The authors of RWKV6 claim that RWKV6 has \\u201cinfinite\\u201d context length on https://rwkv.com/ due to the observed continuous decrease in loss as the context length extends beyond the context length used during training. However, this does not necessarily imply that RWKV6 outperforms Transformer-based models in long-text understanding or retrieval tasks.\\n3. 
RWKV is more efficient during inference compared to transformer-based methods.\\n\\n## Q3\\n\\n> Could you substantiate the concern for semantic discontinuity in the concatenation approach?\\n\\nSure, we trained a concat-based model with RWKV6-1.6B, labeled as RWKV6-concat-lora. This model was fine-tuned with LoRA and includes 500M trainable parameters. The training data is the same as the AttnInput model. We tested this model, and its performance was disappointing. (see Figure 3)\\n\\nThe observed inferior performance of RWKV6-concat-lora relative to vanilla RWKV6 provides compelling evidence in support of our proposition that the concat-based method disrupts semantic consistency and leads to inefficient training.\\n\\n## Q4\\n\\n> Please provide clear accuracy numbers in support, or in lieu, of the graphs. It is difficult to evaluate if results are significant without.\\n\\nThank you for your suggestion! The detailed numeric table is in Appendix C now.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for taking the time to review our work and provide your valuable feedback. We welcome any additional comments, questions, or concerns you may have, as it would give us an opportunity to address them further.\"}" ] }
9Orm76dUuT
Test-Time Backdoor Attacks on Multimodal Large Language Models
[ "Dong Lu", "Tianyu Pang", "Chao Du", "Qian Liu", "Xianjun Yang", "Min Lin" ]
Backdoor attacks typically set up a backdoor by contaminating training data or modifying parameters before the model is deployed, such that a predetermined trigger can activate harmful effects during the test phase. Can we, however, carry out test-time backdoor attacks *after* deploying the model? In this work, we present **AnyDoor**, a test-time backdoor attack against multimodal large language models (MLLMs), without accessing training data or modifying parameters. In AnyDoor, the burden of *setting up* backdoors is assigned to the visual modality (better capacity but worse timeliness), while the textual modality is responsible for *activating* the backdoors (better timeliness but worse capacity). This decomposition takes advantage of the characteristics of different modalities, making attacking timing more controllable compared to directly applying adversarial attacks. We empirically validate the effectiveness of AnyDoor against popular MLLMs such as LLaVA-1.5, MiniGPT-4, InstructBLIP, and BLIP-2, and conduct extensive ablation studies. Notably, AnyDoor can dynamically change its backdoor trigger prompts and/or harmful effects, posing a new challenge for developing backdoor defenses.
[ "Multimodal Large Language Models", "Test-Time Backdoor Attacks" ]
https://openreview.net/pdf?id=9Orm76dUuT
https://openreview.net/forum?id=9Orm76dUuT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qH1eYHkEJp", "WjZPIyk8N7", "WEKvyWOgv5", "QlryeuG2Fw", "6HkEVt4m8W", "52cTGLzaVC" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_comment", "official_review" ], "note_created": [ 1730176150459, 1730448494993, 1731654874036, 1729488453693, 1731654855438, 1730108183869 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4806/Reviewer_cLVr" ], [ "ICLR.cc/2025/Conference/Submission4806/Reviewer_DR4y" ], [ "ICLR.cc/2025/Conference/Submission4806/Authors" ], [ "ICLR.cc/2025/Conference/Submission4806/Reviewer_fkWW" ], [ "ICLR.cc/2025/Conference/Submission4806/Authors" ], [ "ICLR.cc/2025/Conference/Submission4806/Reviewer_RtCE" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a test-time backdoor attack method called AnyDoor, which employs adversarial noise on images alongside static triggers at the text level. Specifically, AnyDoor optimizes adversarial noise for the visual module and uses predefined textual triggers as supervisory signals. During inference, these text backdoor triggers activate the backdoor behavior. The effectiveness of the proposed method is evaluated across multiple model architectures.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The research motivation is clearly articulated, and the writing is coherent.\", \"The exploration of backdoor attacks on multimodal large language models represents a novel research area.\"], \"weaknesses\": \"- **Unclear Definition of Backdoor Attack:** The definition of the attack is ambiguous. The proposed method aligns more with adversarial attacks than with traditional backdoor attacks. Typically, a backdoor attack involves two key components: (1) Backdoor injection (through poisoning or weight/activation manipulation) and (2) Backdoor activation using a predefined trigger to control the model\\u2019s output target. 
The injection phase establishes a mapping between the trigger and a specific target label or response, allowing the attacker to control the model's output during inference. Based on the fundamental concept of backdoor attacks, I believe the proposed AnyDoor attack is more accurately classified as an adversarial attack rather than a backdoor attack. Therefore, the authors should clarify the differences between adversarial and backdoor attacks and explain the rationale for merging these concepts.\\n\\n- **Limited Technical Innovation:** The core technique of the proposed method combines standard adversarial perturbations (for images) with specific string triggers (for text) to execute the attack. However, universal adversarial perturbations (UAPs) and token-level triggers (e.g., using static words like \\u201csudo\\u201d) have already been extensively studied in existing literature [1][2][3]. As a result, the proposed attack does not introduce new insights or techniques for the backdoor research community. One interesting idea would be to explore jointly optimizing image perturbations and text triggers, which may lead to a more effective attack strategy.\\n\\n- **Lack of a Defined Threat Model:** The paper does not clearly delineate the attack scenario or the attacker's capabilities. Without a defined threat model, readers may struggle to assess the relevance of the proposed attack in real-world situations and whether the attacker can successfully execute it under practical constraints.\\n\\n- **Concerns about Attack\\u2019s Robustness:** The attack may be easily mitigated through preprocessing techniques at the image or text level. For example, defenders could use diffusion-based models to purify adversarial perturbations in images [4] or filter out special characters in text [5], effectively neutralizing the attack. 
The authors need to clarify the practical implications of their method and how it would withstand these potential defenses.\\n\\nIn summary, the proposed AnyDoor attack is more accurately classified as an adversarial attack rather than a backdoor attack, as it primarily relies on optimizing adversarial noise in images. In terms of methodology, the technical novelty is limited, and the attack could be easily countered by purification techniques such as diffusion models at the image level. Most critically, achieving target-specific attacks during test-time may not be transferable or universal, as adversarial noise needs to be optimized for specific targets, thereby limiting the method\\u2019s applicability in real-world scenarios.\\n\\n[1] Adversarial Illusions in Multi-Modal Embeddings, USENIX Security, 2024 \\n[2] Towards adversarial attack on vision-language pre-training models, MM, 2022 \\n[3] Badnl: Backdoor attacks against nlp models with semantic-preserving improvements, ACSAC, 2021 \\n[4] Diffusion models for adversarial purification, ICML, 2022 \\n[5] STRIP: a defence against trojan attacks on deep neural networks, ACSAC, 2019\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores the possibility of implementing a backdoor attack during the testing phase. The paper proposes a type of backdoor attack called ``AnyDoor``, which does not require access to training data or modification of model parameters. 
In terms of experiments, the article focuses on multimodal language models and conducts extensive experiments on multiple models such as LLaVA, Mini-GPT4, BLIP.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The experiments in the article are comprehensive, with experiments conducted on multiple multimodal large models.\", \"The writing/presentation is good, and some of the visual explanations are quite clear.\", \"The test-time backdoor researched in this article is quite interesting.\"], \"weaknesses\": [\"The method section seems quite vague. After reading, I still do not understand the principle of backdoor injection during the test phrase. It seems that the article dedicates a large portion of its content to introducing the scenario and highlighting the differences from traditional scenarios in the methodology section.\", \"It seems that the article did not analyze the threat model. Who is the attacker during the testing phase? Who is the victim? What are the capabilities of the attacker? Where do these attacks take place, in which scenarios/platforms?\", \"The technical contribution of this article is minimal. Could you perhaps emphasize the technical contribution again? I believe that a certain level of technical contribution is necessary for a top-tier conference like this.\"], \"questions\": [\"I suggest the author provide a detailed explanation of the method's principles and details, and explain why such a method is needed.\", \"I also recommend that the author elaborate on the main contributions of the article. 
In my opinion, it seems that only a new scenario has been proposed.\", \"For other suggestions, please refer to the \\\"Limitations\\\" section.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"In this study, the authors extend visual universal adversarial attacks, initially designed for image classification tasks, to large multi-modal models (MLLMs). Their method involves optimizing adversarial perturbations in conjunction with a predefined trigger token to elicit specific harmful responses from the models. The approach is tested on MLLMs including LLaVA-1.5, MiniGPT-4, InstructBLIP, and BLIP-2, demonstrating a notable success rate in the attacks. Additionally, the authors conduct several ablation studies to further analyze the method's effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"First, to the best of my knowledge, no prior work has explored attacks in this particular setting, combining adversarial perturbations on images with a triggering token.\\n\\nSecond, the draft is generally clear and easy to follow.\\n\\nThird, the experimental evaluation includes a robust selection of state-of-the-art MLLMs.\", \"weaknesses\": \"First, given that existing approaches, such as Dong et al. (2023b), have already demonstrated that visual universal adversarial perturbations (UAPs) can be applied to MLLMs (potentially in a black-box setting as well), the technical novelty of this work appears somewhat limited, especially so given the limited transferability (see below).\\n\\nSecond, a critical area that requires substantial expansion is the study of the transferability of the proposed attack. While a paragraph on page 10 touches on this, much remains unexplored. 
For example, how effective is the attack in generating text that is semantically similar to the target response? If it is not effective, what do you believe could be the underlying reasons, and how would you propose to investigate this further? From a practical perspective, the lack of cross-model transferability significantly reduces the attack's relevance.\\n\\nThird, aspects of the experimental evaluation could be improved. For instance, the rationale behind selecting certain types of adversarial attacks and the criteria for choosing and applying mitigation methods are not clearly explained (see below for specific examples).\\n\\nThe following are some detailed comments.\", \"page_2\": \"\\u201cIt is important to note that adversarial attacks require t_set = t_act, which may be quite strict as it necessitates both manipulating capacity and timeliness.\\u201d\", \"comment\": \"It is not clear to me how to read these numbers in such a setting. In fact, I would say that it is hardly meaningful to make such a measurement.\", \"table_5\": \"\\u201cTable 5: Attack under common corruptions. The universal adversarial perturbations are generated using the border attack with b = 6.\\u201d\", \"commen\": \"I found the experimental configuration rather under-specified here. Details on how the corruptions are applied are missing, which could greatly impact what the results are saying here. For instance, how do you crop? In the case of border attack, is the border retained somehow, or could it be cropped? Furthermore, what about the effect on other types of attacks?\", \"page_10\": \"\\u201cTherefore, we utilize caption evaluation metrics to assess the discrepancy between the model\\u2019s output with the introduction of a trigger into the input and the output of the original clean sample. 
This comparison reveals the sustained transfer attack potential of our AnyDoor attack, resulting in diminished model outputs.\\u201d\", \"questions\": \"(1): What is your technical contribution that is in addition to existing approaches?\\n\\n(2): How transferable are your attacks, across different models (models with different architecture or models that have gone through different finetuning).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe appreciate your time and insightful feedback on our work.\\n\\nBackdoor attacks seek to **inject a backdoor that can be later activated by a non-adversarial trigger**, with poisoning data and editing model parameters being two popular ways to accomplish this goal. However, we disagree with the assertion that \\\"if you do not poison data or edit the model, you are not conducting backdoor attacks.\\\" This conservative assertion actually limits the scope of backdoor attack research, particularly in the area of LLMs/MLLMs, as adversaries can hardly assume they can poison pretraining data. For example, even with a poisoning rate of $0.01\\\\\\\\%$, adversaries need to poison 1.5B tokens given the 15T-token pretraining data of Llama 3.\\n\\nOur **test-time backdoor attacks** extend the scope of backdoor attacks to include scenarios in which neither the training data nor the model parameters can be poisoned/edited. Our method achieves the goal of \\\"injecting a backdoor that can be later activated by a non-adversarial trigger\\\" while remaining applicable to deployed models that cannot be modified. We believe that our work will benefit the backdoor research community and inspire more interesting ideas that can efficiently backdoor large models.\\n\\nAfter careful consideration, we have decided to withdraw our paper from ICLR. 
Thank you once again for your thorough review and thoughtful comments.\\n\\nBest,\\\\\\nThe Authors\"}", "{\"summary\": \"This paper presents AnyDoor, a test-time backdoor attack method targeting multimodal large language models (MLLMs). AnyDoor uses universal adversarial perturbations in the vision domain, combined with a text prompt, to activate harmful backdoor responses. The approach leverages the higher capacity of the vision modality to generate perturbations, while a short phrase in the language domain triggers the backdoor response in MLLMs. Unlike traditional data poisoning backdoor attacks, AnyDoor optimizes the backdoor trigger at test time, making it feasible for real-world applications. This novel approach reveals a new adversarial threat to MLLMs, with comprehensive evaluations demonstrating AnyDoor\\u2019s effectiveness.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Optimizing the backdoor perturbation/trigger at test time is both novel and practical for MLLMs. Unlike data poisoning\\u2014which may be impractical in some settings\\u2014test-time optimization of the backdoor perturbation/trigger is a more realistic approach for real-world scenarios. This introduces a new form of adversarial threat for many MLLMs.\", \"In this threat model, the backdoor trigger is embedded in the text domain, while the image is also perturbed. The harmful backdoor response is activated only when both the image perturbation and text trigger are present, representing a unique and innovative threat model for MLLMs. The motivation behind this model is well illustrated in Figure 1.\", \"The thorough evaluations are appreciated, and the empirical results clearly demonstrate the proposed method\\u2019s effectiveness.\"], \"weaknesses\": [\"Although not explicitly stated, AnyDoor appears to require gradient access for perturbation optimization, implying a white-box setting. 
This requirement could limit its practical applicability, as many MLLMs are deployed as services without granting gradient access to users. This restriction makes the attack challenging to execute in real-world settings.\", \"The results in Table 9 indicate limited black-box transferability. When using LLaVA-1.5 as the source model, it would be helpful to know if the attack transfers effectively to other models such as InstructBLIP or BLIP2. Further exploration of cross-model transferability could offer more insights into AnyDoor\\u2019s robustness in black-box scenarios.\", \"The paper lacks evaluations with adversarially trained models. It would be great to assess AnyDoor\\u2019s effectiveness on MLLMs that incorporate adversarial training, such as a LLaVA model with an adversarially trained image encoder, as discussed in RobustCLIP by Schlarmann et al. (2024) [1]. Such analysis would shed light on whether adversarial training enhances model resilience to AnyDoor.\", \"The current perturbations, such as Border and Corner attacks, are visually apparent and might be easily detected by human observers. Defenders could employ simple countermeasures, like cropping, to neutralize these fixed-location perturbations. It would be great to test if randomizing the perturbation locations retains the attack\\u2019s effectiveness. Additionally, the high perturbation budget of 32/255 for $L_\\\\infty$ attack is noticeable. Including an ablation study with smaller perturbation budgets, such as 4/255, 8/255, and 16/255, could provide a better understanding of the trade-off between stealthiness and attack success.\", \"The paper has a few potentially misleading areas that would benefit from further clarification. Please refer to the questions section.\", \"---\", \"[1] Schlarmann, C., Singh, N. D., Croce, F., & Hein, M. Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models. 
In\\u00a0Forty-first International Conference on Machine Learning.\"], \"questions\": [\"Is AnyDoor assuming white-box access to the model?\", \"Without the image perturbation, can the trigger text alone activate the harmful response?\", \"In lines 176 - 183, the explanation of greater capacity in the vision domain is clear. Regarding timeliness, the reviewer is confused by the statement in line 178, \\\"attacking activation necessitates a modality with greater manipulating timeliness.\\\" Can the authors further explain where the trade-off is? Planting the backdoor using data poisoning does not seem to incur any overheads, but AnyDoor requires optimization.\", \"It is not clear what loss function $\\\\mathcal{L}$ is used in Eq (2). It would be great if the author could further clarify.\", \"For the \\\"Contain\\\" metric, does it only count all the target strings present in the response?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9OfKxKoYNw
DiffusionGuard: A Robust Defense Against Malicious Diffusion-based Image Editing
[ "June Suk Choi", "Kyungmin Lee", "Jongheon Jeong", "Saining Xie", "Jinwoo Shin", "Kimin Lee" ]
Recent advances in diffusion models have introduced a new era of text-guided image manipulation, enabling users to create realistic edited images with simple textual prompts. However, there is significant concern about the potential misuse of these methods, especially in creating misleading or harmful content. Although recent defense strategies, which introduce imperceptible adversarial noise to induce model failure, have shown promise, they remain ineffective against more sophisticated manipulations, such as editing with a mask. In this work, we propose DiffusionGuard, a robust and effective defense method against unauthorized edits by diffusion-based image editing models, even in challenging setups. Through a detailed analysis of these models, we introduce a novel objective that generates adversarial noise targeting the early stage of the diffusion process. This approach significantly improves the efficiency and effectiveness of adversarial noises. We also introduce a mask-augmentation technique to enhance robustness against various masks during test time. Finally, we introduce a comprehensive benchmark designed to evaluate the effectiveness and robustness of methods in protecting against privacy threats in realistic scenarios. Through extensive experiments, we show that our method achieves stronger protection and improved mask robustness with lower computational costs compared to the strongest baseline. Additionally, our method exhibits superior transferability and better resilience to noise removal techniques compared to all baseline methods. Our source code is publicly available at https://choi403.github.io/diffusionguard.
[ "image inpainting", "adversarial attack", "image editing", "ai safety", "diffusion model" ]
Accept (Poster)
https://openreview.net/pdf?id=9OfKxKoYNw
https://openreview.net/forum?id=9OfKxKoYNw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yk2lgr2SUX", "xkdxmhiKmp", "vbEVXcciKT", "tRALPpJTi0", "q8r9ITABdk", "px36zixdDK", "pEeg1t32lx", "p9jtggH1QW", "nDrGyqEMTR", "muroh1fXkU", "jMxjoeTqNF", "hhClHAJlTR", "fNqlVHKz9V", "dbjFdmUrsI", "d01W4ONbKa", "boMeN0MB2I", "ainZ4VKp2G", "X9mwndda2g", "WA5tSP7iWp", "SIPjdiRB0Y", "QGKT0GIbPM", "OnSFiicyJE", "M2RzFGCfOQ", "JbyUpaC02S", "EnOzfyQ8Bc", "EWskMmRPjf", "EDn02zMQND", "BkH2pqn3fV", "6eIZTXFkg5", "1udKXqVgxQ", "10UlwHROIi", "0U3iqKaeVC" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732276183733, 1730654383013, 1729733579785, 1732595816387, 1732276636237, 1730607125817, 1732279537852, 1732338255079, 1732178226900, 1732292461984, 1732210152705, 1732224740056, 1732276287187, 1732283811804, 1737523736421, 1732276362449, 1732178325071, 1732199243505, 1729613840874, 1732244177128, 1732178401158, 1732244390146, 1734798259017, 1732178533784, 1732276496552, 1732198731849, 1732178183469, 1732177748129, 1732178250261, 1732245094227, 1732177878168, 1732177930008 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_ScsB" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_ZPiB" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_QU4Q" ], [ 
"ICLR.cc/2025/Conference/Submission5966/Reviewer_ZPiB" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_WYYe" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_ScsB" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_WYYe" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_WYYe" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_ZPiB" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_ZPiB" ], [ "ICLR.cc/2025/Conference/Submission5966/Area_Chair_FnSA" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Reviewer_ZPiB" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ], [ "ICLR.cc/2025/Conference/Submission5966/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer ScsB,\\n\\nThank you for your thoughtful feedback and for increasing the score. We appreciate your comments, which greatly helped improve the depth of our work. Your insights were invaluable, and we look forward to further refining our research.\"}", "{\"summary\": \"The paper proposes DiffusionGuard, an image-cloaking algorithm to defend against malicious diffusion-based text-guided inpainting. Compared to previous works, it has two main proposals. 
First, instead of optimizing any denoising step using either image-space loss or reconstruction loss, DiffusionGuard only optimizes at the early stage (t = T) and aims to increase the norm of the noise. Second, it employs mask augmentation to improve the robustness of the proposed algorithm for different mask variations at test time. Experiments verified that DiffusionGuard outperforms the previous baselines on this task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposal of mask augmentation is sensible.\", \"DiffusionGuard outperforms the baselines in all metrics. Qualitative figures show that it often causes the inpainting models to generate plain and blurry inpainted output backgrounds.\"], \"weaknesses\": [\"The title is misleading. The work only focuses on diffusion-based text-guided inpainting, e.g., Stable Diffusion Inpainting. It does not consider other diffusion-based image editing methods such as Instruct-Pix2Pix, MasaCtrl... The authors should revise the title to better specify the scope of the work.\", \"The work only tests with Stable Diffusion Inpainting variants. Recent inpainting models, e.g., MagicBrush [1], should be mentioned and tested.\", \"L191-200: the mentioned \\\"unique behavior\\\" of inpainting models sounds misleading. In the early denoising stage, the fine details only appear on the unchanged region, which is basically copied from the input. The inpainting regions, i.e., the background, still do not have fine details and behave as in normal diffusion models.\", \"L200: The reason for targeting the early steps is not convincing. From the presented results, the proposed method affects only the inpainting regions outside of the face, which have similar behavior as in normal diffusion models. 
In the ablation studies, the author should add an extra experiment to test the case when Eq (4) is applied in all time steps instead of only in the early one.\", \"In mask augmentation, the mask is shrunk to be smaller. What happens if the mask used at test time is bigger?\", \"The PSNR metric used in Table 1 and Fig. 5 is not reliable. Given the same image and mask, we can have different editing results that match the input prompt. Hence, a small PSNR does not necessarily imply a successful defense; good editing can still produce a low PSNR score.\", \"The test set is small, with only 42 images. It is better to test on a much larger set of images.\", \"The authors ran experiments with 5 masks per testing image. From Fig.4, the masks are pretty similar; hence, the effect of changing the mask is not significant. I would trade the number of masks and prompts to have more testing images.\", \"The authors should provide a qualitative figure showcasing the cloaked images to see whether the added noise is obvious or not. Quantitative numbers (PSNR, SSIM) for it are also recommended.\", \"Fig.4: The first 3 examples are very good; the inpainted backgrounds are plain and blurry. However, the last example does not show that behavior. The authors should explain why. Fig.5a confirms that DiffusionGuard is not always that good and still loses to Photoguard 22-25% of the time.\", \"[1]. Zhang, K., Mo, L., Chen, W., Sun, H. and Su, Y., 2024. Magicbrush: A manually annotated dataset for instruction-guided image editing. Advances in Neural Information Processing Systems, 36.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work reveals that inpainting models generate fine details in the very early stages of the denoising process, leading to the development of a defense method against unauthorized image inpainting. 
A mask augmentation technique is proposed to enhance robustness. Additionally, a benchmark is introduced to evaluate the effectiveness of protection against unauthorized image inpainting.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. An insight is provided that inpainting models generate fine details during the very early stages of the denoising process.\\n2. A new objective specifically designed to prevent image inpainting is introduced.\\n3. A benchmark is introduced.\", \"weaknesses\": \"1. The mask augmentation is achieved by shrinking the contours inward. If malicious users provide masks larger than those used during training, will this affect performance?\\n\\n2. The diffusion model's sampling can begin from different timesteps, and various sampling schedulers may start at different timesteps. For example, when sampling with 50 steps of DDIM, T is typically around 981, whereas for 25 steps of DPM-Solver, T might be around 961. If the user uses a different sampler from the one used during training, or the same sampler but starts from a different timestep T, will the proposed algorithm still work in this case?\\n\\n3. The problem setting may be somewhat narrow. While the title suggests it is \\\"against image editing,\\\" the method is only effective for a specific type of editing\\u2014image inpainting. It remains unclear whether the method can prevent other forms of editing that don't involve masks, such as instruction-guided editing [1][2].\\n\\n4. 
Several recent references [3,4,5] on harmful concept removal are missing.\\n\\n---\\n[1] InstructPix2Pix: Learning to Follow Image Editing Instructions\\n\\n[2] MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing\\n\\n[3] One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications\\n\\n[4] MACE: Mass Concept Erasure in Diffusion Models\\n\\n[5] Separable Multi-Concept Erasure from Diffusion Models\", \"questions\": \"Please refer to the weaknesses.\\n\\nIf the authors address my concerns during the rebuttal, I would be open to adjusting my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer QU4Q,\\n\\nWe greatly appreciate the time and effort in reviewing our paper.\\n\\nAs the discussion period draws close, we kindly remind you that seven days remain for further comments or questions. We would appreciate the opportunity to address any additional concerns you may have before the discussion phase ends.\\n\\nThank you very much!\\n\\nMany thanks,\\n\\nAuthors\"}", "{\"comment\": \"Dear reviewer WYYe,\\n\\nThank you for your thoughtful feedback and insightful questions. We deeply appreciate the time you\\u2019ve taken to engage with our work and offer such valuable perspectives. Below, we address each of your points in detail:\\n\\n**[R1] Question about L2 loss**\\n\\nThank you for the question. We agree with your insight that L2 leads to better performance because diffusion models are trained for the MSE loss (e.g., between target noise and its prediction). More specifically, this training would effectively define the likelihood of the diffusion model with MSE as well; for instance, [1] defines the conditional log-likelihood using the L2 distance of the input image and a diffusion-denoised image. 
\\n\\nTherefore, attacking the diffusion model using the training loss would be the most effective, and other losses could be less effective. For example, an adversarial perturbation generated to maximize the L1 loss might not maximize the L2 loss. If a model trained using the L2 loss receives such input, the model would process it more accurately than when it receives an adversarial perturbation generated using the L2 loss.\\n\\n**[R2] Resolved concern about ablation study**\\n\\nWe are delighted that our additional ablation studies addressed your concern and provided clarity on different parameters. Thank you for your thoughtful feedback and encouraging words.\\n\\n**[R3] Clarification about adversarial perturbation for eye editing experiments**\\n\\nTo clarify, the adversarial perturbations for the example in [W5-1] (Figure 26) were generated using the previous mask (main object). In other words, the adversarial perturbation from our main experiment was directly reused, making this a transferred setup. We have updated our draft to reflect this clarification.\\n\\n**[R4] About sub-perturbations**\\n\\nWe fully agree that atomizing an image into several sections and optimizing for each individually presents a promising research direction. One interesting application of this approach could be for images containing multiple independent entities to protect\\u2014for example, an image with multiple people. In such cases, generating adversarial perturbations for each face separately and then merging them might prove more effective than simply optimizing for all faces simultaneously.\\n\\nWe deeply appreciate your insightful suggestion and are excited about the possibilities of future research direction. This discussion has been immensely thought-provoking and has further motivated us to explore these ideas in future work.\\n\\n[1] Jaini, P., Clark, K., and Geirhos, R., 2024. Intriguing properties of generative classifiers. 
International Conference on Learning Representations, 2024.\"}", "{\"summary\": \"The paper proposes an effective and robust method against malicious diffusion-based image editing. The method is interesting and insightful. With the proposed benchmark, the paper shows superior results compared to baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The observations, that the inpainting models produce fine details of the masked region at early steps, are interesting and insightful.\\n\\n2. Using augmented masks is a reasonable and effective method to improve robustness.\\n\\n3. The paper proposes a benchmark to evaluate different methods. Extensive results show the effectiveness and robustness of the method.\", \"weaknesses\": \"There are two main concerns.\\n\\n1. Did the authors try some specifically designed purification methods for such perturbations in diffusion models? Such as the method in [1].\\n\\n2. Only focusing on mask-based image editing may be a little limited. Currently, many editing methods do not require such masks, such as InstructPix2Pix [2]. Can the proposed method be used in these methods? Will the proposed method still be more effective and robust?\\n\\n[1] Bochuan Cao et al. IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI, 2023.\\n\\n[2] Tim Brooks et al. InstructPix2Pix: Learning to Follow Image Editing Instructions, 2023.\", \"questions\": \"1. Will different editing prompts have effects on the results? Are the perturbations generated with a single prompt or several different text prompts?\\n\\nFor other questions, please see the Weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' patient explanations and thorough experiments. 
The results regarding large masks are both intriguing and reasonable, the discussion on generalization is well-articulated, and the insights into instruct-pix2pix are thoughtful and commendable.\\n\\nAll my concerns have been effectively addressed, and I am pleased to raise the score to 6.\"}", "{\"comment\": \"Dear Reviewer WYYe,\\n\\nWe are very delighted that our comments and experiments were able to adequately address your previous concerns, and thank you for the constructive comments and for increasing the score.\\n\\nWe agree that atomizing an image into minimum semantic sections could indeed lead to a more robust solution against inpainting, especially under various mask inputs. This is indeed an interesting direction for future research, and we appreciate you bringing it to our attention. While our current approach focuses on exploiting the specific behavior of inpainting models and augmenting a mask and to design effective and robust attacks, incorporating information about semantic subparts in an image could further enhance robustness against various masks.\\n\\nWe are also grateful for your recognition of the experimental section and our efforts to provide comprehensive analyses. Your feedback has been invaluable in refining our work. Thank you again for your thoughtful and encouraging review.\"}", "{\"comment\": \"Dear Reviewer ZPiB,\\n\\nWe sincerely appreciate your insightful comments. They were incredibly helpful in improving our draft. We have addressed each comment in detail below.\\n\\n---\\n\\n**[W1] Effects of larger masks at test time**\\n\\nThis is an interesting question! We conducted additional experiments to test this scenario. Specifically, we generated two larger masks by dilating the existing masks (one seen and one unseen), ensuring they are significantly larger, with an average increase of 26% in mask area. 
As shown in Table A (below), we found that DiffusionGuard maintains strong protective effectiveness compared to the baselines, even when a larger mask is provided at test time. \\n\\n**Table A. Using a larger mask at test-time**\\n\\n| Method | CLIP Dir. Sim \\u2193 | CLIP Sim. \\u2193 | ImageReward \\u2193 | PSNR \\u2193 |\\n|------------------|-----------------|-------------|---------------|---------|\\n| PhotoGuard | 22.07 | -1.588 | 28.55 | 15.45 |\\n| AdvDM | 21.76 | -1.593 | 28.46 | **13.20** |\\n| Mist | 22.19 | -1.562 | 28.64 | 13.99 |\\n| SDS(-) | 21.29 | -1.587 | 28.20 | 13.96 |\\n| **DiffusionGuard** | **20.71** | **-1.709** | **27.86** | 14.72 |\\n\\nWe explain this in detail in Appendix E.5 and Table 8 in our updated draft.\\n\\n---\\n\\n**[W2] Inference starting with different timesteps or using different sampler**\\n\\nThank you for the intriguing question. To address this, we conducted two additional experiments: testing DiffusionGuard with DDIM using different steps, and testing with a different sampler (DPM-Solver) using 25 steps. Specifically, we tested DDIM with three additional inference steps (25, 40, 75), corresponding to T values of 961, 976, and 963, respectively. Additionally, we tested DPM-Solver with 25 steps. In all cases, we found that DiffusionGuard operates independently of inference steps and consistently outperforms baseline methods, as shown in Table B below. As shown in the table, DiffusionGuard consistently shows the best performance without being particularly sensitive to the number of sampling steps or the sampler type.\\n\\n**Table B. Evaluation using different number of DDIM inference steps or sampler**\\n\\n| **Method** | **CLIP Dir. 
Sim\\u2193** | **CLIP Sim.\\u2193** | **ImageReward\\u2193** | **PSNR\\u2193** |\\n|-------------------------------------------------|--------------------|----------------|------------------|-----------|\\n| **Unseen mask** | | | | |\\n| PhotoGuard, DDIM 25 steps | 22.93 | -1.421 | 30.04 | 15.43 |\\n| PhotoGuard, DDIM 40 steps | 23.21 | -1.357 | 30.23 | 14.69 |\\n| PhotoGuard, DDIM 50 steps (default) | 23.30 | -1.357 | 30.30 | 14.53 |\\n| PhotoGuard, DDIM 75 steps | 23.53 | -1.368 | 30.31 | 14.59 |\\n| PhotoGuard, DPM-Solver 25 steps | 18.75 | -1.682 | 26.73 | 11.83 |\\n| AdvDM, DDIM 25 steps | 23.92 | -1.382 | 30.73 | 14.16 |\\n| AdvDM, DDIM 40 steps | 24.27 | -1.358 | 30.94 | 13.52 |\\n| AdvDM, DDIM 50 steps (default) | 24.27 | -1.361 | 30.97 | 13.37 |\\n| AdvDM, DDIM 75 steps | 24.20 | -1.376 | 30.90 | 13.36 |\\n| AdvDM, DPM-Solver 25 steps | 19.37 | -1.744 | 27.27 | 10.63 |\\n| **DiffusionGuard, DDIM 25 steps** | **21.29** | **-1.596** | **28.59** | **14.04** |\\n| **DiffusionGuard, DDIM 40 steps** | **21.59** | **-1.562** | **28.90** | **13.29** |\\n| **DiffusionGuard, DDIM 50 steps (default)** | **21.84** | **-1.557** | **29.05** | **13.19** |\\n| **DiffusionGuard, DDIM 75 steps** | **22.08** | **-1.572** | **29.12** | **13.29** |\\n| **DiffusionGuard, DPM-Solver 25 steps** | **17.48** | **-1.816** | **25.53** | **10.20** |\\n\\nNote that while we presented two main baselines and unseen mask set only in this table due to space limit, DiffusionGuard outperformed all other baselines in a similar manner in both seen and unseen mask sets. We include the full detail of the experiments in Table 9 of Appendix E.6.\"}", "{\"comment\": \"Thank author for thorough response during the rebuttal period. 
I am pleased to see that all my previous concerns have been adequately addressed, and I have accordingly increased my rating to 6.\\n\\nWhile I find this paper to be methodologically sound and well-executed, I should note that although I have experience with `attack against LDM` and `LDM-based defenses`, I am not specifically an expert in inpainting. Nevertheless, my first impression of attacking inpainting is that there could be a solution that `atomizes the image into minimum semantic sections`, as I mentioned previously, which might lead to more robust attack methods against inpainting. This consideration somewhat tempers my enthusiasm for the novelty of the approach, preventing me from assigning a higher score of 8 or 10.\\n\\nThat said, I want to commend the experimental section, which comprehensively addresses all my concerns. The authors' diligent efforts during the rebuttal period have successfully clarified the vast majority of my questions. The thorough responses and additional analyses have significantly strengthened my confidence in the work's validity and contribution to the field.\"}", "{\"comment\": \"Thanks to the authors for their rebuttal. It addressed most of my concerns well, so I have increased the score.\"}", "{\"title\": \"request more clarification of the Re-Q5-1 and maybe some explanation of Q2\", \"comment\": \"## Q1\\nI was confused about that XD. Now this concern is solved.\\n\\n## Q2\\nThanks for tons of experiments. Choosing $L_2$ due to better performance makes sense, but more explanation or analysis would be beneficial. For example, I'm not sure if it's correct, but maybe the $L_2$ is related to the reconstruction loss or something else, so it has better performance? 
I'm expecting a convincing explanation.\\n\\n## Q3+Q4\\nThis solved my concern about the ablation study, which will provide a basic understanding of different parameters.\\n\\n## Q5-1\\nJust to double-check, could you clarify whether the adversarial sample was generated with the previous mask (main object) or the new mask (eye)?\\n\\n## Q5-2\\nVery interesting findings. I'm surprised that the merged perturbation is a little bit better than the default one. I think this could be a potential research direction: atomizing the image into several sections, then optimizing for each. A good speed-up strategy, such as optimizing all blocks at the same time, would be a huge innovation.\"}", "{\"comment\": \"Dear reviewer ZPiB,\\n\\nThank you for your thoughtful feedback and insightful questions. We deeply appreciate the time you\\u2019ve taken to engage with our work and offer such valuable perspectives. We addressed each point in detail in the replies below.\\n\\n**[R1] Qualitative examples for edits using various masks**\\n\\nThank you for the constructive comment. Based on your new suggestion, we added a new visualization in Figure 29 in Appendix K, including different masks and their corresponding edit results.\", \"we_used_a_total_of_7_masks\": \"4 smaller than the head (Rows 2\\u20135), 1 matching the head size (Row 1), and 2 larger than the head (Rows 6\\u20137). In all cases, DiffusionGuard results in successful protection, causing the edit to result in unrealistic images.\\n\\nA notable insight is that masks larger than the head, which include regions outside the head (especially the background), restrict editing flexibility. In the figure, for these larger masks, the background remains fixed as an empty entrance to a dark hallway (Rows 6\\u20137) to align with the dark surroundings of the source image, and the shirt color is consistently dark blue due to the original clothing. 
In contrast, the smaller masks (Rows 1\\u20135) allow diverse backgrounds, such as a wall, hallway, or hospital ward. This suggests that larger masks, as they include the background, are less ideal for flexible editing and emphasize the need to focus on the head, especially the face.\\n\\nAnother observation is that for all 7 masks, including the smoother backgrounds of Rows 6\\u20137, the boundary between the original face and the edited background is visibly distinct in the protected images. This likely results from the attack disrupting the inpainting model's internal processes, causing it to misinterpret colors. This effect could be useful for future research directions.\"}", "{\"comment\": \"Dear Reviewer ZPiB,\\n\\nThank you for your thoughtful feedback and for raising the score. We are delighted that our additional experiments and explanations were able to engage with the thoughtful ideas you raised. Your insights and comments were invaluable in improving our work, and we greatly appreciate the constructive discussion.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**[R2] About using different timesteps or schedulers**\\n\\nWe believe that optimizing for a single integer timestep (e.g., 1 to 1000) generalizes to nearby timesteps because timesteps are converted into real-number sinusoidal embeddings, similar to Transformer positional embeddings, as described in the DDPM [1] paper: \\\"Diffusion time t is specified by adding the Transformer sinusoidal position embedding into each residual block.\\\"\\nWhile diffusion model libraries like Diffusers represent timesteps as discrete integers (e.g., `981`), this is arbitrary, as timesteps exist on a continuous spectrum. 
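To make the continuous-timestep point concrete, here is a minimal sketch (not the paper's code) of a DDPM/Transformer-style sinusoidal timestep embedding; the dimensions and `max_period` follow the common convention, and the key observation is that nearby starting steps map to nearby embedding vectors, so the U-Net receives nearly identical conditioning:

```python
import math
import numpy as np

def timestep_embedding(t, dim=128, max_period=10000):
    """Sinusoidal embedding of a (continuous) diffusion timestep t."""
    half = dim // 2
    freqs = np.exp(-math.log(max_period) * np.arange(half) / half)
    args = t * freqs
    return np.concatenate([np.cos(args), np.sin(args)])

# A late starting step, an immediate neighbour, and a much earlier step:
# the late step and its neighbour land close together in embedding space.
e_late, e_near, e_early = (timestep_embedding(t) for t in (981.0, 980.0, 100.0))
```

This is only a toy illustration of why a perturbation optimized at one starting timestep can remain effective for a sampler that starts at a slightly different T.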
It is therefore reasonable that optimizing for one timestep also influences nearby ones, given common practices like interpolating positional embeddings (e.g., Vision Transformer [2]) or adjusting the number of timesteps dynamically (e.g., Consistency Models [3]).\\n\\nAnother example can be found in research on protection against diffusion models as well. In AdvDM [4], adversarial perturbations are generated by randomly sampling timesteps uniformly from all of {1, 2, ..., 1000}, rather than specific values like {21, 41, ..., 981}.\\n\\n[1] Ho, J., Jain, A., and Abbeel, P., 2020. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 2020.\\n\\n[2] Dosovitskiy, A., Beyer, L., Kolesnikov, A., et al., 2021. An image is worth 16x16 words: Transformers for image recognition at scale. International Conference on Learning Representations, 2021.\\n\\n[3] Song, Y., Dhariwal, P., Chen, M., et al., 2023. Consistency models. International Conference on Machine Learning, 2023.\\n\\n[4] Liang, C., Wu, X., Hua, Y., et al., 2023. Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples. International Conference on Machine Learning, 2023.\"}", "{\"comment\": \"Dear Reviewer WYYe,\\n\\nWe sincerely appreciate your insightful comments. They were incredibly helpful in improving our draft. We have addressed each comment in detail below.\\n\\n---\\n\\n**[Q1] Clarification of loss maximization**\\n\\nThank you very much for your careful review, and we apologize for the confusion. The loss should indeed be maximized, as stated in Eq. 5. We have clarified this in the revised draft.\\n\\n---\\n\\n**[Q2] Reason for choosing the L2 norm of the predicted noise**\\n\\nIn developing our method, we experimented with various loss function designs, including the L1 norm and total variation, as you mentioned. We found that maximizing the L2 norm empirically provided the most effective protection results. 
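As a toy illustration of the objective being ablated here (PGD ascent on the L2 norm of the predicted noise within an L-infinity ball), the sketch below substitutes a simple analytic function for the inpainting U-Net; the `predicted_noise` surrogate and its hand-derived gradient are illustrative assumptions, not the actual model, which would require autograd:

```python
import numpy as np

def predicted_noise(x):
    # Toy stand-in for eps_theta(x_T, T, mask), the U-Net's noise prediction
    # at the first denoising step; chosen so its gradient is analytic.
    return 0.5 * np.tanh(x)

def pgd_maximize_noise_norm(x, eps=8 / 255, step=1 / 255, n_steps=40):
    """PGD ascent on ||predicted_noise(x + delta)||^2 within an L-inf ball."""
    delta = np.zeros_like(x)
    for _ in range(n_steps):
        z = np.tanh(x + delta)
        grad = 0.5 * z * (1.0 - z ** 2)        # d/d(delta) of 0.25 * sum(z^2)
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)  # ascent + ball
        delta = np.clip(x + delta, 0.0, 1.0) - x                  # keep image valid
    return delta

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=(8, 8))         # toy "image" in [0, 1]
delta = pgd_maximize_noise_norm(x)
```

The real method maximizes the analogous quantity at t = T through the inpainting U-Net with a torch autograd backward pass, which is why a single U-Net step per attack iteration suffices.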
Table A (below) reports the performance of DiffusionGuard when using the L1 norm and total variation of the predicted noise as the loss function. \\n\\n**Table A. DiffusionGuard with L1 or total variation loss**\\n\\n| **Method** | **CLIP Dir. Sim \\u2193** | **CLIP Sim. \\u2193** | **ImageReward \\u2193** | **PSNR \\u2193** |\\n|-----------------------------------------|----------------------|------------------|--------------------|------------|\\n| **Seen mask** | | | | |\\n| DiffusionGuard, L1 loss | 19.37 | -1.789 | 26.71 | 12.62 |\\n| DiffusionGuard, total variation loss | 20.79 | -1.766 | 27.47 | 12.88 |\\n| **DiffusionGuard, L2 loss (default)** | **18.95** | **-1.807** | **26.55** | **12.60** |\\n| **Unseen mask** | | | | |\\n| DiffusionGuard, L1 loss | 21.97 | -1.549 | 29.14 | 13.23 |\\n| DiffusionGuard, total variation loss | 22.60 | -1.538 | 29.47 | 13.22 |\\n| **DiffusionGuard, L2 loss (default)** | **21.84** | **-1.557** | **29.05** | **13.19** |\\n\\nAs shown, using the L2 norm of the predicted noise results in the best (lowest) value for all metrics for both seen and unseen masks.\\n \\n---\\n\\n**[Q3] Influence of the early step T value**\\n\\nThank you for your constructive comment. As per your suggestion, we conducted additional ablation experiments by selecting T after splitting the timesteps into 5 equal intervals due to resource constraints. The results are presented in the Table B below. \\n\\n**Table B. Ablation of different early step values**\\n\\n| **Method** | **CLIP Dir. Sim \\u2193** | **CLIP Sim. 
\\u2193** | **ImageReward \\u2193** | **PSNR \\u2193** |\\n|-----------------------------------------|----------------------|------------------|--------------------|------------|\\n| **Unseen mask** | | | | |\\n| DiffusionGuard, step T/5 | 24.20 | -1.285 | 31.09 | 14.36 |\\n| DiffusionGuard, step 2T/5 | 23.59 | -1.388 | 30.54 | 14.32 |\\n| DiffusionGuard, step 3T/5 | 22.29 | -1.498 | 29.63 | 13.42 |\\n| DiffusionGuard, step 4T/5 | 21.88 | -1.552 | 29.24 | 13.03 |\\n| DiffusionGuard, step 5T/5 | 21.84 | -1.557 | 29.05 | 13.19 |\\n\\n\\nWe observed that T values around the first and second intervals yield the best protection strength, which we attribute to the significance of early denoising steps in inpainting models. This finding aligns with the motivation and approach of our method, which emphasizes early-stage (closer to pure Gaussian noise, i.e., T) loss.\"}", "{\"comment\": \"**[Q1] Effect of editing prompts on the generation of perturbations**\\n\\nThank you for this interesting question. In our experiments, we generated perturbations by setting the text prompt to an empty string (\\\"\\\" in Python) to maintain neutrality and ensure that they are not biased towards any specific test-time prompts, following PhotoGuard [1].\\n\\nRegarding the effects of different prompts, our observations indicate that perturbations generated with a non-empty text prompt are more tailored to that specific prompt. This makes them more effective at protecting against similar prompts due to their targeted nature but less effective against prompts that are unrelated.\\n\\nWe have clarified this in Appendix D (Experimental details) in our updated draft.\\n\\n[1] Salman, H., Khaddaj, A., Leclerc, G., Ilyas, A., and Madry, A., 2023. Raising the cost of malicious AI-powered image editing. International Conference on Machine Learning, 2023.\"}", "{\"summary\": \"The author proposed an attack method that targeting the LDM-based inpainting task. 
The method comes with a new loss function and a new data augmentation for the inpainting mask. The author also proposed a new benchmark for evaluating anti-inpainting methods. The experiments showed good performance. Generally, it is a complete work.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The author captured the key problem that the global adversarial perturbation will lose its adversarial semantics in inpainting due to the mask\\n2. The proposed method only needs to run one step of U-Net in each attack step\\n3. The experiments are quite comprehensive\", \"weaknesses\": \"1. The motivation for the loss is not clear\\n2. There is no ablation study for the hyperparameters\\n\\nSee questions for more details\", \"questions\": \"1. In line 242, the author wrote \\\"by minimizing the following loss ...\\\". Should it be minimizing or maximizing? I was a little bit confused. I hope the author could clarify this.\\n2. The author proposed a new objective or loss function for the PGD-like attack. Could the author tell me why you chose to maximize the L2 norm of the predicted noise in the earlier step? Why not L1 or total variation or focal norm? I didn't see any explanation of why max(L2) works.\\n3. The author didn't show the influence of different early step `t`. For example, from 1 to 10, there is no ablation about it. I'd like to see what's the influence of different `t`, and why.\\n4. Although the author has done the ablation for attack steps (comp. budget), the author didn't do the ablation for the learning rate choice. Some previous papers mentioned that a larger learning rate may cause better performance when attacking the generation task in some cases, which is counter-intuitive. I hope the author can do the ablation for this as well, to find the best hyperparameter settings.\\n5. 
Regarding lines 175 to 183, the authors mention that they only apply perturbations in sensitive areas most commonly used by malicious users. This is reasonable in most cases, but if a malicious user only wants to edit the eyes in a face photo, and wants to change the shape of the pupil or the position where the line of sight is focused, will inpainting be successful in this case? I would be inclined to think that editing or inpainting would still succeed in this case.\\n Therefore, I would like to ask the author to make a demo. For example, in a face image, the facial features are masked separately for attack, and then a perturbation composed of several sub-perturbations will be obtained. I want to see how effective a certain sub-perturbation is in this case, whether the inpainting of the sub-entity will be successful, and whether the editing of the entire entity will be successful.\\n I expect the author can conduct a series of experiments and show me the results, and it would be better if they could be analyzed quantitatively.\\n6. Some figures are too tiny to read, such as Figure 7a; the authors may prefer to make it wider to have a better visualization.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Therefore, we conducted additional experiments using various learning rate (PGD step size) values, and the results are presented below (Table C). \\n\\n**Table C. Learning rate (step size) hyperparameter search of DiffusionGuard**\\n\\n| **Method** | **CLIP Dir. Sim \\u2193** | **CLIP Sim. \\u2193** | **ImageReward \\u2193** | **PSNR \\u2193** |\\n|-----------------------------------------|----------------------|------------------|--------------------|------------|\\n| **Seen mask** | | | | |\\n| DiffusionGuard, lr=0.5 | 19.21 | -1.777 | 26.80 | **12.52** |\\n| DiffusionGuard, lr=1.0 | **18.95** | -1.807 | **26.55** | 12.60 |\\n| DiffusionGuard, lr=2.0 | 19.45 | **-1.816** | 26.90 | 12.52 |\\n| DiffusionGuard, lr=3.0 | 20.10 | -1.724 | 27.29 | 12.71 |\\n| DiffusionGuard, lr=4.0 | 20.75 | -1.677 | 27.72 | 12.80 |\\n| DiffusionGuard, lr=8.0 | 22.23 | -1.541 | 28.79 | 13.47 |\\n| **Unseen mask** | | | | |\\n| DiffusionGuard, lr=0.5 | 22.00 | **-1.560** | 29.23 | **13.13** |\\n| DiffusionGuard, lr=1.0 | **21.84** | -1.557 | **29.05** | 13.19 |\\n| DiffusionGuard, lr=2.0 | 22.13 | -1.513 | 29.30 | 13.15 |\\n| DiffusionGuard, lr=3.0 | 22.18 | -1.481 | 29.28 | 13.26 |\\n| DiffusionGuard, lr=4.0 | 22.59 | -1.463 | 29.54 | 13.39 |\\n| DiffusionGuard, lr=8.0 | 23.06 | -1.372 | 30.01 | 14.03 |\\n\\nAs shown, we find that a learning rate of 0.5/255 or 1.0/255 yields the best performance, and the performance gets worse as the learning rate increases. 
The hyperparameter search suggests that DiffusionGuard performs better when the learning rate is smaller, which is better aligned with the general intuition of optimization, unlike some previous papers which stated that larger learning rates may cause better performance.\\n\\n---\\n\\n**[Q5-1] When a malicious user edits only the eyes in a face photo**\\n\\nTo address the reviewer's concern, we conducted additional demonstration experiments in which only certain facial features are inpainted (e.g., shape of pupil, line of sight, shape of mouth), by crafting new masks that correspond to these new setups. We include the qualitative demo in Figure 24 of Appendix J, in which DiffusionGuard still causes the editing to fail, maintaining its protective effectiveness. \\n\\nWe believe this is because the noise is applied all over the face, so most of the noise survives even when certain facial features are inpainted, resulting in a failed edit.\"}", "{\"title\": \"About different sampling schedulers (or T)\", \"comment\": \"Thanks for the detailed experiments.\\n\\nAbout different sampling schedulers (or T), could the authors provide some discussion or rationale as to why this training method can be generalized to different samplers or T? If my understanding is not wrong, only one T is seen during training.\"}", "{\"metareview\": \"This work introduces a novel adversarial noise-based defense method designed to protect images from unauthorized edits by diffusion-based image editing techniques. The authors propose an objective that targets the early stages of the diffusion process to enhance attack performance, complemented by a mask-augmentation technique that further improves robustness. The reviewers unanimously support the paper's acceptance with positive ratings (i.e., 6.0 on average), recognizing its contribution to addressing the challenges of diffusion-based image manipulation with an efficient and robust protection method. 
They also commend the innovative approach of crafting adversarial noise by focusing on the early denoising stages. Additionally, reviewers WYYe, QU4Q, and ScsB highlight the effectiveness of the mask augmentation method in enhancing robustness during test time.\\n\\nBased on these positive evaluations, we have decided to accept the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, the reviewers requested thorough evaluations (ScsB, QU4Q) and detailed experiments offering new insights into protection effectiveness, robustness, and efficiency (ZPiB, WYYe). The authors properly addressed these concerns and provided the results of the requested evaluations, including more experiments with instruction-based editing (ScsB, QU4Q), evaluation on a more extensive test set (ScsB), robustness evaluation against diffusion-based purification (QU4Q), and experiments with different hyperparameters (WYYe, ZPiB).\"}", "{\"comment\": \"Dear reviewers,\\n\\nWe sincerely appreciate the time and effort you have dedicated to reviewing our paper. Your insightful comments have been incredibly valuable in refining and improving our work.\\n\\nAs reviewers highlighted, our paper addresses an important ethical issue concerning diffusion-based image editing (ScsB, QU4Q, ZPiB, WYYe) and presents a novel method to protect images against inpainting models based on an interesting insight about inpainting models (QU4Q, ZPiB). This approach is validated through comprehensive evaluations (ScsB, QU4Q, ZPiB) and complemented by clear and detailed explanations (QU4Q, WYYe).\\n\\nIn response to your feedback, we have conducted additional experiments and incorporated the findings into the revised draft. 
Below is an outline of the key updates:\\n\\n- **Evaluation of DiffusionGuard against instruction-based editing methods, including InstructPix2Pix** (Appendix E.4, Table 7; ScsB, QU4Q, ZPiB).\\n- **Comparison of effectiveness under larger test masks** (Appendix E.5, Table 8; ScsB, ZPiB).\\n- **Analysis of inference robustness across different samplers and timesteps** (Appendix E.6, Table 9; ZPiB).\\n- **Enhanced visualization and measurement of the visibility of the adversarial perturbations** (Appendix H, Figure 22; ScsB).\\n- **Additional references on recent studies about harmful concept removal** (Section 5; ZPiB).\\n- **Exploration of sub-perturbation approaches** (Appendix J.1, Figure 25; WYYe).\\n\\nWe believe that DiffusionGuard can be a useful addition to the ICLR community, and your feedback has helped us enhance the clarity and depth of our work.\\n\\nThank you once again for your constructive reviews and support.\\n\\nAuthors.\"}
As a control, we applied the same prompt to an inpainting model with a mask designating the face area.\\n\\nThe results reveal two key limitations of InstructPix2Pix: (1) the instruction affects unintended areas, such as the police officer in the image, always turning them into the celebrity in all cases, or even the background, and (2) the generated face quality is lower than that of the inpainting model. For highly conditioned celebrities like football players (e.g., Lionel Messi, Cristiano Ronaldo), even the background changes to match their context, such as resembling a football stadium.\\n\\n\\n- **Starting with celebrity image and changing the background (Figure 25 of Appendix I.2):**\\n Starting with celebrity images, we used InstructPix2Pix with the instruction \\\"Change the background to jail\\\" and Stable Diffusion Inpainting with the prompt \\\"A photo of a person in jail\\\". For inpainting, we used a mask designating the face region.\\n\\n As shown in Figure 25, InstructPix2Pix struggles with this task, often altering the celebrity into a different person (Rows 1, 3, 4, 6) or over-conditioning on the source image, such as the postures or letters inside them (Rows 2, 5, 6). In contrast, the inpainting model successfully generates accurate representations of celebrities in a jail setting in all cases.\"}", "{\"comment\": \"**[Q5-2] Effectiveness of dividing perturbation into sub-perturbations**\\n\\nThis is a very intriguing idea! As suggested, we conducted a series of experiments and detailed them in Appendix J.1 of our updated draft.\\n\\nAs shown in Figure 25 of Appendix J.1, we divided a face in a given image into three subparts: (1) eyes and forehead, (2) mouth and cheeks, and (3) neck. Correspondingly, we split the original mask into three masks, each targeting a subpart. 
Using DiffusionGuard, we generated adversarial perturbations for each subpart, then merged them into a single perturbation matching the original mask's shape.\\n\\nWe compared this merged perturbation with the original DiffusionGuard perturbation under four scenarios: editing the eyes, mouth, neck, and the entire image. The quantitative results are presented in Table D below. Interestingly, the merged perturbation performed similarly to the original for localized edits and slightly better for full-image protection.\\n\\n**Table D. Sub-perturbation quantitative results**\\n\\n| **Method** | **CLIP Dir. Sim \\u2193** | **CLIP Sim. \\u2193** | **ImageReward \\u2193** | **PSNR \\u2193** |\\n|-----------------------------------------|----------------------|------------------|--------------------|------------|\\n| **Mask: Eye** | | | | |\\n| Default DiffusionGuard | 1.48 | **20.28** | **-1.503** | **33.45** |\\n| Sub-perturbation DiffusionGuard | **0.67** | 20.31 | -1.442 | 34.45 |\\n| **Mask: Mouth** | | | | |\\n| Default DiffusionGuard | 8.03 | 17.91 | -1.363 | 30.71 |\\n| Sub-perturbation DiffusionGuard | **7.95** | **17.82** | **-1.400** | **31.73** |\\n| **Mask: Neck** | | | | |\\n| Default DiffusionGuard | **14.12** | **20.01** | **-0.269** | **30.63** |\\n| Sub-perturbation DiffusionGuard | 16.88 | 20.48 | 0.071 | 31.71 |\\n| **Mask: Entire face** | | | | |\\n| Default DiffusionGuard | 2.34 | 19.51 | -1.98 | 37.47 |\\n| Sub-perturbation DiffusionGuard | **2.12** | **19.40** | **-1.99** | **34.18** |\\n\\nWhile the idea of generating sub-perturbations has great potential for development, it did not consistently enhance protection in this experiment, likely because the protective noise over the targeted areas is not input to the inpainting model. Despite this, the approach shows promise, especially when inpainting the background, instead of inside the face. 
For instance, if a malicious user regenerated everywhere except the eyes and the nose, this idea could be more effective than default perturbations.\\n\\nThank you again for suggesting this fascinating idea about sub-perturbations, and we will continue to explore any possible directions to enhance its effectiveness.\\n\\n---\\n\\n**[Q6] Figure 7a too small**\\n\\nThank you for your careful review. As per your suggestion, we have updated Figure 7a to make it wider, improving its visualization. This change has been incorporated into the updated draft for your convenience.\"}", "{\"comment\": \"Dear Reviewer QU4Q,\\n\\nWe sincerely appreciate your insightful comments. They were incredibly helpful in improving our draft. We have addressed each comment in detail below.\\n\\n---\\n\\n**[W1] Testing against purification methods specifically designed for diffusion models**\\n\\nWe appreciate your constructive comment. Following your suggestion, we conducted additional evaluations of DiffusionGuard and the baseline models against IMPRESS, a purification method specifically designed for defense against diffusion models. The results, as detailed in Table A below, demonstrate that DiffusionGuard remains more resilient compared to the baseline methods, achieving best (bold) or second-to-best (italicized) results in all metrics. This shows the robustness of DiffusionGuard against targeted purification strategies.\\n\\n**Table A. Evaluation after purifying each protection method with IMPRESS**\\n\\n| **Method** | **CLIP Dir. Sim \\u2193** | **CLIP Sim. 
\\u2193** | **ImageReward \\u2193** | **PSNR \\u2193** |\\n|--------------------------|---------------------|-------------------|--------------------|--------------|\\n| **Seen mask** | | | | |\\n| PhotoGuard | *23.45* | -1.387 | *29.84* | 14.28 |\\n| AdvDM | 23.57 | *-1.458* | 30.04 | **13.57** |\\n| Mist | 23.69 | -1.417 | 30.07 | 14.14 |\\n| SDS(-) | 23.49 | -1.334 | 29.94 | 14.46 |\\n| **DiffusionGuard** | **22.37** | **-1.500** | **29.10** | *13.78* |\\n| **Unseen mask** | | | | |\\n| PhotoGuard | 23.33 | -1.348 | *29.94* | 14.58 |\\n| AdvDM | 23.28 | **-1.442** | 30.00 | **13.55** |\\n| Mist | *23.24* | -1.407 | 30.09 | 14.08 |\\n| SDS(-) | 23.87 | -1.231 | 30.80 | 14.32 |\\n| **DiffusionGuard** | **22.81** | *-1.418* | **29.64** | *13.92* |\\n\\n---\\n\\n\\n**[W2] DiffusionGuard against InstructPix2Pix**\\n\\nThank you for your constructive feedback. Our proposed method can be used with instruction-based models such as InstructPix2Pix as well, especially the early-stage loss component. Although mask augmentation cannot be used as instruction-based models do not accept any mask input, our loss can still be applied.\\n\\nOur focus on inpainting methods stems from their superior practical usefulness and flexibility in editing. While instruction-based methods like InstructPix2Pix can edit images without a mask, they tend to preserve high-level structures, such as body posture, which limits their capacity for drastic modifications (see Figure 23 in Appendix I for failure cases of InstructPix2Pix). In contrast, masked inpainting allows for the complete regeneration of designated areas. Consequently, we believe that inpainting-based editing methods, which are the focus of this paper, offer more practical value for complex scenarios than instruction-based (non-inpainting) editing methods. \\n\\nFurthermore, to strengthen our work, we have extended our evaluations to include instruction-based methods such as InstructPix2Pix. 
Our results show that DiffusionGuard provides superior protective effectiveness compared to existing methods when applied to InstructPix2Pix, as detailed in Table B below.\\n\\n**Table B. Comparison using InstructPix2Pix**\\n\\n| **Method** | **CLIP Dir. Sim \\u2193** | **CLIP Sim. \\u2193** | **ImageReward \\u2193** | **PSNR \\u2193** |\\n|--------------------------|---------------------|-------------------|--------------------|--------------|\\n| PhotoGuard | 15.02 | -1.508 | 22.95 | 17.19 |\\n| AdvDM | 22.15 | -1.234 | 27.18 | 14.53 |\\n| Mist | 22.82 | -1.204 | 27.48 | 14.35 |\\n| SDS(-) | 25.21 | -1.290 | 29.34 | **11.50** |\\n| **DiffusionGuard** | **14.07** | **-1.591** | **21.74** | 17.42 |\\n\\nWe have included these results in Table 7 of Appendix E.4 with more details.\"}", "{\"comment\": \"Dear Reviewer ScsB,\\n\\nWe sincerely appreciate your insightful comments. They were incredibly helpful in improving our draft. We have addressed each comment in detail below.\\n\\n---\\n\\n**[W1] Misleading title and lack of comparison to other editing methods (e.g., InstructPix2Pix)**\\n\\nThank you for your constructive feedback. We acknowledge that the title may not fully reflect the scope of our work. We plan to revise it to \\\"DiffusionGuard: A Robust Defense Against Malicious Diffusion-based **Inpainting**\\\", which addresses your concern more accurately. \\n\\nOur focus on inpainting methods stems from their superior practical usefulness and flexibility in editing. While instruction-based methods can edit images without a mask, they tend to preserve high-level structures, such as body posture, which limits their capacity for drastic modifications (see Figure 23 in Appendix I for failure cases of InstructPix2Pix). In contrast, masked inpainting allows for the complete regeneration of designated areas. 
Consequently, we believe that inpainting-based editing methods, which are the focus of this paper, offer more practical value for complex scenarios than instruction-based (non-inpainting) editing methods. \\n\\nFurthermore, to strengthen our work, we have extended our evaluations to include instruction-based methods such as InstructPix2Pix. Our results show that DiffusionGuard provides superior protective effectiveness compared to existing methods when applied to InstructPix2Pix, as shown in Table A below.\\n\\n**Table A. Comparison using InstructPix2Pix**\\n\\n| Method | CLIP Dir. Sim \\u2193 | CLIP Sim. \\u2193 | ImageReward \\u2193 | PSNR \\u2193 |\\n|------------------|-----------------|-------------|---------------|---------|\\n| PhotoGuard | 15.02 | -1.508 | 22.95 | 17.19 |\\n| AdvDM | 22.15 | -1.234 | 27.18 | 14.53 |\\n| Mist | 22.82 | -1.204 | 27.48 | 14.35 |\\n| SDS(-) | 25.21 | -1.290 | 29.34 | **11.50** |\\n| **DiffusionGuard** | **14.07** | **-1.591** | **21.74** | 17.42 |\\n\\n\\nWe have included these results in Table 7 of Appendix E.4 with more details.\\n\\n---\\n\\n**[W2] Lack of comparison to MagicBrush**\\n\\nThank you for your comment. However, it is important to note that MagicBrush does not propose a new inpainting model but rather focuses on instruction-based editing without requiring an inpainting mask. This is explicitly stated by the authors: *\\\"We fine-tune InstructPix2Pix on MagicBrush and show that\\u2026\\\"* Although the dataset in the MagicBrush paper includes a mask for each edit instance, the authors clarify that no inpainting-specific models are trained using this dataset.\\n\\nOur work evaluates defense against inpainting using Stable Diffusion (SD) Inpainting 1.0 and 2.0, as they are the most widely used inpainting models. 
Most related studies also test primarily on these two models, as very few other public inpainting models are available.\\n\\n---\\n\\n**[W3] Misleading description of the unique behavior of inpainting models**\\n\\nTo clarify, both the inside and outside of the mask in Fig. 2 are generated solely by the denoiser, starting from pure Gaussian noise, without any external copy-and-paste operation. Starting from pure Gaussian noise, the denoiser of the inpainting model is able to sharply denoise the masked region in the early steps by observing the source image which is given as an additional input. In contrast, a normal diffusion model with the same inputs does not exhibit this behavior. This indicates that this copy-and-paste-like behavior of inpainting models is a learned behavior during their fine-tuning process and we refer to this as a \\\"unique behavior\\\".\\n\\nAs you noted, the rest of the image is generated similarly to normal diffusion models, lacking fine details in the early stages. This observation inspired our early-stage loss, based on the hypothesis that the early completion of the masked region may influence the generation of the rest of the image.\"}", "{\"comment\": \"**[W3] Somewhat narrow problem setting and evaluating against instruction-guided editing**\\n\\nThank you for your constructive feedback. Our proposed method can be used with instruction-based models such as InstructPix2Pix as well, especially the early-stage loss component. Although mask augmentation cannot be used as instruction-based models do not accept any mask input, our loss can still be applied.\\n\\nOur focus on inpainting methods stems from their superior practical usefulness and flexibility in editing. While instruction-based methods like InstructPix2Pix can edit images without a mask, they tend to preserve high-level structures, such as body posture, which limits their capacity for drastic modifications (see Figure 23 in Appendix I for failure cases of InstructPix2Pix). 
In contrast, masked inpainting allows for the complete regeneration of designated areas. Consequently, we believe that inpainting-based editing methods, which are the focus of this paper, offer more practical value for complex scenarios than instruction-based (non-inpainting) editing methods. \\n\\nFurthermore, to strengthen our work, we have extended our evaluations to include instruction-based methods such as InstructPix2Pix. Our results show that DiffusionGuard provides superior protective effectiveness compared to existing methods when applied to InstructPix2Pix, as detailed in the table below.\\n\\n**Table C. Comparison using InstructPix2Pix**\\n\\n| **Method** | **CLIP Dir. Sim\\u2193** | **CLIP Sim.\\u2193** | **ImageReward\\u2193** | **PSNR\\u2193** |\\n|---------------------|--------------------|----------------|------------------|--------------|\\n| PhotoGuard | 15.02 | -1.508 | 22.95 | 17.19 |\\n| AdvDM | 22.15 | -1.234 | 27.18 | 14.53 |\\n| Mist | 22.82 | -1.204 | 27.48 | 14.35 |\\n| SDS(-) | 25.21 | -1.290 | 29.34 | **11.50** |\\n| **DiffusionGuard** | **14.07** | **-1.591** | **21.74** | 17.42 |\\n\\nWe have included these results in Table 7 of Appendix E.4 with more details.\\n\\n---\\n\\n**[W4] Missing references on harmful concept removal**\\n\\nThank you for the constructive feedback. We have updated the draft as per your suggestion, and included the recent references on harmful concept removal in Section 5 (Related works).\"}", "{\"title\": \"About instruction-guided editing\", \"comment\": \"About instruction-guided editing, I am satisfied with the results that DiffusionGuard can protect images from them.\\n\\nAbout failure cases of Instruct-pix2pix, if users start from a bad image and then input prompts like replace the face with xxx, where xxx is a famous person that sd can generate, can it be successful? 
\\n\\nOr, if users start from a photo of a famous person and then input a prompt like changing the background as in jail, can it be successful?\"}", "{\"comment\": \"**[W4] Reason for targeting early steps not sufficiently convincing**\\n\\nAs clarified in [W3], the model does generate the inside of the face as well, and not just the outside, at the very early denoising step. This distinct behavior from normal diffusion models motivated targeting the early denoising stages.\\n\\nFollowing the common practice done by the baselines (PhotoGuard, AdvDM, Mist, SDS), we post-processed the generated results solely for demonstration purposes by copying and pasting the inside of the mask from the source image. This post-processing occluded the raw generated image and made the inside of the masks appear clean. We apologize for the confusion, and we have updated our draft to clearly state this. We also included raw generated images without post-processing in Figure 11 of Appendix D.\\n\\nAdditionally, we also conducted additional experiments following your suggestion, by applying Eq. 4 to multiple timesteps at the same time. Specifically, we applied the loss at timesteps $\\\\textbraceleft\\\\frac{T}{10}, \\\\frac{2T}{10}, ..., T\\\\textbraceright$ or $\\\\textbraceleft\\\\frac{T}{5}, \\\\frac{2T}{5}, ..., T\\\\textbraceright$ simultaneously. As shown in Table B (below), the performance degraded when not focusing solely on the early timestep, supporting our claims.\\n\\n**Table B. Ablation of applying our loss over multiple steps at the same time**\\n\\n| Method | CLIP Dir. Sim \\u2193 | CLIP Sim. 
\\u2193 | ImageReward \\u2193 | PSNR \\u2193 |\\n|--------------------------------------------|-----------------|-------------|---------------|---------|\\n| **Seen mask** | | | | |\\n| DiffusionGuard, {T/10, 2T/10, ..., T} | 23.02 | -1.530 | 29.32 | 14.15 |\\n| DiffusionGuard, {T/5, 2T/5, ..., T} | 23.76 | -1.454 | 30.00 | 14.97 |\\n| **DiffusionGuard (single early step)** | **18.95** | **-1.807** | **26.55** | **12.60** |\\n| **Unseen mask** | | | | |\\n| DiffusionGuard, {T/10, 2T/10, ..., T} | 23.36 | -1.379 | 30.25 | 14.27 |\\n| DiffusionGuard, {T/5, 2T/5, ..., T} | 24.07 | -1.319 | 30.74 | 15.51 |\\n| **DiffusionGuard (single early step)** | **21.84** | **-1.557** | **29.05** | **13.19** |\\n\\n---\\n\\n**[W5] Effects of bigger masks at test time**\\n\\nThis is an interesting question! We conducted additional experiments to test this scenario. Specifically, we generated two larger masks by dilating the existing masks (one seen and one unseen), ensuring they are significantly larger, with an average increase of 26% in mask area. As shown in Table C below, DiffusionGuard continues to demonstrate strong protective effectiveness, outperforming the baselines even with larger test masks.\\n\\n**Table C. Using a larger mask at test-time**\\n\\n| Method | CLIP Dir. Sim \\u2193 | CLIP Sim. \\u2193 | ImageReward \\u2193 | PSNR \\u2193 |\\n|------------------|-----------------|-------------|---------------|---------|\\n| PhotoGuard | 22.07 | -1.588 | 28.55 | 15.45 |\\n| AdvDM | 21.76 | -1.593 | 28.46 | **13.20** |\\n| Mist | 22.19 | -1.562 | 28.64 | 13.99 |\\n| SDS(-) | 21.29 | -1.587 | 28.20 | 13.96 |\\n| **DiffusionGuard** | **20.71** | **-1.709** | **27.86** | 14.72 |\\n\\n---\\n\\n**[W6] Reliability issue of PSNR**\\n\\nWe agree that PSNR may not fully capture a successful defense. Still, we have incorporated three additional metrics that more accurately reflect the effectiveness of our defense strategy: CLIP similarity, CLIP directional similarity, and ImageReward. 
The CLIP-based metrics assess how well the edited images align with the editing instructions, while ImageReward measures the overall editing quality.\\n\\nFollowing your suggestion, we also added a new chart (Figure 14 of Appendix E.2.2) by replacing PSNR with CLIP directional similarity in Figures 5b and 5c. The results are still aligned with our findings: DiffusionGuard consistently outperforms PhotoGuard across different compute and noise budgets.\"}", "{\"comment\": \"**[W7,8] Suggestion to use a larger test set, and trade masks and prompts for more images**\\n\\nThank you for your constructive feedback. While our original benchmark size aligns with most existing works in image protection against diffusion-based editing (42 images, each with 5 masks and 10 prompts), we agree that expanding the test set is important. For reference, SDS [1] employs a test set of 100 portrait images, each with 1 mask and 1 prompt, while PhotoGuard [2] tests on a smaller number of images but with up to 60 prompts each.\\n\\nIn response, we collected 688 new images from the FFHQ [3] dataset, each paired with two human-validated, automatically generated masks (one seen, one unseen). This significantly expands the test set compared to existing works. Our evaluation on this larger dataset shows that DiffusionGuard outperforms baseline methods by a large margin. Quantitative results are provided in Table D below. For this experiment, we used all 10 prompts without reducing the number of prompts, totaling 13,760 edited instances for each protection method.\\n\\n**Table D. Evaluation of each method on a new test set with 688 images**\\n\\n| Method | CLIP Dir. Sim \\u2193 | CLIP Sim. 
\\u2193 | ImageReward \\u2193 | PSNR \\u2193 |\\n|------------------|-----------------|-------------|---------------|---------|\\n| **Seen mask** | | | | |\\n| PhotoGuard | 24.31 | -1.986 | 26.05 | 12.79 |\\n| AdvDM | 26.69 | -2.018 | 28.03 | 13.40 |\\n| Mist | 26.39 | -1.962 | 27.84 | 14.09 |\\n| SDS(-) | 25.93 | -1.939 | 27.64 | 14.31 |\\n| **DiffusionGuard** | **22.48** | **-2.176** | **24.43** | **12.70** |\\n| **Unseen mask** | | | | |\\n| PhotoGuard | 26.28 | -1.948 | 27.88 | 14.19 |\\n| AdvDM | 27.22 | -1.967 | 28.78 | 13.24 |\\n| Mist | 26.78 | -1.898 | 28.39 | 14.00 |\\n| SDS(-) | 26.43 | -1.843 | 28.38 | 14.13 |\\n| **DiffusionGuard** | **24.26** | **-2.112** | **26.22** | **12.63** |\\n\\nWe plan to collect more images to further extend our dataset and release it publicly soon. We have also included the new dataset in the supplementary materials, resized and compressed due to attachment size limits.\\n\\n[1] Xue, H., Liang, C., Wu, X., and Chen, Y., 2024. Toward effective protection against diffusion-based mimicry through score distillation. International Conference on Learning Representations, 2024.\\n\\n[2] Salman, H., Khaddaj, A., Leclerc, G., Ilyas, A., and Madry, A., 2023. Raising the cost of malicious AI-powered image editing. International Conference on Machine Learning, 2023.\\n\\n[3] Karras, T., Laine, S., and Aila, T., 2019. A style-based generator architecture for generative adversarial networks. Conference on Computer Vision and Pattern Recognition, 2019.\"}" ] }
9OMvtboTJg
LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch
[ "Caigao JIANG", "Xiang Shu", "Hong Qian", "Xingyu Lu", "JUN ZHOU", "Aimin Zhou", "Yang Yu" ]
Optimization problems are prevalent across various scenarios. Formulating and then solving optimization problems described by natural language often requires highly specialized human expertise, which could block the widespread application of optimization-based decision making. To automate problem formulation and solving, leveraging large language models (LLMs) has emerged as a potential way. However, this kind of approach suffers from the issue of optimization generalization. Namely, the accuracy of most current LLM-based methods and the generality of optimization problem types that they can model are still limited. In this paper, we propose a unified learning-based framework called LLMOPT to boost optimization generalization. Starting from the natural language descriptions of optimization problems and a pre-trained LLM, LLMOPT constructs the introduced five-element formulation as a universal model for learning to define diverse optimization problem types. Then, LLMOPT employs multi-instruction tuning to enhance both problem formalization and solver code generation accuracy and generality. After that, to prevent hallucinations in LLMs, such as sacrificing solving accuracy to avoid execution errors, the model alignment and self-correction mechanism are adopted in LLMOPT. We evaluate the optimization generalization ability of LLMOPT and the compared methods across six real-world datasets covering roughly 20 fields such as health, environment, energy, and manufacturing. Extensive experimental results show that LLMOPT is able to model various optimization problem types such as linear/nonlinear programming, mixed integer programming, and combinatorial optimization, and achieves a notable 11.08% average solving accuracy improvement compared with the state-of-the-art methods. The code is available at https://github.com/caigaojiang/LLMOPT.
[ "Optimization", "Optimization Problem Formulation", "Problem Definition", "Foundation Model" ]
Accept (Poster)
https://openreview.net/pdf?id=9OMvtboTJg
https://openreview.net/forum?id=9OMvtboTJg
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yzUtjIRGJy", "yEaODidV9p", "wzGmM4vc7r", "wKlQN4dkW1", "w6LOlrcbCj", "v24b9jvNh7", "tyYMbyWpyH", "sbyczWwXhN", "sWFA8jH1tn", "sDSu5fgerO", "lBCIDpbTsL", "kxAb9IOInu", "goNaBYicxf", "f2a6nakOBU", "eGjRM5xD4K", "eB8WysKC8M", "daHIYbsvtm", "dNzYVj4zH0", "d8OgRbNljP", "c06QIohetP", "as2E63xUiA", "apYLZhIe4z", "ajQSVpLExI", "Z2z5aDkZUu", "Y9f3OC6pps", "XDnhQNUDSM", "W8x3Pti8yy", "UiDOLk9qd3", "UF1mJmbBxj", "R7W5y015Zc", "QPYD0c8Leg", "QIfxmerssl", "OtXDdmteCo", "OmCm8B9ssF", "NBkM5S2Off", "KkzbuD8kzC", "Jk35XXQv3k", "JWZgMG9OkG", "IUv4sG4RCU", "Hp4OPSpfnR", "HD5y1WhMxD", "GEw7eWRUl6", "FPZQXqGB6v", "EMnxankXBT", "DAidztCJSG", "ByckZhXU7i", "BusyLdnE1M", "BDQaKorf4q", "AnNM0vkDZ4", "9zQlM9ylUO", "9jDuRb8J2x", "98sg4s8lb8", "7VdJIUjEA3", "7NMNCRau8G", "6pJgqxgo2h", "6SxiUbiykF", "5JiG9Ym20c", "4nRlA4fBD4", "47Q0SzyVE9", "0hrnx7iZdf", "0BTZcD0mDP" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", 
"official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732196761718, 1732197158453, 1732196922766, 1730708240557, 1730387626074, 1732197811786, 1733278764773, 1733215819991, 1732197660388, 1732197504230, 1732538627125, 1732538708152, 1733208392895, 1732515972572, 1732197609478, 1737523984760, 1732197848120, 1732852564705, 1732656038699, 1732197922896, 1732196823867, 1730514428842, 1732197023298, 1732196659692, 1732197193283, 1733208497787, 1732197451947, 1733114808676, 1732197384450, 1732538670404, 1732197790576, 1732197133741, 1733208448012, 1732197568926, 1732538738314, 1733140125272, 1734605319779, 1732198060392, 1733140018031, 1732852671023, 1732852604722, 1732682086549, 1733195327051, 1732197983273, 1733208341718, 1732197686699, 1732197093286, 1732197953443, 1733139957414, 1732196862215, 1733140071656, 1729421222160, 1732198077785, 1732852716801, 1732197430740, 1733228008117, 1732196895644, 1732197060025, 1732197544234, 1732716253701, 1733224003801 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_izRU" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_iJY5" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_feJ7" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Area_Chair_neSr" ], [ "ICLR.cc/2025/Conference/Submission9457/Area_Chair_neSr" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" 
], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_JwJz" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_JwJz" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Area_Chair_neSr" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Area_Chair_neSr" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Area_Chair_neSr" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_iJY5" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_JwJz" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_feJ7" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" 
], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_iJY5" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ], [ "ICLR.cc/2025/Conference/Submission9457/Reviewer_feJ7" ], [ "ICLR.cc/2025/Conference/Submission9457/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer izRU (1/5)\", \"comment\": \"Thank you for your valuable questions raised in the review! We would like to discuss them with you, and if there are any other questions, please feel free to ask, and we will respond promptly.\\n\\n## About data labeling.\\n\\n### Response to Question 1: Expert involvement in data labeling.\\n\\nThank you for your concerns about data labeling! First, we introduce **the process of data labeling**, which is divided into four stages: preliminary review, expert labeling, expert review, and data aggregation.\\n\\n+ _**Preliminary review**_**.** Initially review the data and remove unfeasible problems (unfeasible not means difficult). With the help of GPT-4o, we divide optimization problems into two categories according to their difficulty. Problems that meet one of the following conditions will be classified as difficult problems: at least 3 out of 5 solutions using GPT-4o are inconsistent, the code generated by GPT-4o has errors, and experts have found complex constraints, reasoning, or large amounts of data.\\n+ _**Expert labeling**_**.** For simple questions, 2 experts independently annotate. And 3 experts independently label for complex questions. For each question, five-element and solver code need to be labeled, and the code must be run without errors. In this stage, experts may use GPT-4o to generate text that meets the expert's intentions to reduce typing time and generate more formatted code. 
In order to improve data quality, experts may modify questions appropriately to make them more suitable for the problem scenario, or delete inappropriate questions (such as unfeasible ones).\n+ _**Expert review**_**.** For each question, a new expert checks the labels of other experts in the previous step, which is based on the correctness of the problem modeling and the consistency of the labels of different experts (the same question may have different but correct labels from different experts). Highly controversial questions or those with any incorrect labels are included in an independent challenging dataset.\n+ _**Data aggregation**_**.** Five experts discuss and analyze each of the data in the independent difficult dataset to determine whether the problem has a solution or can be adjusted to a feasible problem, and decide whether to abandon the problem based on this. If not, the experts will discuss and complete the labeling of these data. The correctly labeled questions that have passed the expert review will be summarized. And the other feasible questions that have been incorrectly labeled during the entire labeling process will be summarized.\n\nThen, we anonymously introduce the experts' qualifications. The above steps are completed by 12 experts. The \"preliminary review\" and \"expert labeling\" are finished by 9 undergraduate students with bachelor's degrees or above (computer science or mathematics, all of whom have taken optimization courses), including 4 master's students in related fields (1 doctoral student). The \"expert review\" is finished by 1 university professor whose research is optimization in machine learning and 2 algorithm engineers working on operations research optimization. \"Data aggregation\" is finished by the experts except the undergraduates. During the data labeling process, we ensure that **each expert completed the labeling independently**.
In the first three stages, **a question would not be assigned to the same expert twice.**\n\nFinally, we explain **the effectiveness of the expert labels**. The review pass rate for the seven experts assigned to simple question labeling exceeds 90%, while the pass rate for the four experts labeling complex questions is above 80% (2 experts labeled both simple and complex questions). Considering the above statistics, along with the fact that **the labeling results will undergo review by senior experts in the third stage**, the reliability and effectiveness of the expert labeling process are well-supported.\n\nThanks for your suggestion! We will add these details about expert labeling to the paper.\"}", "{\"title\": \"Reply to Reviewer JwJz (5/9)\", \"comment\": [\"## Response to Questions about the data augmentation and labeling by experts.\", \"**(1) The process of expert labeling**, which is divided into four stages: preliminary review, expert labeling, expert review, and data aggregation.\", \"_**Preliminary review**_**.** Initially review the data and remove unfeasible problems (unfeasible does not mean difficult). With the help of GPT-4o, we divide optimization problems into two categories according to their difficulty. Problems that meet one of the following conditions will be classified as difficult problems: at least 3 out of 5 solutions using GPT-4o are inconsistent, the code generated by GPT-4o has errors, and experts have found complex constraints, reasoning, or large amounts of data.\", \"_**Expert labeling**_**.** For simple questions, 2 experts independently annotate. And 3 experts independently label complex questions. For each question, the five-element and solver code need to be labeled, and the code must run without errors. In this stage, experts may use GPT-4o to generate text that meets the expert's intentions to reduce typing time and generate more formatted code.
In order to improve data quality, experts may modify questions appropriately to make them more suitable for the problem scenario, or delete inappropriate questions (such as unfeasible ones).\", \"_**Expert review**_**.** For each question, a new expert checks the labels of other experts in the previous step, which is based on the correctness of the problem modeling and the consistency of the labels of different experts (the same question may have different but correct labels from different experts). Highly controversial questions or those with any incorrect labels are included in an independent challenging dataset.\", \"_**Data aggregation**_**.** Five experts discuss and analyze each of the data in the independent difficult dataset to determine whether the problem has a solution or can be adjusted to a feasible problem, and decide whether to abandon the problem based on this. If not, the experts will discuss and complete the labeling of these data. The correctly labeled questions that have passed the expert review will be summarized. And the other feasible questions that have been incorrectly labeled during the entire labeling process will be summarized.\", \"**(2) About data for SFT and KTO.** In the process described above, during the expert labeling process, GPT-4o is used to assist in generating more standardized text, and experts review these results to ensure they align with their intentions. Since the texts generated by GPT-4o are not always correct, incorrect labels are found by the experts during the second and third stages. In the fourth stage, all the feasible problems are divided into two datasets according to correct labels and incorrect labels. Data with correct labels are used to construct the SFT training set as well as the positive samples in the KTO training set. Meanwhile, data with incorrect labels are used to construct the negative samples in the KTO training set.
Consequently, the data included in the SFT training set are inherently part of the KTO training set as well. We will provide detailed explanations in the paper to eliminate any potential misunderstandings regarding the data labeling process. Thanks again for your suggestions!\"]}", "{\"title\": \"Reply to Reviewer izRU (5/5)\", \"comment\": \"### Response to Question 6: Further explanation for self-correction mechanism.\\n\\nThank you for your interest in the details of the self-correction mechanism. When querying an LLM about code-related issues, the LLM may exhibit a tendency to fall into a loop of flawed reasoning during multi-turn conversations. Therefore, in our self-correction implementation, we do not input all historical information into the LLM. As shown in the instruction template in Listing 4 of Appendix H, each correction is handled as an independent call to the LLM, focusing solely on the correction of the current five-element and code without including history data or relying on multi-turn dialogue. **This approach helps to avoid the reasoning loops caused by multi-turn interactions and enhances the robustness of the correction process.**\\n\\nThe robustness of the self-correction mechanism is one of the key topics we focus on, and we are actively exploring new approaches. One promising direction is **leveraging reinforcement learning to guide the LLM through a step-by-step self-correction process** [5]. Unlike the self-correction mechanism in LLMOPT, this approach breaks the correction process into multiple steps, using reinforcement learning to search for the correct correction logic chain. [5] has shown that this approach enables more precise problem identification and effective solutions. We are currently working on constructing relevant datasets and conducting experiments, with preliminary results indicating that this method could be more effective than the original self-correction. 
Thank you again for your interest!\\n\\nWe hope that our response has addressed your concerns, but if we missed anything please let us know.\\n\\n**References**:\\n\\n[1] NL4Opt dataset. [https://huggingface.co/datasets/CardinalOperations/NL4OPT/viewer](https://huggingface.co/datasets/CardinalOperations/NL4OPT/viewer)\\n\\n[2] Wei, Jason, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems 35 (2022): 24824-24837.\\n\\n[3] ICML 2024 Challenges on Automated Math Reasoning (Task 3). [https://www.codabench.org/competitions/2438/](https://www.codabench.org/competitions/2438/)\\n\\n[4] ORLM open-source model. [https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B](https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B)\\n\\n[5] Aviral Kumar, et al. Training language models to self-correct via reinforcement learning. 2024. [https://arxiv.org/pdf/2409.12917](https://arxiv.org/pdf/2409.12917)\"}", "{\"summary\": \"The authors of this paper introduced **LLMOPT**, a novel learning-based framework designed to enhance large language models' (LLMs) capability to define and solve general optimization problems described in natural language. The key innovation is the **five-element formulization**, which structures optimization problems into sets, parameters, variables, objectives, and constraints, enabling more accurate problem representation and solver code generation. LLMOPT also incorporates **multi-instruction supervised fine-tuning (SFT)** and **model alignment** using the Kahneman-Tversky Optimization (KTO) method to improve both the accuracy and generalization of solutions.\\n\\nAnother significant contribution is the **self-correction mechanism** in the auto-testing process, which automates error analysis and solution refinement during execution, ensuring robust performance without manual intervention in test-time. 
The framework was evaluated on six real-world datasets, covering diverse optimization types and fields, achieving an **average accuracy improvement of 11.08%** over state-of-the-art methods. This demonstrates LLMOPT's effectiveness in boosting both the accuracy and generalizability of LLMs in solving complex optimization problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper introduces a *novel five-element formulization* to define optimization problems, which significantly enhances the ability of large language models (LLMs) to accurately interpret and transform natural language descriptions into solvable optimization problems. This structured approach captures the essential elements of optimization scenarios, ensuring clearer problem representation and facilitating better code generation. The inclusion of elements such as sets, parameters, variables, objectives, and constraints helps the model produce more precise solutions and reduces the risk of omitting implicit problem aspects.\", \"Another notable feature is the *self-correction in test-time*, an automated process called *Auto-testing* integrated within LLMOPT to identify and rectify errors in generated solver code during execution. This mechanism analyzes output logs and determines if adjustments are necessary, enhancing the robustness and accuracy of problem-solving without manual intervention. By automatically looping back to problem reformulation or code generation when needed, LLMOPT can iteratively improve the accuracy of its solutions and adapt effectively to complex challenges.\", \"The paper boasts *strong evaluation results*, demonstrated by extensive testing across six real-world datasets encompassing approximately 20 different fields and various optimization problem types, such as linear and nonlinear programming and combinatorial optimization. 
LLMOPT shows superior performance, achieving a notable 11.08% average improvement in solving accuracy compared to existing state-of-the-art methods. This result underscores the framework\\u2019s generalization capabilities and effectiveness in diverse optimization scenarios, validated by comprehensive comparisons and ablation studies.\"], \"weaknesses\": \"1. **Complexity of Data Labeling**: The proposed five-element formulation and the use of expert labeling for optimization problem formulations and solver code rely significantly on manual validation. While the authors mention the use of GPT-4 to assist in data generation, human experts are still required to verify the correctness of the outputs. This introduces potential scalability issues as extensive human oversight could limit the practicality of the approach, especially in large-scale deployments. Additionally, the authors do not provide a qualitative or quantitative analysis of the labeling quality or reliability performed by the human experts. Evidence demonstrating the expertise and qualifications of these experts should be presented to support the validity of the labeling process.\\n\\n2. **Insufficient Theoretical Justification for the Five-element Formulation**: The authors claim that the five-element formulation is a universal method for defining optimization problems, but they do not provide sufficient references or theoretical analysis to support this claim of 'universal'. Similarly, in the experimental results, a detailed and decomposed explanation of why the five-element formulation generalizes well and outperforms other models across different types of problems is lacking. Providing theoretical or empirical grounding beyond performance metrics would strengthen the argument for the effectiveness and universality of this approach.\\n\\n3. 
**More comparative Analysis of the Self-correction Mechanism**: While the authors compare the performance of LLMOPT with and without the self-correction mechanism, it would be beneficial to include an analysis of the performance gains when self-correction is combined with baseline methods (e.g., GPT-4 directly). Additionally, a longitudinal comparison between the current self-correction design and other correction methods, such as manual debugging, could further elaborate on the advantages and limitations of the self-correction approach.\", \"questions\": [\"**Questions and Suggestions for the Authors:**\", \"1. **Clarification on Expert Involvement in Data Labeling**:\", \"**Question**: Could the authors provide more details on the qualifications, expertise, or at least agreement rates of the human experts involved in the data labeling process?\", \"**Suggestion**: Including a quantitative or qualitative analysis of expert reliability and consistency would strengthen the validity of the data labeling claims. Evidence such as statistics of certifications or relevant experience, and agreement rates among experts would be valuable to better assess the credibility of the labeled data.\", \"2. **Theoretical Justification for the Five-element Formulation**:\", \"**Question**: What is the theoretical basis for claiming that the five-element formulation is a universal method for defining optimization problems?\", \"**Suggestion**: Providing additional references or an in-depth theoretical analysis that supports the universality and applicability of the five-element formulation across various optimization scenarios would enhance the argument. Empirical comparisons to alternative problem formulations could also be beneficial.\", \"3. 
**Comparative Performance Analysis of Self-correction Mechanism**:\", \"**Question**: How does the self-correction mechanism in LLMOPT compare with similar correction methods, such as manual debugging or integration with other LLMs or methods like GPT-4 in a correction loop?\", \"**Suggestion**: Presenting a detailed comparison of LLMOPT\\u2019s self-correction mechanism with existing correction strategies or showing performance metrics of LLMOPT combined with baseline models could provide insights into its relative effectiveness. This would help demonstrate whether the self-correction offers unique advantages or is comparable to simpler, established methods.\", \"4. **Scalability Concerns**:\", \"**Question**: How do the authors envision scaling the current framework for large-scale practical deployments given the heavy reliance on expert validation?\", \"**Suggestion**: Suggestions for automating or streamlining the expert review process, possibly by incorporating semi-supervised or active learning techniques, could address concerns about scalability and long-term sustainability of the labeling process.\", \"5. **Detailed Explanation of Generalization Capabilities**:\", \"**Question**: Can the authors provide more specific examples or case studies where the five-element formulation showed clear advantages in generalization compared to alternative models?\", \"**Suggestion**: Adding empirical evidence or a breakdown of performance across different types of optimization problems, particularly ones not included in the training set, would strengthen the claim of its broad applicability and effectiveness.\", \"6. 
**Addressing Potential Limitations in Self-correction**:\", \"**Question**: How does the self-correction mechanism handle potential limitations or biases in repeated corrections, such as overfitting to a specific type of error?\", \"**Suggestion**: Discussing safeguards or mechanisms in place to ensure diverse error handling and avoiding biased correction loops would provide more confidence in the robustness of the self-correction feature.\"], \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"**Ethical Concern**:\\nThe paper should include a statement regarding the treatment and compensation of human expert labelling annotators, as per conference requirements. Transparency in the ethical treatment, fair wages, and working conditions of these contributors is important. The authors are encouraged to provide details about how expert annotators were compensated and ensure that their work adhered to ethical standards. This declaration would reinforce the ethical integrity of the research and ensure compliance with conference guidelines on responsible and fair treatment of human contributors involved in the research process.\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes to finetune LLMs to improve MILP modeling from natural languages. Specifically, from natural language description of a MILP problem, this paper proposes to formulate the MILP problem as a five element formulation, and then generate solver code from the formulation. It uses data augmentation to expand the data set, and ask domain experts to label the data set to finetune the LLMs. 
Through extensive empirical study, it shows performance improvement over a variety of competitive baseline methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors did a good job benchmarking their method on a variety of benchmarks with many competitive baseline methods. The empirical evaluation seems to be comprehensive.\", \"To my knowledge, there have not been many works that aim to fine-tune the LLM for MILP modeling tasks, so the task is relatively novel.\", \"I appreciate the detailed discussion provided by the authors, which I think is valuable and can help guide the community in thinking about future steps.\"], \"weaknesses\": [\"The fine-tuning data collection requires expert manual labels, which limits the scalability and applicability of this method.\", \"Despite strength 2 mentioned above, the related work ORLM cited by the paper already took an initial step in fine-tuning LLMs for MILP modeling, and RLHF/DPO/KTO has been commonly used in the LLM literature to finetune LLMs, so I\u2019m a bit concerned whether the novelty of this work is sufficient, especially since the work requires expert manual labor for constructing the fine-tuning data.\", \"Table 3: it seems like a majority of the performance improvement is from self-correction instead of the five element components and KTO. If I understand correctly, the self-correction is prompting the LLM to correct any error it makes and has been used in previous papers such as [1], and it\u2019s not related to the KTO fine-tuning pipeline.
Given this, I\\u2019m a bit concerned about the contribution of the two main components (five elements and KTO) in this work.\", \"I find certain parts of the paper missing details and somewhat confusing (see my questions below).\"], \"questions\": [\"Five-element formulation: to my knowledge, OPTIMUS [1] also identifies components such as variables, constraints, parameters etc in the optimization description before they translate the optimization problem into code. Can the authors comment on the difference between the two modeling approaches?\", \"Line 222: \\u201cexperts review the generated problems, removing those with unclear descriptions or infeasible solutions to ensure data diversity and quality.\\u201d Can the authors comment on what are the criteria for the experts to determine the descriptions are \\u201cunclear\\u201d? How long is the expert labeling process? Can experts make mistakes? Can the authors comment on whether there is anyway to consider the expert mistakes to further improve the learning performance?\", \"Line 276 equation (3): I find the description of KTO confusing. For example, what is the reference model pi_ref used by the authors? Also, what is the optimal model (is it the same thing as the learning model)?\", \"Table 3: Can the authors comment on what is the setup w/o KTO? Is there still a training / fine-tuning component? what is the alternative loss?\"], \"additional_feedback\": [\"line 53: The authors provide Wrong citation for ORLM on page 1.\", \"line 249: \\u201care correct labeled by experts\\u201d \\u2192 \\u201care correctly labeled by experts\\u201d\", \"[1] AhmadiTeshnizi, Ali, Wenzhi Gao, and Madeleine Udell. 
\\\"OptiMUS: Scalable Optimization Modeling with (MI) LP Solvers and Large Language Models.\\\" ICML (2024).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer feJ7 (2/8)\", \"comment\": \"### Response to Question 2: Explanation of hallucination.\\n\\nThank you for your question! Your understanding of hallucinations is correct. Hallucinations typically refer to situations where an LLM generates outputs that appear reasonable but are actually fabricated, such as citing non-existent references, calling non-existent methods when writing code, or inferring false facts.\\n\\nIn LLMOPT, the LLM aims to generate solver code that can correctly formulate and solve the optimization problem described in natural language. During this process, we have observed several types of hallucination issues. For example:\\n\\n1. When solving a knapsack problem, the task describes a 0-1 knapsack problem. However, the LLM inexplicably assumes, \\\"_We assume all items are infinite_\\\" and writes code based on this incorrect assumption. This behavior of **erroneously attaching unrelated assumptions based on prior knowledge** is a typical example of hallucination.\\n2. When solving a Traveling Salesman Problem (TSP), the LLM incorrectly introduces an assumption during its reasoning process, stating, \\\"_We can take A as the starting point and G as the endpoint, and model the problem accordingly_.\\\" While this approach simplifies the generation of the solver code, it **arbitrarily adds conditions that do not align with the original problem description**. As a result, the generated solver code fails to solve the original problem. This behavior is another example of hallucination.\\n3. Even in simple problems, LLMs can exhibit hallucinations. For example, when generating solver code, the LLM directly uses `>` or `<` to represent strict inequality constraints. 
Although Python supports these symbols, most solvers do not support strict inequality modeling and require such constraints to be converted into non-strict inequalities by adding a small positive value. This **inappropriate analogy and subjective inference** are typical examples of hallucination.\\n\\n**Because hallucinations do exist, we designed model alignment to address them.** Thank you for your constructive questions and concerns! We will revise the expressions of the article and provide more accurate explanations based on your suggestions.\"}", "{\"comment\": \"We sincerely appreciate your recognition of our efforts during the rebuttal process. We warmly invite your active participation in the subsequent discussions, as your insights will be invaluable in further improving the quality of our paper. Once again, thank you for your thoughtful and constructive comments.\"}", "{\"title\": \"Official Comment by Reviewer feJ7\", \"comment\": \"Thank you for your response. However, I believe my concern remains unresolved. You mentioned that\\n\\n> which involves 2M+ leads (users) and 22 communities (companies), corresponding to the model.I and model.J variables in the code, **read from the data files**.\\n\\n but this seems inconsistent with your statement about\\n>The primary challenge for LLMOPT lies in **extracting and modeling optimization problems from complex and diverse natural language texts**\\n\\nReading variables from data files appears unrelated to the task of modeling problem from natural language texts.\\n\\nFurthermore, as you pointed out, LLMOPT can solve the Traveling Salesman Problem with approximately 30 nodes, which is an extremely small scale. There is no evidence provided to demonstrate LLMOPT's performance on larger TSPs. 
Therefore, I would prefer to maintain my score.\"}", "{\"title\": \"Reply to Reviewer iJY5 (5/6)\", \"comment\": \"### Response to Question 2: Details of expert judgement and labeling.\\n\\nThank you for your question about the process of data augmentation and expert labeling. We will add more detailed explanations in the article.\\n\\n1. **Details of expert judgement**. After data augmentation, experts check whether the questions are clear while labeling the five-element and the code. Specifically, there are two types of questions considered _unclear_. One type involves questions with no feasible solution; experts determine this by running code to check for feasibility (e.g., checking for constraint conflicts). The other type involves obvious errors that contradict common sense during the modeling process, such as a car being faster than an airplane in transportation scenarios. This is typically judged based on the expert\\u2019s real-world experience.\\n2. **Time and cost of expert labeling**. The labeling process is completed by 12 experts working collaboratively, taking approximately one month. The experts include 9 students with at least a bachelor\\u2019s degree (majoring in computer science or mathematics, all of whom had taken optimization courses), 1 university professor specializing in machine learning and optimization, and 2 algorithm engineers specializing in operations research and optimization. Among the 9 students, 4 are master\\u2019s students in related fields (including 1 PhD student).\\n3. **Mistake prevention during labeling**. We designed a four-stage expert labeling process to prevent incorrect annotations: _preliminary review_, _expert labeling_, _expert review_, and _data aggregation_. In the first stage, one expert, with the help of GPT-4, classifies the questions by difficulty level. Subsequently, each question is independently labeled by 2-3 different experts in the second stage (the number of experts depends on the difficulty).
The labeling results are then manually reviewed and finalized during the third stage. Highly contested data are set aside and discussed by a panel of 5-7 experts, who decide how to handle them. Finally, the data is organized into a training dataset. In terms of labeling validity, the review pass rate for the 7 experts responsible for labeling simple questions exceeded 90%, while the 4 experts handling complex questions had a pass rate exceeding 80% (2 of these experts worked on both simple and complex questions), with the lowest being 83.6%. **For simple questions, the agreement rate between two experts was 93.4%. For complex questions, the rate of two consistent labels out of three was 87.1%.** Considering these statistics, along with the third-stage review conducted by senior experts, the accuracy and reliability of the expert annotations were further ensured.\\n\\nThank you for your questions! We will add more details on data augmentation in the paper and the complete expert labeling process in the appendix.\"}", "{\"title\": \"Reply to Reviewer iJY5 (1/6)\", \"comment\": \"Thanks for your questions and concerns raised in the review! The following are our responses to these issues. If you have any additional questions, please don\\u2019t hesitate to ask, and we will respond promptly.\\n\\n## About the LLMOPT Novelty and Contributions\\n\\n### Response to Question 1: Difference between the five-element and the approach in OptiMUS.\\n\\nThe _five-element_ in LLMOPT and the _SNOP_ representation in OptiMUS are completely different.\\n\\n1. **The way of modeling**. **The five-element is extracted completely at once, while the SNOP representation in OptiMUS is extracted step by step**. How the SNOP data is extracted can be found on the website of OptiMUS [2], where the description, parameters, and clauses are extracted step by step. In this step-by-step extraction approach, the correctness of the previous step directly affects the next step.
Unreasonable subdivision of the extraction process leads to incoherent and incorrect modeling.\\n2. **Formulation**. The five-element is a mathematical formulation that describes the optimization problem in more detail and is better suited to modeling languages such as Pyomo. **The SNOP representation in OptiMUS, by contrast, is not a mathematical model but more like a data structure for an entity extraction task in NLP**. As shown in Fig. 3(a) of the original OptiMUS paper [1], the SNOP representation is composed of 7 types of information, including problem_type, problem_info, input_format, output_format, output_info, objective, and solver, which is not a mathematical formulation of an optimization problem.\\n3. **Usage and correction**. As shown in the live demo [2] of OptiMUS, when using OptiMUS, the problem and specific data are input separately, whereas when using LLMOPT, you only need to input the problem described in natural language, and the five-element and the code will be generated automatically. **In OptiMUS, if the user does not manually modify the SNOP representation during the extraction process, it cannot be modified or corrected**. In LLMOPT, self-correction can automatically determine whether the issue lies in the five-element or the code, and provide a detailed analysis.\\n\\nMoreover, **our method demonstrates a clear computational cost advantage over prompt-based methods, making it more suitable for large-scale applications**. For instance, our training (SFT combined with KTO) requires a total of $ 3.17 \\\\times 10^{19}$ FLOPs, with $ 20.68 \\\\times 10^{12}$ FLOPs needed for a single inference. In contrast, OptiMUS, which utilizes GPT-4, requires $ 3.88 \\\\times 10^{15}$ FLOPs per single inference.
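The break-even point implied by these figures can be checked with a few lines of arithmetic (our own back-of-the-envelope calculation; constant factors and multiple inferences per call are ignored):

```python
import math

# FLOPs figures quoted above: LLMOPT pays a one-off training cost plus a small
# per-inference cost; the GPT-4-based baseline pays a large per-inference cost only.
TRAIN_FLOPS = 3.17e19      # LLMOPT training (SFT + KTO)
PER_INFER_OURS = 20.68e12  # LLMOPT, single inference
PER_INFER_GPT4 = 3.88e15   # GPT-4-based OptiMUS, single inference

# Smallest N with TRAIN_FLOPS + N * PER_INFER_OURS < N * PER_INFER_GPT4
break_even = math.ceil(TRAIN_FLOPS / (PER_INFER_GPT4 - PER_INFER_OURS))
print(break_even)  # roughly 8,200 calls, consistent with the "9,000+" figure
```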
After 9,000+ API calls, our total computational cost becomes lower than that of OptiMUS, highlighting the cost-effectiveness and scalability of our approach (_see more details in the response to Reviewer feJ7 (5/8): Response to Question 4: Training and testing details of LLMOPT_).\"}", "{\"title\": \"ICLR Public Discussion Phase Ending Soon\", \"comment\": \"Dear Reviewer,\\n\\nThis is a kind reminder that the discussion phase will be ending soon on November 26th. Please read the author responses and engage in a constructive discussion with the authors.\\n\\nThank you for your time and cooperation.\\n\\nBest,\\n\\nArea Chair\"}", "{\"title\": \"ICLR Public Discussion Phase Ending Soon\", \"comment\": \"Dear Reviewer,\\n\\nThis is a kind reminder that the discussion phase will be ending soon on November 26th. Please read the author responses and engage in a constructive discussion with the authors.\\n\\nThank you for your time and cooperation.\\n\\nBest,\\n\\nArea Chair\"}", "{\"comment\": \"Dear Reviewer iJY5,\\n\\nWe have carefully revisited your question and provide explanations from the following two perspectives in the hope of addressing your concerns.\\n\\n**Learning contribution**: LLMOPT is the first learning-based approach to introduce a comprehensive framework that tackles the optimization generalization issue by proposing a learning-to-define methodology. This approach leverages LLMs to both define general optimization problems and enhance code accuracy. By employing multi-instruction learning, LLMOPT enables LLMs to model a wide range of problems and generate corresponding solving code, achieving state-of-the-art performance across more general tasks.
Unlike related work, which often focuses only on data augmentation or prompt engineering without truly addressing the process of learning, LLMOPT provides a complete framework that fully integrates data, learning, and auto-testing, setting a new standard in this field.\\n\\n**Human effort**: We acknowledge that a significant amount of human effort is invested in the data collection and annotation process, but we firmly believe this effort is essential. Complex, diverse, and high-quality data must first be curated by humans rather than relying solely on LLMs, as someone has to take the first step. Our team of human experts is actively constructing more complex optimization problem datasets and exploring the use of automated data augmentation techniques combined with Monte Carlo Tree Search (MCTS) to generate simulation data, aiming to enhance our dataset and address the challenge of data scalability. \\n\\nAs the deadline approaches, we sincerely hope to address your concerns further. We appreciate your effort and valuable feedback once again!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Looking forward to your reply\", \"comment\": \"Dear reviewers:\\n\\nWe would like to draw your attention to our paper. We believe that the majority of the concerns raised by the reviewers regarding our paper revolve around the further clarification of LLMOPT. During the rebuttal period, we diligently addressed these concerns by providing point-to-point responses, which included **the incorporation of new experiments to clarify potential misunderstandings** (such as ablation analysis for KTO, more ablation analysis for self-correction, generalization analysis of five-element, and results by problem type) and enhancing the clarity of data and the learning process (such as **the process of data labeling, the details of KTO, examples of code generated by LLMOPT**, and **LLMOPT compared with ORLM and OptiMUS**).
\\n\\nWhile resolving each reviewer's concerns has required a significant amount of time and effort on our part, we have noticed that some reviewers have not yet responded. \\n\\nThe deadline for author-reviewer discussion is approaching. We sincerely hope that our efforts and improvements can be taken into consideration. We kindly request your assistance in reminding the reviewers of our responses and enhancements.\\n\\nWe sincerely appreciate your valuable time!\\n\\nThanks and regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer iJY5 (4/6)\", \"comment\": \"## About expert labeling\\n\\n### Response to concern in Weakness 1: Necessity of manual labeling and the exploration of scalability.\\n\\nWe are glad that you paid attention to the data issue!\\n\\n**We believe that manual data labeling by experts is necessary**. As discussed in Section 5 of the paper, high-quality data plays a critical role in the performance of the model. However, there is a lack of open-source data in the field of optimization, and the available data is often disorganized and unreliable. For example, the following problem is an example from the NL4Opt dataset [1], which has been open-sourced by ORLM. This is a simple integer programming problem (with only two variables and two linear constraints). The problem is as follows.\\n\\n> A chair produced by Elm Furniture yields a profit of 43, while every dresser yields a 52 profit. Each week, 17 gallons of stain and 11 lengths of oak wood are available. Each chair requires 1.4 gallons of stain and 2 lengths of oak wood, while each dresser requires 1.1 gallons of stain and 3 lengths of oak wood. Determine the maximum profit.\\n\\nThe answer provided by ORLM is 236.5. A clear inconsistency arises here: given that the profits for both chairs and dressers are integers, it is illogical for the maximum profit to be a decimal. Upon verification, this incorrect result stems from the solution of producing 5.5 chairs and no dressers.
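The inconsistency can be confirmed by exhaustively enumerating integer production plans (a short script of ours, not part of any benchmark or dataset):

```python
# Exhaustive check of the chair/dresser problem quoted above.
# Profits: chair 43, dresser 52; weekly resources: 17 gallons of stain,
# 11 lengths of oak; per chair: 1.4 stain, 2 oak; per dresser: 1.1 stain, 3 oak.
plans = [
    (43 * c + 52 * d, c, d)
    for c in range(20)  # loose loop bounds; the constraints prune infeasible plans
    for d in range(20)
    if 1.4 * c + 1.1 * d <= 17 and 2 * c + 3 * d <= 11
]
print(max(plans))  # (224, 4, 1): the integer optimum is 224, not the fractional 236.5
```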
The correct ground truth should be 224, achieved by producing 4 chairs and 1 dresser. Errors in labeling are not uncommon in open-source datasets, and various types of mistakes are universal across different datasets, such as the lack of ground truth or the infeasibility of the problem. This further aggravates the challenges posed by the already limited availability of open-source data. **To address this issue, we dedicated approximately one month to creating a high-quality training dataset with the help of 12 experts**. This effort aims to ensure higher-quality model training and more reliable performance evaluation.\\n\\nAt the same time, we have further studied the issue of data scalability from two perspectives. **First**, leveraging high-quality data labeled by human experts, we apply various automated data augmentation techniques to enrich our dataset. **Second**, we are exploring reinforcement learning methods, such as Monte Carlo Tree Search (MCTS), to generate reasoning paths by utilizing GPT-4 to produce high-quality annotation data. For example, Pyomo does not support strict inequality constraints directly, so constraints like x > y often need to be transformed into non-strict inequalities like x >= y + 1e-6. By providing such examples, we aim to guide GPT-4o in generating _processes_, which will guide more accurate modeling and code. Once the model learns to transform strict inequalities, it should also be able to autonomously understand transformations for absolute value constraints. This ability will be a key factor influencing the scalability of automatic data annotation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to Reviewer feJ7 (3/8)\", \"comment\": \"### Response to Question 3: About self-correction.\\n\\nThank you for your questions and concerns! We carefully reviewed the results and performed a more detailed analysis of self-correction. **The self-correction mechanism does not play a decisive role.
Although the correction mechanism is widely used, the correction ability itself comes from the learning pipeline of LLMOPT.** We analyze the role of the self-correction mechanism through the following experiments. \\n\\n### Experiment 1: Fair comparison of self-correction. \\n\\n**In Table 3 of the paper, it is** _**unfair**_ **to directly compare the full LLMOPT with LLMOPT w/o self-correction.** Taking the IndustryOR dataset in the table as an example, the average solving times (AST in the table) of the full LLMOPT is 8.35, while the AST of w/o self-debug is 1.00. This means that w/o self-debug performs only 1 inference, so a direct comparison with the full LLMOPT, which performs 8.35 inferences on average, is unfair.\\n\\n**A fair self-correction ablation experiment is as follows.** To ensure a fair comparison on Average Solving Times (AST), we use the LLM to solve the problem 12 times and manually select the best solution among these repeated inferences (which means that only one optimal solution needs to be found in 12 repeated experiments). The reason we chose 12 is that self-correction is limited to a maximum of 12 repeated checks, so this is fair. The experimental results on LLMOPT and GPT-4o are as follows:\\n\\n| **Inference Model** | **Correction Mechanism** | **NL4Opt** | **IndustryOR** | **Mamo_E** | **Mamo_C** |\\n| :---: | :---: | :---: | :---: | :---: | :---: |\\n| LLMOPT (Qwen-1.5) | Self-correction | **93.0%** | **46.0%** | **97.0%** | **68.0%** |\\n| LLMOPT (Qwen-1.5) | Best of 12 repeats | 89.0% | 42.0% | 94.0% | 65.0% |\\n| GPT-4o | Self-correction | 84.0% | 34.0% | 90.0% | 38.0% |\\n| GPT-4o | Best of 12 repeats | 84.0% | 32.0% | 89.0% | 35.0% |\\n\\n\\nFrom the results in the table, it can be observed that the self-correction mechanism has a clear advantage over the best of 12 repeats, indicating that the self-correction mechanism is effective for both LLMOPT and GPT-4o.
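Schematically, the two settings differ only in whether solver diagnostics flow between attempts. A toy sketch (all functions below are illustrative stand-ins of ours, not LLMOPT components; the "LLM" here fixes a strict-inequality bug only after seeing the solver's complaint):

```python
MAX_ROUNDS = 12  # both settings are capped at 12 attempts, as in the experiment

def toy_generate(problem, feedback=None):
    # Stand-in for an LLM call: emits the solver-friendly non-strict
    # constraint only once feedback about the failure is available.
    return "x >= y + 1e-6" if feedback else "x > y"

def toy_solver(code):
    # Stand-in for running the solver: rejects strict inequalities
    # (as Pyomo does) and returns (success, diagnostic).
    ok = ">=" in code or "<=" in code
    return ok, None if ok else "strict inequality unsupported"

def best_of_n(problem, n=MAX_ROUNDS):
    # Independent resampling: no feedback flows between attempts.
    return any(toy_solver(toy_generate(problem))[0] for _ in range(n))

def self_correct(problem, rounds=MAX_ROUNDS):
    # Sequential repair: each retry is conditioned on the last diagnostic.
    feedback = None
    for _ in range(rounds):
        code = toy_generate(problem, feedback)
        ok, feedback = toy_solver(code)
        if ok:
            return True
    return False

print(best_of_n("toy"), self_correct("toy"))  # False True
```

On this toy failure mode, resampling never recovers while feedback-driven correction does, which is the qualitative gap the table quantifies.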
However, **regardless of whether GPT-4o employs self-correction, its performance is consistently worse than that of LLMOPT, suggesting that self-correction is not the decisive factor influencing the results. The ability to correctly solve problems and perform corrections depends on the model\\u2019s inherent capabilities**. Regardless of the correction mechanism used, the superiority of LLMOPT over GPT-4o is evident, demonstrating that the learning pipeline of LLMOPT is an effective method for enhancing the correction capabilities of LLMs. In the following Experiment 2, we will design more detailed experiments to further elaborate on these findings.\"}", "{\"title\": \"Reply to Reviewer JwJz (1/2)\", \"comment\": \"Thank you for your reply and constructive comments!\\n\\n## Response: About KTO vs SFT\\nThank you for your suggestion! We are glad that you recognized the results of the previous ablation experiments. \\n\\nIn the DPO paper, the DPO method is applied on top of the SFT model because it requires a reference model to align newly generated content with existing quality benchmarks. Pre-trained models without SFT often lack sufficient domain-specific knowledge, making them unsuitable as reference models. That is why we did not include a KTO-only setting.\\n\\nTo address your concerns, we have added the experiment you suggested, where only KTO is performed. Since the training and testing of the model require time (approximately 2 days), we will share the results with you later. Thank you for your patience! \\n\\n## Response: AST Clarification\\nYour understanding is correct! The runtime of the solver should not be different. However, the AST, as the average number of correction processes performed, reflects the performance of the method and therefore differs.
We are glad to clarify the definition of AST for you.\\n\\n## Response: Evaluating formulation correctness\\nSince solver outputs often include print logs unrelated to the exact answer, a straightforward value-to-value comparison is unsuitable and can result in statistical inaccuracies.\\n\\nTypically, the solver output looks like this:\\n\\n```\\nModel unknown\\n\\nVariables:\\n  DecisionVariableX : Size=2, Index=Elements\\n    Key : Lower : Body : Upper\\n      1 : None : 6000.0 : 6000.0\\n      2 : None : 1400.0 : 4000.0\\n\\nObjectives:\\n  TotalProfit : Size=1, Index=None, Active=True\\n    None : None : 40.0 : 40.0\\n\\nConstraints:\\n  WeightedSumConstraint : Size=1\\n  UpperBoundConstraints : Size=2\\n\\nThe optimal solution is:\\nAllocate 6,000 to production 1.\\nAllocate 1,400 to production 2.\\nThe corresponding objective function value (total profit) is 192,000.\\n```\\n\\nTo evaluate accuracy, the optimal value found by the solver should first be extracted from the above string, i.e., \\\"192,000\\\". Then, the accuracy is determined by comparing this value with the ground truth. \\n\\nFirstly, extracting the objective value from solver logs using string matching is challenging. Secondly, matching values like \\u201c192000.0\\u201d with \\u201c192,000,\\u201d \\u201c2.666666667\\u201d with \\u201c2.67,\\u201d and other unexpected variations is equally difficult. A common approach is to use LLMs (such as GPT-4) for extraction and comparison through carefully designed prompts [1].\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": \"I thank the authors for the hard work and gathering additional results for the rebuttal.\\n\\n### KTO vs SFT\\n\\nI have another suggestion. Based on your clarification in `2.
Datasets of SFT and KTO`, my understanding is that the SFT dataset contains only the preferred outputs, whereas the KTO dataset contains the preferred (labelled as 'desirable') and less preferred responses.\\n\\nIn a way, SFT is finetuning only preferred responses. In the DPO paper, the authors ablated the two settings (i.e. what they termed `Preferred-FT` in S6 is equivalent to SFT stage here, if I understood correctly), and found DPO (based on preference dataset) is superior to `Preferred-FT` (SFT stage). I appreciate the ablation results, but they don't completely show that both stages are necessary. It would be useful to compare against an ablation where **only KTO is performed**.\\n\\nDo let me know if this is impractical given the rebuttal window.\\n\\n### AST Clarification\\n\\nThanks for clarifying this, I might have misunderstood AST as the runtime of the solver itself, which should not be that different for different formulations/code of the same LP problem.\\n\\n### Evaluating formulation correctness\\n\\nI am slightly puzzled about why an LLM is used to evaluate correctness. Assuming that you have access to the 'ground-truth' objective value, is it not possible to compare directly against the optimal objective returned by the solver/program? \\n\\n\\n### Choice of solver\\n\\nThanks for clarifying the use of solvers for LLMOPT and the considered baselines. Could the authors clarify whether the choice of solvers (and the framework-specific API) affects performance results? It seems that the authors have compared against OPTIMUS and ORLM using their original solvers and not the Pyomo wrapper.\\n\\n\\n### About dataset labelling\\n\\nI appreciate the detailed description of the data labelling process. 
Can the authors confirm (1) whether they plan on releasing the labelled dataset, (2) provide additional details on how the labelers were compensated and how working conditions were kept fair.\\n\\nAs I mentioned before, the dataset contains a lot of problems, even if each problem can be labelled on the order of a few minutes (which is infeasible for more challenging problems), this is a huge undertaking.\"}", "{\"title\": \"Reply to Reviewer feJ7 (4/8)\", \"comment\": \"### Experiment 2: The success of correction comes from LLMOPT.\\n\\nThank you for your interest in self-correction! In fact, **multi-instruction SFT and KTO alignment are the basis of self-correction. They not only improve the modeling and solving capabilities of LLMOPT, but also bring correction capabilities. If SFT and KTO are absent, high-quality correction cannot be achieved.**\\n\\nTo further analyze this issue, we conducted additional experiments. We compared the following methods:\\n\\n1. LLMOPT. Full LLMOPT with self-correction during testing. Same as in the paper.\\n2. LLMOPT w/o KTO. LLMOPT with multi-instruction SFT but no model alignment during learning, with self-correction during testing.\\n3. LLMOPT corrected by GPT-4o. Inference is performed using the full learned model, and correction is performed using GPT-4o and the same prompt.\\n4. LLMOPT w/o self-correction (best of 12 repeats). Inference is repeated 12 times using the full learned model, and the best solution among the 12 solutions is manually selected as the final solution (which means that only one optimal solution is found in 12 repeated experiments). The reason we choose 12 is that self-correction is limited to a maximum of 12 repeated checks, so this is fair.\\n5. ORLM (best of 12 repeats). In addition, we reproduce ORLM [3] and add a correction mechanism. 
Since the ORLM model has lost its generalization ability, we use ORLM to repeat the reasoning 12 times and manually select the best solution among the 12 solutions to simulate the correction process.\\n\\nThe experimental results of Solving Accuracy (SA) are as follows:\\n\\n| | **NL4Opt** | **IndustryOR** | **Mamo_E** | **Mamo_C** |\\n| --- | :---: | :---: | :---: | :---: |\\n| **LLMOPT** | **93.0%** | **46.0%** | **97.0%** | **68.0%** |\\n| LLMOPT w/o KTO | 90.0% | 43.0% | **97.0%** | 65.0% |\\n| LLMOPT corrected by GPT-4o | 89.0% | 41.0% | 95.0% | 66.0% |\\n| LLMOPT w/o self-correction (best of 12 repeats) | 89.0% | 42.0% | 94.0% | 65.0% |\\n| ORLM (best of 12 repeats) | 88.0% | 39.0% | 87.0% | 46.0% |\\n\\n\\nFrom the results, it can be seen that, despite using the same prompt, the performance of LLMOPT w/o KTO is worse than that of the original LLMOPT on three of the four benchmarks (with a tie on Mamo_E). This demonstrates that KTO not only improves the ability to formulate optimization problems but also enhances the self-correction capabilities of the LLM. Furthermore, the correction method utilizing GPT-4o performs worse than LLMOPT, indicating that the learning processes through multi-instruction SFT and KTO also improve the LLM\\u2019s correction ability. **Overall, the LLMOPT pipeline enhances the model\\u2019s comprehensive ability to handle optimization problems.** The experimental results for the reproduced ORLM (best of 12 repeats) show that LLMs fine-tuned using LLMOPT exhibit significantly stronger correction ability compared to those fine-tuned using ORLM.\"}", "{\"title\": \"Reply to Reviewer izRU (2/5)\", \"comment\": \"### Response to Question 4: Scalability.\\n\\nWe are glad that you are paying attention to the data problem! We are happy to share our understanding of high-quality data and our attempts to make data scalable.\\n\\n**Why did we spend so much manpower and so many resources on manually labeling data**?
As discussed in Section 5 of the paper, high-quality data plays a critical role in the performance of the model. However, there is a lack of open-source data in the field of optimization, and the available data is often disorganized and unreliable. For example, the following problem is an example from the NL4Opt dataset [1], which has been open-sourced by ORLM. This is a simple _integer programming problem_ (with only two variables and two linear constraints). The problem is as follows.\\n\\n> A chair produced by Elm Furniture yields a profit of 43, while every dresser yields a 52 profit. Each week, 17 gallons of stain and 11 lengths of oak wood are available. Each chair requires 1.4 gallons of stain and 2 lengths of oak wood, while each dresser requires 1.1 gallons of stain and 3 lengths of oak wood. Determine the maximum profit.\\n\\nThe answer provided by ORLM is 236.5. A clear inconsistency arises here: given that the profits for both chairs and dressers are integers, it is illogical for the maximum profit to be a decimal. Upon verification, this incorrect result stems from the solution of producing 5.5 chairs and no dressers. The correct ground truth should be 224, achieved by producing 4 chairs and 1 dresser. Errors in labeling are not uncommon in open-source datasets, and various types of mistakes are universal across different datasets, such as the lack of ground truth or the infeasibility of the problem. This further aggravates the challenges posed by the already limited availability of open-source data. **To address this issue, we dedicated approximately one month to creating a high-quality training dataset with the help of 12 experts**. This effort aims to ensure higher-quality model training and more reliable performance evaluation.\\n\\nAt the same time, we have further studied the issue of data scalability from two perspectives.
**First**, leveraging high-quality data labeled by human experts, we applied various automated data augmentation techniques to enrich our dataset. **Second**, we are now exploring reinforcement learning methods, such as Monte Carlo Tree Search (MCTS), to generate reasoning paths by utilizing GPT-4 to produce high-quality annotation data. For example, Pyomo does not support strict inequality constraints directly, so constraints like x > y often need to be transformed into non-strict inequalities like x >= y + 1e-6. By providing such examples, we aim to guide GPT-4o in generating _processes_, which will guide more accurate modeling and code. Once the model learns to transform strict inequalities, it should also be able to autonomously understand transformations for absolute value constraints. This ability will be a key factor influencing the scalability of automatic data annotation.\"}", "{\"summary\": \"This paper proposes LLMOPT, which finetunes an LLM to formulate optimization problems (from problem description -> mathematical model/solver model). The finetuning is performed in two stages: SFT stage (where the LLM is finetuned using MLE on ground truth outputs), and KTO-based alignment (where the LLM is finetuned based on desirability labels on LLM generated outputs).\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper addresses an interesting and timely problem. Optimization modeling is a huge topic, and introducing techniques to automate (partially) this task will have significant impact.\\n\\nTo my knowledge, this is also one of the first works that builds/finetunes LLMs specifically for writing optimization models.\\n\\nThe empirical analysis is comprehensive (spanning many benchmarks) and the resulting performance gains are impressive.\", \"weaknesses\": [\"1. **Novelty**: The authors mentioned ORLM, which similarly trains an LLM to do optimization modeling, but did not provide a direct comparison. 
I also read ORLM (not in the most detail), but it appears to do some data augmentation to train an LLM for model formulation. It seems the main difference is the alignment step (using KTO) and the self-reflection step, can the authors explain the novelty of their method compared to ORLM?\", \"2. **KTO alignment**: There are a few comments on this:\", \"**Writing/clarity**: The writing in S3.3.2 is quite hard to follow, I had to read the original paper to understand what this part is doing. Importantly, Equation (3) is not correct, in the original DPO paper, the optimal reward function has an additional log partition function term. I did not check if this affected the rest of the formulation.\", \"**KTO dataset**: Based on my understanding: (1) the SFT step does not use the KTO dataset (which contains GPT4 generated responses, and desirability labels), and (2) the alignment step does not use the original dataset (which contains ground truth formulations). Is my understanding correct? If so, what is the motivation for not using the KTO dataset during SFT and original dataset in KTO alignment (where the ground truth are all labeled as 'desirable')?\", \"**Purpose of KTO**: The authors state that `KTO loss function encourages the optimal model $\\\\pi^*$ to produce completions that align more closely with expert-labeled data`. **Minor**: this can't be the optimal model, but the learned policy $\\\\pi$? **Major**: This is exactly the same purpose as SFT, so we are back to the above point of why the KTO dataset is not used directly through SFT, it would likely be more stable and have lower variance compared to KTO updates.\", \"**Ablation study**: Can the authors more comprehensively ablate the importance of the KTO step? I carefully examined the results in Table 3 (ablation study)---which seems to indicate that `w/o KTO` on many benchmarks does not significantly improve SA. 
I would be interested to also see how LLMOPT performs when the KTO dataset is used directly during SFT.\", \"3. **Questions about results**:\", \"**AST**: Can the authors help me understand the big difference in average solution time (AST) plotted in Fig 4? I had a look at some examples in `NLP4LP`, these are very straightforward LP problems. As such, I am very surprised that LLMOPT and GPT4 have almost 2x difference in solution time. Given the simplicity of the problems in this dataset, (1) this difference is unlikely to be explained by clever reformulations that improve solution time, (2) and unlikely to be noise.\", \"**Solution accuracy**: Can the authors elaborate on exactly how solution accuracy is calculated? Is this based on some test cases? My intuition is that it is extremely difficult to check the formulation directly, so how is SA computed and is this a robust evaluation method?\", \"**Performance by problem type**: In L422-L431, the authors claim their method is more general, which I understood as achieving better performance on more types of problems (e.g. LP, MILPs, QPs). To confirm this claim, I would like to see the performance by problem type (similar to the Figure 8 in App G).\"], \"questions\": [\"I also have some minor concerns:\", \"**Dataset construction**: In L219, the authors mentioned they had 1763 'seed' problems. Did they label formulations for all 1763 problems? If so, this is a pretty big contribution if the authors choose to open-source their dataset.\", \"**Augmented data**: My understanding is that additional examples were introduced by mutating the original seed problems. Were these also formulated and labeled by human experts? How many examples in total (original + mutated) were used in training?\", \"Can the authors describe the procedure of collecting the labelled formulations (if human experts were indeed used). 
This seems like a huge undertaking especially if they are required to write out the mathematical models and code.\", \"**Eq(4) and (5)**: The notation $\\\\nu \\\\sim \\\\cdot$ was slightly confusing, since it is not a random variable, which is where this notation is mainly used.\", \"**Eq (5)**: Are $\\\\lambda_D$ and $\\\\lambda_U$ learnt parameters or hyperparameters?\", \"**Solver**: What solver did LLMOPT use? Is it `Pyomo` as shown in the method figure? Are the baselines evaluated with the same solvers?\", \"ps. this is just a suggestion, but the use of the word `alignment` is slightly misleading, the authors are not aligning the LLM to principles, but doing a different stage of finetuning with a different dataset.\", \"pps. another minor suggestion, but the authors should make more prominent that KTO is an existing method, I saw the citation, but it is slightly buried, and might convey the impression that KTO is an original contribution of this work.\", \"I am going to start with a conservative rating, but open to revising if the authors address my concerns.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer JwJz (1/9)\", \"comment\": \"We appreciate your valuable and thoughtful feedback. We have carefully considered your feedback and provided detailed responses below. If there are any other questions, please feel free to ask, and we will respond promptly.\\n\\n## Response to concerns in Weakness 1: Compared with ORLM.\\n\\nThe contributions of LLMOPT and ORLM are fundamentally different.\\n\\n1. **ORLM focuses on data augmentation methods, while LLMOPT focuses on how learning is conducted.** Although ORLM introduced four kinds of data augmentation methods, it does not focus on the learning process and without comprehensively evaluate model performance. In contrast, LLMOPT designs a detailed process for data, learning, and auto-testing. 
It not only declares the learning workflow at the methodological level (e.g., multi-instruction SFT and model alignment) but also conducts a thorough evaluation of model performance. **Therefore, LLMOPT is the first approach to explore both what to learn and how to learn.**\\n2. **ORLM focuses solely on generating solution code, whereas LLMOPT addresses both the formulating and solving of optimization problems.** Specifically, ORLM performs a straightforward task: inputting an optimization problem and directly inferring the corresponding solver Python code. In contrast, LLMOPT introduces **Learning to Define**, a general formulation for optimization problems, enabling the generation of more accurate code. By using the five-element formulation as an intermediate step, LLMOPT can clearly define the problem and identify potentially overlooked hidden conditions, resulting in higher-quality code generation.\\n3. **LLMOPT conducted comprehensive seesaw tests (see Section 5 and Appendix E in our paper), while ORLM has largely lost its ability to solve other basic problems.** We have reproduced and evaluated ORLM\\u2019s performance using the open-source model provided in [1]. The results show that ORLM (based on LLaMA-3-8B) can only generate Coptpy solver code, and the ORLM model cannot answer any other questions (e.g., _If all cats can climb trees, and Mike\\u2019s pet is a cat, then can Mike\\u2019s pet climb trees?_). This indicates that ORLM has significantly lost its capability on basic problems other than optimization.\\n4. **The additional experiments show the superior generalization performance of LLMOPT compared to ORLM.** We found a new dataset from the _ICML 2024 Challenges on Automated Math Reasoning (Task 3)_ [2], which was not used in the training of either LLMOPT or ORLM. Since the test data for this dataset does not have open-source ground truth, we randomly sampled 200 problems from its training dataset to serve as the test data. 
The solving accuracy results are as follows.\\n\\n| | **GPT-4o** | **ORLM** | **LLMOPT** |\\n| :---: | :---: | :---: | :---: |\\n| The Competition Dataset [2] | 78.5% | 84.0% | **89.5%** |\\n\\nThe results show that (a) **Compared to ORLM, LLMOPT shows better generalization performance even on a completely new dataset.** (b) Both LLMOPT and ORLM outperform GPT-4o, highlighting the potential of learning-based approaches in solving optimization problems.\"}", "{\"title\": \"General Response to Reviewers and Revision Submitted\", \"comment\": \"We would like to express our heartfelt gratitude to all the reviewers for their valuable feedback and constructive suggestions. We are encouraged by their recognition of our proposed methods as novel (`Reviewer JwJz`, `Reviewer iJY5`), meaningful (`Reviewer JwJz`), and incorporating a well-designed five-element formulation (`Reviewer izRU`). Additionally, we appreciate the reviewers' acknowledgment of our comprehensively designed experiments and ablation studies (`Reviewer izRU`, `Reviewer JwJz`, `Reviewer iJY5`, `Reviewer feJ7`) and for noting that our paper is well-written (`Reviewer feJ7`) and includes detailed discussion (`Reviewer iJY5`).\\n\\n**We have also made revisions to the manuscript. Below, we provide a summary of the key revisions of the paper (highlighted in blue text in the PDF)**.\\n\\n1. We provide a detailed description of the process of data labeling and augmentation in Appendix K. We explain the role of experts in the process and the reliability of data labeling. (`Reviewer izRU`, `Reviewer JwJz`)\\n2. We add more detailed ablation experiments on the self-correction mechanism and analyze the experimental results, which are presented in Appendix M. The results show the overall performance improvement brought by LLMOPT, including enhancements in the formulation, solving, and correction capabilities of LLMs. (`Reviewer izRU`, `Reviewer iJY5`, `Reviewer feJ7`)\\n3. 
We provide a more detailed analysis of the results in Appendix L, including a statistical breakdown of solving accuracy by problem types, highlighting the improvement in generalization performance achieved by LLMOPT. (`Reviewer izRU`)\\n4. We have carefully reviewed Section 3.3.2 of the paper, supplemented the explanation of the model notations in equation 3 (`Reviewer iJY5`), added the purpose of KTO, and revised the description of the conditions in equation 4 and equation 5 (`Reviewer JwJz`). \\n5. We carefully reviewed our paper and fixed the typos. (`Reviewer iJY5`)\\n\\n**Detailed responses to each reviewer\\u2019s comments are provided separately. If you have any questions or concerns, please do not hesitate to reach out. We are committed to providing detailed and professional responses to address any concerns you may have as promptly as possible**.\"}
As a result, **the SFT dataset contains 3x3,276=9,828 training samples**.\\n\\nNext, we introduce the construction of the KTO dataset. **In the KTO dataset, samples with _desirability_ labeled as True are identical to the samples in the SFT dataset, totaling 9,828 samples.** Samples labeled as False are constructed from incorrectly labeled data. Similar to the multi-instruction SFT approach, these samples are divided into three categories: data incorrectly labeled for the five-element representation are used to construct _(question, five-element, False)_ samples, while data incorrectly labeled for the solver code are used to construct _(question, code, False)_ and _(five-element, code, False)_ samples. As a result, the samples labeled as False in the KTO dataset total 3,145 + 3,295 * 2 = 9,735. **Thus, the entire KTO dataset contains 9,828 (True) + 9,735 (False) = 19,563 samples.**\\n\\nFinally, general open-source data are added to both the SFT and KTO datasets to maintain the generalization capability of the LLM. Specifically, 20,000 samples are randomly selected from [3] and added to the SFT dataset, while 30,000 samples are randomly selected from [4] and added to the KTO dataset. These have been described in Appendix A.2 of the submitted paper. As a result, the final SFT dataset contains 29,828 samples, and the KTO dataset contains 49,563 samples.\\n\\n**(4) Reliability of expert labeling.**\\n\\nHere is a brief and anonymized introduction to the **qualifications of the experts** involved. The data labeling process is completed by a team of 12 experts working collaboratively. The preliminary review and expert labeling phases are carried out by 9 experts holding an undergraduate or higher degree (majoring in Computer Science or Mathematics, all of whom have taken optimization courses), including 4 graduate students in related fields (1 of whom is a Ph.D. candidate). 
The expert review phase is conducted by 1 university professor interested in machine learning and optimization and 2 algorithm engineers researching operations optimization. The data aggregation phase is completed by all experts except the undergraduates. Throughout the data labeling process, **each expert worked independently, and no single problem was assigned to the same expert more than once in the first three phases**. These details will be provided in the paper.\\n\\nThank you again for your attention to the data processes! We will add the above content to the corresponding data description section and appendix in the paper.\"}
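As a quick sanity check on the dataset sizes described in the response above, the counts can be reproduced with a few lines of arithmetic (all numbers are taken directly from the response; this is an editor's verification sketch, not part of the original rebuttal):

```python
# Reproducing the dataset counts stated in the response.
correct = 3276                  # correctly labeled optimization problems
sft_pairs = 3 * correct         # (question, five-element), (question, code), (five-element, code)
assert sft_pairs == 9828

kto_true = sft_pairs            # desirable (True) samples reuse the SFT pairs
kto_false = 3145 + 2 * 3295     # bad five-element labels + bad code labels (used in two pair types)
assert kto_false == 9735

kto_total = kto_true + kto_false
assert kto_total == 19563

final_sft = sft_pairs + 20000   # plus general open-source SFT data [3]
final_kto = kto_total + 30000   # plus general open-source KTO data [4]
assert (final_sft, final_kto) == (29828, 49563)
```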
[https://www.gnu.org/software/glpk/](https://www.gnu.org/software/glpk/)\\n\\n[7] IPOPT (Interior Point OPTimizer) solver. [https://coin-or.github.io/Ipopt/](https://coin-or.github.io/Ipopt/)\\n\\n[8] SCIP (Solving Constraint Integer Programs) solver. [https://www.scipopt.org/index.php](https://www.scipopt.org/index.php)\\n\\n[9] Ali AhmadiTeshnizi, et al. OptiMUS: Scalable optimization modeling with (MI)LP solvers and large language models. ICML 2024.\\n\\n[10] Ziyang Xiao, et al. Chain-of-Experts: When LLMs meet complex operations research problems. ICLR 2024.\\n\\n[11] Zhengyang Tang, et al. ORLM: Training large language models for optimization modeling. CoRR, abs/2405.17743, 2024.\\n\\n[12] Kawin Ethayarajh, et al. Model alignment as prospect theoretic optimization. ICML 2024.\\n\\n[13] Definition of loss function in the KTO code. [https://github.com/huggingface/trl/blob/main/trl/trainer/kto_trainer.py#L1109-L1179](https://github.com/huggingface/trl/blob/main/trl/trainer/kto_trainer.py#L1109-L1179)\\n\\n[14] Settings of hyperparameters in the official KTO code. [https://github.com/huggingface/trl/blob/main/trl/trainer/kto_config.py#L86-L89](https://github.com/huggingface/trl/blob/main/trl/trainer/kto_config.py#L86-L89)\\n\\n[15] Rafael Rafailov, et al. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. NeurIPS 2023.\"}", "{\"title\": \"Reply to Reviewer JwJz\", \"comment\": \"Thank you for your patience. 
We conduct additional experiments for LLMOPT (only KTO), and the results are as follows.\\n\\n| | **NL4Opt** | **MamoEasy** | **MamoComplex** | **IndustryOR** | **NLP4LP** | **ComplexOR** |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| **GPT-4o** | 83.0% | 90.0% | 38.0% | 34.0% | 35.2% | 36.4% |\\n| **LLMOPT (only SFT)** | 90.0% | 97.0% | 65.0% | 43.0% | 64.9% | 54.6% |\\n| **LLMOPT (only KTO)** | 90.0% | 95.0% | 57.0% | 42.0% | 56.8% | 45.5% |\\n| **LLMOPT (SFT+KTO)** | 93.0% | 97.0% | 68.0% | 46.0% | 83.8% | 72.7% |\\n\\n\\nThe results indicate that **neither only SFT nor only KTO is sufficient to achieve optimal performance**. The only SFT configuration lacks model alignment, leading to hallucination issues. In the only KTO configuration, the pre-trained model is used as the reference model. However, without fine-tuning the reference model using domain-specific knowledge, the maximum performance improvement cannot be achieved (as the KL divergence between the two models is incorporated in the KTO loss).\"}", "{\"title\": \"Reply to Reviewer JwJz (7/9)\", \"comment\": \"## Response to concerns in Weakness 3: Explanation of the results.\\n\\n1. **Response to AST experimental results.**\\n\\nWe are glad that you pay attention to the specific results on the NLP4LP dataset. In fact, NLP4LP is a medium-difficulty dataset for LLM. Although the problems in this dataset are all linear programming, they often involve a rich variety of constraints and problem contexts. 
From the results in Table 2 of the submitted paper, we can see that the three methods of GPT-4 Directly, Reflexion, and Chain-of-Experts all achieved low solving accuracy (35.8%, 46.3%, and 53.1% respectively).\\n\\nAs mentioned in Section 4.1 in the paper, **the average solving time (AST) refers to the average number of self-correction processes performed during the test, and the maximum number of self-correction re-solves is limited to 12.** To better explain the phenomenon you raised, we provide the specific numerical values of the results on the NLP4LP dataset in Figure 4(a), as shown in the following table.\\n\\n| | **GPT-4-Turbo** | **GPT-4o** | **LLMOPT (Qwen-1.5-14B)** |\\n| :---: | :---: | :---: | :---: |\\n| **SA** | 37.8% | 35.2% | 83.8% |\\n| **AST** | 10.08 | 10.40 | 7.00 |\\n\\nThe self-correction mechanism is employed to verify whether the five-element or solver code contains errors and to iteratively refine it in an attempt to produce a correct solution. To avoid inefficient loops of corrections, this mechanism imposes a maximum number of attempts, set at 12 as mentioned in the paper. However, for complex code, even advanced LLMs sometimes struggle to complete the correction process successfully, manifesting in repeated attempts that fail to yield a correct solution. In such cases, the weaker-performing LLMs reveal significant limitations. First, their solving accuracy (SA) is relatively low, resulting in a higher rate of incorrect solutions. Second, the frequency of unsuccessful correction attempts significantly increases the solving time. Although the optimal solution could not be found, these ineffective attempts lead to a sharp rise in the average solving time (AST). From the table, it can be observed that GPT-4-Turbo and GPT-4o exhibit much lower SA compared to LLMOPT, indicating the lack of ability to tackle NLP4LP problems. **Even with the self-correction mechanism, these models fail to effectively enhance solving accuracy. 
However, these LLMs will still try to correct repeatedly, resulting in an increase in AST.** Therefore, for LLMs with poor performance, not only is the solving accuracy very low, but frequent invalid corrections will also significantly drag down the average solution time, ultimately resulting in a double disadvantage.\\n\\n2. **The evaluation of solving accuracy (SA).**\\n\\nAs described in Section 4.1 of the paper, solving accuracy (SA) indicates the percentage of optimization problems that the LLM solves correctly, i.e., for which it finds the optimal solution. SA or similar evaluation methods have not been specifically mentioned in previous work [9-11]. In the experiments of this paper, a solution is judged correct when the following three conditions are met simultaneously:\\n\\n**(a) LLM generates code based on the problem and five-element, and executes it without error**.\\n\\n**(b) Executing the solver code outputs the optimal solution and its corresponding objective function value in the terminal**.\\n\\n**(c) The output objective function value is consistent with the ground truth provided by the dataset**.\\n\\nConditions (a) and (b) can be automated using Python. When determining condition (c), we use GPT-4o to analyze the consistency between the output of the executed code and the ground truth, and then determine whether the solution is correct (this is obviously within the capabilities of GPT-4o). The prompt used is: _The optimal objective function value of an optimization problem is {ground_truth}. Please determine whether the following solution printed by the solver is correct. 
The output of the solver is: {solver_output}._\\n\\nWe have checked the LLMOPT solutions on the NL4Opt dataset, and the experts' judgment of whether each problem is _solved correctly_ is consistent with the judgment of the above process.\"}
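A minimal sketch of how conditions (a)-(c) could be automated. Note this is an illustrative reconstruction, not the actual LLMOPT implementation: condition (c) is checked here by a direct numeric comparison against the ground truth, whereas the response above states that the paper delegates this step to GPT-4o with the quoted prompt.

```python
import re
import subprocess
import sys
import tempfile

def run_solver_code(code: str, timeout: int = 60):
    """Condition (a): execute the generated solver code; return (ran_ok, stdout)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True,
                          text=True, timeout=timeout)
    return proc.returncode == 0, proc.stdout

def extract_objective(stdout: str):
    """Condition (b): the script must print an objective value to the terminal;
    here we naively take the last number that appears in the output."""
    numbers = re.findall(r"[-+]?\d+(?:\.\d+)?(?:[eE][-+]?\d+)?", stdout)
    return float(numbers[-1]) if numbers else None

def is_solved(stdout: str, ground_truth: float, tol: float = 1e-4) -> bool:
    """Condition (c): printed objective matches the dataset's ground truth."""
    value = extract_objective(stdout)
    return value is not None and abs(value - ground_truth) <= tol
```

In practice the numeric comparison in `is_solved` would be replaced by the GPT-4o consistency check described above, which is more robust to varied output formats.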
**Specifically, it extracts key information from the natural language description of the optimization problem, identifies the problem type, models the problem, and subsequently automatically invokes the specified solver or solving method**. LLMOPT eliminates the need for expert modeling, achieving full automation from the original problem description to the solution. Therefore, LLMOPT is fundamentally different from solving methods like [1, 2].\\n2. **LLMs understand the description of optimization problems and invoke solvers rather than directly solving optimization problems.** The ability of LLMs to solve large-scale optimization problems remains an open question, as it involves multiple fields such as mathematical reasoning, computation, and advanced algorithms. **Therefore, we focus on correctly modeling large-scale optimization problems and generating code for solving them, rather than directly using LLMs to solve the problems.** We believe the strength of LLMs lies in understanding natural language and generating code, which means LLMs excel at transforming natural language descriptions of original problems into mathematical models and code, rather than directly solving them. Thus, we designed LLMOPT, leveraging LLMs to model problems into a general _five-element_ formulation, select appropriate solvers, and generate solving code. The actual solving process is delegated to specialized, powerful solvers to ensure the accuracy and efficiency of the solutions.\\n\\nWe explained the motivation of LLMOPT from two aspects, aiming to clarify the rationale and approach of LLMOPT. 
Regarding your question about whether LLMs have the capability to solve large-scale optimization problems, **under the LLMOPT framework, this depends on the modeling capability of the LLM and the solving capability of the specific solver used.** In the experiments of this paper, we utilized three open-source solvers\\u2014GLPK [3], IPOPT [4], and SCIP [5]\\u2014via Pyomo code, where SCIP is capable of handling problems with hundreds of thousands of variables and constraints. Theoretically, as long as an LLM can correctly model the problem based on its description and generate Pyomo solving code, these large-scale problems can be correctly solved. Therefore, the goal of LLMOPT is to enable the LLM to learn how to properly define problems and generate solving code to automatically invoke these powerful solvers and solution methods.\"}", "{\"title\": \"Reply to Reviewer JwJz (4/9)\", \"comment\": \"4. **Explanation of hyperparameters (such as lambda_D and lambda_U in equation 5).**\\n\\nThank you for your interest in the details of LLMOPT! **In the submitted paper, all the hyper-paremeters have been introduced in Appendix B.** In equation 5, lambda_D and lambda_U are hyper-parameters used to represent the desirable and undesirable weights. In LLMOPT, we set lambda_D=1.0 and lambda_U=1.0 according to the default setting of the KTO paper [12] and the official code [14].\\n\\n5. **Explanation of notation in equation 4 and equation 5.**\\n\\nThank you for raising such valuable questions! In equation 4 and equation 5, **the notation $v \\\\sim \\\\cdot$ is consistent with the expression in equation 8 of the KTO original paper [12].** We have carefully reviewed all formulas and this notation, and we agree with your view that \\u201cthe notation $\\\\sim$ typically denotes being drawn from a probability distribution.\\u201d This representation may cause confusion. 
Specifically, we will change $v \\\\sim v_{\\\\text{desirable}} | u$ to $d = \\\\text{True} \\\\mid u, v$, and $v \\\\sim v_{\\\\text{undesirable}} | u$ to $d = \\\\text{False} \\\\mid u, v$.\\n\\n6. **Ablation analysis for KTO.**\\n\\nThank you for your interest in the effect of KTO! We designed an experiment to perform ablation analysis on KTO. All the following experiments are performed in a fair setting with the five-element formulations and the self-correction mechanism. The baseline is GPT-4o. On the one hand, we conducted an SFT-only experiment on Qwen1.5-14B to analyze the performance improvement brought by SFT. On the other hand, we performed KTO alignment on the above SFT-based model to analyze the performance improvement brought by KTO over SFT. The experimental results are shown in the following table.\\n\\n| | **NL4Opt** | **MamoEasy** | **MamoComplex** | **IndustryOR** | **NLP4LP** | **ComplexOR** | **Avg. of Improvement** |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| **GPT-4o** | 83.0% | 90.0% | 38.0% | 34.0% | 35.2% | 36.4% | / |\\n| **LLMOPT (only SFT)** | 90.0% | 97.0% | 65.0% | 43.0% | 64.9% | 54.6% | / |\\n| **SFT Improves over Baseline** | **+7.0%** | **+7.0%** | **+27.0%** | **+9.0%** | **+29.7%** | **+18.2%** | **+16.3%** |\\n| **LLMOPT (SFT+KTO)** | 93.0% | 97.0% | 68.0% | 46.0% | 83.8% | 72.7% | / |\\n| **KTO Improves over SFT** | **+3.0%** | **+0.0%** | **+3.0%** | **+3.0%** | **+18.9%** | **+18.1%** | **+7.7%** |\\n\\nThe results show that (1) SFT achieves an average improvement of 16.3% over the baseline, while KTO achieves an average improvement of 7.7% over SFT. The ratio of improvement brought by KTO to that brought by SFT is approximately 1:2. The goal of model alignment is to reduce hallucinations in LLMs, thereby improving their overall performance when dealing with various optimization problems. 
As a result, KTO is less likely to achieve the rapid performance boost for specific tasks that SFT can deliver. We believe the 1:2 performance improvement ratio is reasonable. (2) For datasets not included in the training set, such as NLP4LP and ComplexOR, the performance improvement brought by KTO is even more significant. Problems in these datasets are more novel for LLMs and are more likely to trigger hallucinations or other types of errors. **The impressive improvements observed with KTO highlight the effectiveness and necessity of KTO alignment**.\"}
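For readers following this exchange, the KTO objective under discussion (equations 4 and 5 in the paper) follows Ethayarajh et al. [12]. A restatement of its general form, in the notation of prompt $u$, completion $v$, and desirability label $d$ used above; this is an editor's paraphrase of the published KTO loss, and may differ cosmetically from the paper's exact equations:

```latex
% KTO objective, restated from Ethayarajh et al. [12].
\begin{align*}
r_\theta(u, v) &= \log \frac{\pi_\theta(v \mid u)}{\pi_{\mathrm{ref}}(v \mid u)}, \qquad
z_0 = \mathrm{KL}\big(\pi_\theta(v' \mid u) \,\|\, \pi_{\mathrm{ref}}(v' \mid u)\big), \\
h(u, v) &=
\begin{cases}
\lambda_D \, \sigma\big(\beta \, (r_\theta(u, v) - z_0)\big), & d = \text{True}, \\
\lambda_U \, \sigma\big(\beta \, (z_0 - r_\theta(u, v))\big), & d = \text{False},
\end{cases} \\
\mathcal{L}_{\mathrm{KTO}} &= \mathbb{E}_{(u, v, d) \sim \mathcal{D}}\big[\lambda_d - h(u, v)\big].
\end{align*}
```

Here $\sigma$ is the logistic function, $\beta$ weights the implicit KL penalty, $z_0$ is estimated per batch in practice, and $\lambda_d$ denotes $\lambda_D$ or $\lambda_U$ according to the label; with $\lambda_D = \lambda_U = 1.0$ as stated in the response, desirable and undesirable samples are weighted equally.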
They not only improve the modeling and solving capabilities of LLMOPT, but also bring correction capabilities.\\n\\nTo further analyze this issue, we conducted additional experiments. We compared the following methods:\\n\\n1. LLMOPT. Full LLMOPT with self-correction during testing. Same as in the paper.\\n2. LLMOPT w/o KTO. LLMOPT with multi-instruction SFT but no model alignment during learning, with self-correction during testing.\\n3. LLMOPT corrected by GPT-4o. Inference is performed using the full learned model, and correction is performed using GPT-4o and the same prompt.\\n4. LLMOPT w/o self-correction (best of 12 repeats). Inference is repeated 12 times using the full learned model, and the best solution among the 12 solutions is manually selected as the final solution (i.e., a problem counts as solved if at least one of the 12 solutions is optimal). The reason we choose 12 is that self-correction is limited to a maximum of 12 repeated checks, so this is fair.\\n5. ORLM (best of 12 repeats). In addition, we reproduce ORLM [3] and add a correction mechanism. Since the ORLM model has lost its generalization ability, ORLM is used to repeat the reasoning 12 times and manually select the best solution among the 12 solutions to simulate the correction process.\\n\\nThe experimental results of Solving Accuracy (SA) are as follows:\\n\\n| | **NL4Opt** | **IndustryOR** | **Mamo_E** | **Mamo_C** |\\n| :--- | :---: | :---: | :---: | :---: |\\n| **LLMOPT** | **93.0%** | **46.0%** | **97.0%** | **68.0%** |\\n| LLMOPT w/o KTO | 90.0% | 43.0% | **97.0%** | 65.0% |\\n| LLMOPT corrected by GPT-4o | 89.0% | 41.0% | 95.0% | 66.0% |\\n| LLMOPT w/o self-correction (best of 12 repeats) | 89.0% | 42.0% | 94.0% | 65.0% |\\n| ORLM (best of 12 repeats) | 88.0% | 39.0% | 87.0% | 46.0% |\\n\\nFrom the results, it can be seen that, despite using the same prompt, the performance of LLMOPT w/o KTO is worse than that of the original LLMOPT on three of the four tasks (with a tie on Mamo_E). 
This demonstrates that KTO not only improves the ability to formulate optimization problems but also enhances the self-correction capabilities of the LLM. Furthermore, the correction method utilizing GPT-4o performs worse than LLMOPT, indicating that the learning processes through multi-instruction SFT and KTO also improves the LLM\\u2019s correction ability. Overall, the LLMOPT pipeline enhances the model\\u2019s comprehensive ability to handle optimization problems. **The experimental results for the reproduced ORLM (best of 12 repeats) show that LLMs fine-tuned using LLMOPT exhibit significantly stronger correction ability compared to those fine-tuned using ORLM**.\"}", "{\"title\": \"ICLR Public Discussion Phase Ending Soon\", \"comment\": \"Dear Reviewer,\\n\\nThis is a kind reminder that the dicussion phase will be ending soon on November 26th. Please read the author responses and engage in a constructive discussion with the authors.\\n\\nThank you for your time and cooperation.\\n\\nBest,\\n\\nArea Chair\"}", "{\"title\": \"Gentle Reminder of the Rebuttal Deadline\", \"comment\": \"Dear Reviewer izRU,\\n\\nAs the deadline approaches, we sincerely hope to address your concerns and discuss the rebuttal with you further. If you have any questions, please feel free to ask directly! Moreover, if you find our response satisfactory, could you please kindly consider the possibility of updating the rating. Thank you very much for your valuable suggestion.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"metareview\": \"This paper introduces LLMOPT, which fine-tunes LLMs to enhance their ability to model optimization problems from natural language descriptions. Specifically, the fine-tuning process consists of two stages: (1) the supervised fine-tuning (SFT) stage, where the LLM is trained using MLE on ground truth outputs, and (2) the KTO-based alignment stage, where the LLM is fine-tuned with desirability labels applied to its generated outputs. 
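As a rough illustration of the self-correction mechanism discussed in these responses (an attempt budget of 12, re-solving until a candidate executes successfully), a hedged sketch follows; the function and callback names are hypothetical stand-ins, not the actual LLMOPT interface:

```python
def execute(candidate):
    """Run a candidate solver program; here candidates are callables standing in
    for generated code, returning output or raising on failure."""
    try:
        return True, candidate()
    except Exception as exc:
        return False, str(exc)

def solve_with_self_correction(generate, correct, max_attempts=12):
    """Iteratively check and re-solve; the attempt count is what AST averages over."""
    candidate = generate()
    for attempt in range(1, max_attempts + 1):
        ok, output = execute(candidate)
        if ok:
            return output, attempt
        # feed the error back so the model can revise the formulation/code
        candidate = correct(candidate, output)
    return None, max_attempts  # budget exhausted; counts as 12 attempts toward AST
```

This loop structure also explains the AST results above: a weak model that never produces a correct candidate burns through the full attempt budget, inflating its average solving time without improving accuracy.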
LLMOPT employs a novel five-element formulation as a universal framework for defining diverse types of optimization problems, subsequently generating solver code based on this formulation. Experiments on six real-world datasets demonstrate LLMOPT's effectiveness in improving the accuracy and generalizability of LLMs in solving complex optimization problems.\\n\\nMost reviewers agree that this paper makes meaningful contributions. During the rebuttal phase, the authors addressed the majority of the reviewers' concerns, and I suggest that the authors revise the manuscript accordingly in the final version.\\n\\nIn response to the ethical concerns about data labeling, the authors provided detailed explanations on the annotation process and the background of the annotators. Therefore, I believe further ethical review is unnecessary. Nevertheless, I strongly recommend that the authors provide the additional details (including aspects such as the methodology, the reliability, and the annotator compensation) in the final version.\\n\\nOverall, I recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers izRU, JwJz, iJY5, and feJ7 rated this paper as 6: borderline accept (keep the score), 3: reject (raised to 6), 5: borderline reject (keep the score), and 5: borderline reject (keep the score), respectively.\\n\\nThe reviewers raised the following concerns.\\n- Complexity of Data Labeling (raised by Reviewers izRU and iJY5).\\n- Insufficient Experiments (raised by Reviewers izRU and JwJz)\\n- Unclear Experiment Details (raised by Reviewers izRU, JwJz, and feJ7)\\n- Scalability (raised by Reviewers feJ7)\\n- Ethical Concerns Regarding Data Labeling (raised by Reviewer izRU)\\n\\nIn response, the authors addressed the concerns about the complexity of data labeling, insufficient experiments, and unclear experiment details by adding additional experiments and more clarifications in the rebuttal phase. 
Reviewer feJ7 preserves the concerns on scalability. However, scalability remains a common challenge in this domain, and for large-scale problems, we typically rely on structured data acquisition methods rather than extracting variable information directly from natural language descriptions. Despite the scalability issue, this paper still offers meaningful contributions. I strongly encourage the authors to incorporate the discussions to further address the concern in the final version.\\n\\nRegarding the ethical concerns raised by izRU about data labeling, the authors have provided a detailed explanation on the annotation process and the background of the annotators. Therefore, I believe further ethical review is unnecessary. Nevertheless, I strongly recommend that the authors provide the additional details (including aspects such as the methodology, the reliability, and the annotator compensation) in the final version.\\n\\nOverall, I recommend accepting this paper.\"}", "{\"title\": \"Reply to Reviewer feJ7 (7/8)\", \"comment\": \"### Example 2:\\n\\nThe problem is as follows. \\n\\n```\\nA machine can operate continuously for 4 years or can be sold at the end of each year and replaced with a new one. The purchase price of a new machine at the beginning of each year and the resale price of machines of different ages at the end of each year are listed in Table 4.2 (unit: 10,000 yuan). Additionally, the operation and maintenance costs for a new machine in its first year are 0.3 million yuan, while the annual operation and maintenance costs for machines in their second, third, and fourth years of usage are 0.8 million yuan, 1.5 million yuan, and 2.0 million yuan, respectively. Determine the optimal replacement strategy for the machine to minimize the total costs incurred for replacement, purchase, and operation/maintenance over 4 years. 
\\n\\\\begin{table}[h]\\n\\\\centering\\n \\\\begin{tabular}{|c|c|c|} \\\\hline\\n Year & Purchase Price at Beginning of Year (10,000 yuan) & Resale Price of Machine Used for j Years (10,000 yuan) \\\\\\\\ \\\\hline\\n Year 1 & 2.5 & 2.0 \\\\\\\\ \\\\hline\\n Year 2 & 2.6 & 1.6 \\\\\\\\ \\\\hline\\n Year 3 & 2.8 & 1.3 \\\\\\\\ \\\\hline\\n Year 4 & 3.1 & 1.1 \\\\\\\\ \\\\hline\\n \\\\end{tabular}\\n\\\\caption{Table 4.2: Machine Purchase Price and Resale Price }\\n\\\\end{table}\\n```\\n\\n**Code generated by LLMOPT:**\\n\\n```python\\n# Import Pyomo and solver\\nfrom pyomo.environ import *\\nfrom pyomo.opt import SolverFactory\\n\\n# Create a Pyomo model\\nmodel = ConcreteModel()\\n\\n# Define sets of nodes\\nmodel.N = RangeSet(1, 5)\\n\\n# Define arcs and their costs based on the given W matrix\\narcs = [\\n (1, 2), (1, 3), (1, 4), (1, 5),\\n (2, 3), (2, 4), (2, 5),\\n (3, 4), (3, 5),\\n (4, 5)\\n]\\n\\n# Define the costs for each arc (from the W matrix)\\nw = {\\n (1, 2): 0.8, (1, 3): 2.0, (1, 4): 3.8, (1, 5): 6.0,\\n (2, 3): 0.9, (2, 4): 2.1, (2, 5): 3.9,\\n (3, 4): 1.1, (3, 5): 2.3,\\n (4, 5): 1.4\\n}\\n\\n# Define the set of arcs\\nmodel.A = Set(initialize=arcs, dimen=2)\\n\\n# Define variables x[i,j] = 1 if arc (i,j) is selected, 0 otherwise\\nmodel.x = Var(model.A, within=Binary)\\n\\n# Define the objective function: minimize total cost\\ndef obj_rule(model):\\n return sum(w[i, j] * model.x[i, j] for (i, j) in model.A)\\nmodel.obj = Objective(rule=obj_rule, sense=minimize)\\n\\n# Flow conservation constraints\\ndef flow_rule(model, k):\\n # For node 1 (source)\\n if k == 1:\\n return sum(model.x[1, j] for j in model.N if (1, j) in model.A) == 1\\n # For node 5 (sink)\\n elif k == 5:\\n return sum(model.x[i, 5] for i in model.N if (i, 5) in model.A) == 1\\n # For intermediate nodes\\n else:\\n return sum(model.x[i, k] for i in model.N if (i, k) in model.A) == sum(model.x[k, j] for j in model.N if (k, j) in model.A)\\nmodel.flow = Constraint(model.N, 
rule=flow_rule)\\n\\n# Solve the model using a solver\\nsolver = SolverFactory('glpk')\\nsolver.solve(model)\\n\\n# Print the results\\nprint(\\\"Optimal Replacement Strategy:\\\")\\nfor (i, j) in model.A:\\n if value(model.x[i, j]) > 0.5:\\n print(\\\"Replace machine at year {} and use until year {}\\\".format(i, j-1))\\n\\nprint(\\\"Optimal Total Cost: {:.2f} million yuan\\\".format(value(model.obj)))\\n```\"}", "{\"title\": \"Gentle Reminder of the Rebuttal Deadline\", \"comment\": \"Dear Reviewer iJY5,\\n\\nAs the deadline approaches, we sincerely hope to address your concerns and discuss the rebuttal with you further. If you have any questions, please feel free to ask directly! Moreover, if you find our response satisfactory, could you please kindly consider the possibility of updating the rating. Thank you very much for your valuable suggestion.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer feJ7\", \"comment\": \"Thank you for your response! We would like to clarify that **LLMOPT can handle large-scale problems**:\\n\\n1. In our paper, the experiments cover nearly all datasets of optimization problems described in natural language, including linear programming problems with over 100 constraints and 500 variables, as well as traveling salesman problems with around 30 nodes. **This is not the limit of the problem scale that LLMOPT can handle.** \\n2. In our real industrial scenarios, as the previous reply, Example 1 in Reply (6/8) is an example of a large-scale linear programming problem in the context of car insurance leads distribution, which involves **2M+ leads (users) and 22 communities (companies)**, corresponding to the `model.I` and `model.J` variables in the code, read from the data files. \\n3. **As long as the solver can handle the corresponding problem scale, LLMOPT only needs to focus on formulating the problem and generating code**. As we previously clarified, the challenge of LLMOPT lies in problem defining, not solving. 
For problems with simple descriptions but large data volumes, LLMOPT can handle them well, as it focuses on formulating the problem correctly and matching suitable solvers. \\n4. **The challenge for future work lies in how to extract the correct problem formulation from more complex language descriptions (e.g., long texts)** and match a broader variety of solvers when dealing with complex problems, which is the direction we are currently exploring.\\n\\nThanks again for your question! If you have any other questions, please feel free to ask.\"}", "{\"title\": \"Reply to Reviewer JwJz (2/2)\", \"comment\": \"## Response: Choice of solver\\nThanks for your question! Here are the differences between the solvers used by LLMOPT, ORLM, and OptiMUS.\\n\\n1. LLMOPT can **automatically choose** the most suitable solver from **three open-source solvers** (covering nearly all optimization types) based on the problem type. \\n2. **The closed-source coptpy solver is the only choice of ORLM** [2, 3]. The coptpy solver is developed by Sunshu Technology. \\n3. OptiMUS generates code by calling GPT-4, so it can use various solvers. However, OptiMUS has to **specify the solver manually**, and **only the closed-source Gurobi** solver is used in [4].\\n\\n**On solving the optimization problem correctly, the key issue lies in correctly modeling the problem and generating the appropriate code**. Different solvers have minimal impact on the solution for the same type of problem. However, using the wrong solver for a problem can have significant consequences. For example, the GLPK solver does not support nonlinear optimization problems. If applied to such problems, it will lead to incorrect results. Therefore, we believe that LLMOPT\\u2019s ability to select the appropriate solver based on the problem type is a significant advantage. LLMOPT selects the most suitable solver from three open-source options. As mentioned in the previous response, these solvers are designed for different types of problems. 
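To illustrate the matching rationale, here is a minimal, hypothetical dispatch sketch (GLPK for LP/MIP, IPOPT for continuous nonlinear problems, SCIP for mixed-integer nonlinear problems); in LLMOPT itself the choice is made by the fine-tuned LLM, not by hand-written rules like these:

```python
# Hypothetical sketch of routing a problem to one of the three open-source
# solvers by problem type (illustration only, not LLMOPT's actual mechanism):
# GLPK handles linear programs and MIPs, IPOPT handles continuous nonlinear
# problems, and SCIP handles mixed-integer nonlinear problems.
def choose_solver(is_nonlinear: bool, has_integer_vars: bool) -> str:
    if not is_nonlinear:
        return "glpk"  # LP / MIP
    return "scip" if has_integer_vars else "ipopt"

# A nonlinear problem must not be routed to GLPK:
print(choose_solver(is_nonlinear=True, has_integer_vars=False))  # ipopt
```

In Pyomo, the returned name would then be handed to `SolverFactory(...)`, as in the generated code shown elsewhere in this thread.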
Assigning a single specific solver to all problems would undoubtedly result in a decline in performance. Thank you for your question!\\n\\n## Response: About dataset labelling\\nThanks for your concerns about the dataset labeling! As mentioned above, the data used in this paper were labeled by 12 experts over a period of one month, which is truly a huge undertaking. In fact, we are continuing to collect and label new data from real industrial and financial scenarios. We will propose a comprehensive benchmark for leveraging LLMs to solve optimization problems, featuring a well-designed dataset and a complete evaluation process. \\n\\nWe also place great importance on protecting the rights of experts. To support this project, we established a collaboration fund between the anonymous enterprise and the anonymous university to ensure experts are fairly compensated. We also ensure a reasonable workload for them and provide dedicated working conditions. For instance, to reduce the typing burden, we provide each expert with access to GPT-4o APIs to assist in completing annotations efficiently. We firmly believe that high-quality data can only be produced when the rights of annotators are safeguarded.\\n\\n**References**: \\n\\n[1] Pan Lu, et al. MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts. ICLR 2024. \\n\\n[2] Zhengyang Tang, et al. ORLM: Training large language models for optimization modeling. CoRR, abs/2405.17743, 2024.\\n\\n[3] ORLM open-source model. [https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B](https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B)\\n\\n[4] Ali AhmadiTeshnizi, et al. OptiMUS: Scalable optimization modeling with (MI)LP solvers and large language models. ICML 2024.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your effort in preparing the rebuttal. It addresses some of the questions I raised. 
However, I still have concerns regarding the learning contribution and novelty of this paper, as well as the significant human effort required, which limits the scalability of the proposed method. I will maintain my score but remain open to discussion with the AC and other reviewers in the next stage.\\n\\nThank you.\"}", "{\"title\": \"Thank you\", \"comment\": \"I thank the authors for a detailed and committed rebuttal.\\n\\nThe latest round of responses and ablation study have addressed all my main concerns. I encourage the authors to incorporate all the latest results in future versions of this paper. As such, I am happy to recommend acceptance of this work, and have changed the rating accordingly.\\n\\nBest,\\n\\nThe Reviewer\"}", "{\"title\": \"Reply to Reviewer feJ7 (6/8)\", \"comment\": \"### Example 1:\\n\\nBased on LLMOPT, we developed an industrial copilot product (name omitted for anonymity), which has been widely applied to real-world scenarios such as financial lending, road planning, and travel optimization, in conjunction with our self-developed large-scale industrial solver (name omitted for anonymity). \\n\\nThe following is an example of large-scale application in the context of car insurance leads distribution. The problem involves setting upper and lower limits on allocations to ensure balanced distribution of users among insurance companies, optimizing callback effectiveness, and safeguarding the interests of partners. 
\\n\\n**Code generated by LLMOPT:**\\n\\n```python\\nimport pandas as pd\\nimport numpy as np\\nimport pyomo.environ as pyomo_env\\nfrom Anonymous.Python.Package import PyomoDistributedModelBuilder\\n\\n# Custom model builder class extending PyomoDistributedModelBuilder\\nclass myModelBuilder(PyomoDistributedModelBuilder):\\n def __init__(self, input_conf, output_conf, model_conf, data_conf, solver_conf, shard_id, shard_count):\\n # Initialize the parent class\\n super().__init__(input_conf, output_conf, model_conf, data_conf, solver_conf, shard_id, shard_count)\\n \\n def build_model(self):\\n # Build the optimization model by assembling datasets, variables, objectives, and constraints\\n model = self.model\\n model = self.read_dataset(model) \\n model = self._attach_variables(model) \\n model = self._attach_objective(model) \\n model = self._attach_local_constraints(model) \\n model = self._attach_global_constraints(model)\\n self.model = model\\n return model\\n\\n def read_dataset(self, model):\\n # Load data: define sets and parameters\\n model.I = Set(initialize=self.parse_sets(input_conf)) # Set of leads\\n model.J = Set(initialize=self.parse_sets(input_conf)) # Set of communities\\n model.p = pyomo_env.Param(model.I, model.J, initialize=self.parse_params(input_conf)) # Weights\\n model.u = pyomo_env.Param(model.J, initialize=self.parse_params(input_conf)) # Upper bounds\\n model.l = pyomo_env.Param(model.J, initialize=self.parse_params(input_conf)) # Lower bounds\\n return model \\n\\n def _attach_variables(self, model):\\n # Decision variable: `x[i, j]` represents assignment from lead `i` to community `j`\\n model.x = pyomo_env.Var(model.I, model.J, within=pyomo_env.NonNegativeReals, bounds=(0, 1))\\n return model\\n\\n def _attach_objective(self, model):\\n # Maximize the weighted sum of assignments\\n obj_expr = sum(model.p[i, j] * model.x[i, j] for i in model.I for j in model.J)\\n model.obj = pyomo_env.Objective(expr=obj_expr, 
sense=pyomo_env.maximize)\\n return model\\n\\n def _attach_local_constraints(self, model):\\n # Each lead must be assigned to exactly one community\\n def _attach_leads_sum_n(model, i):\\n return sum(model.x[i, j] for j in model.Com) == 1\\n model.x_sum = pyomo_env.Constraint(model.Leads, rule=_attach_leads_sum_n)\\n return model\\n\\n def _attach_global_constraints(self, model):\\n # Ensure total assignments to each community meet upper and lower bounds\\n for j in model.Com:\\n total_com = sum(model.x[i, j] for i in model.Leads)\\n self.add_global_ineq_constraint(model, self._global_ineq_constr_num, \\n pyomo_env.Constraint(expr=total_com <= model.u[j]))\\n self._global_ineq_constr_num += 1 \\n \\n self.add_global_ineq_constraint(model, self._global_ineq_constr_num, \\n pyomo_env.Constraint(expr=total_com >= model.l[j]))\\n self._global_ineq_constr_num += 1 \\n return model\\n```\"}", "{\"comment\": \"Dear Reviewer feJ7,\\n\\nWe have carefully revisited your question and provide explanations from the following two perspectives in the hope of addressing your concerns.\\n\\n**Problem scale**: We have successfully deployed LLMOPT in large-scale real-world industrial scenarios. For example, in the first case mentioned above (reply 6/8), it involves a financial optimization problem with over 2,000,000 users and 22 insurance companies. LLMOPT can correctly model the problem and leverage our anonymous solver to find a solution. We are confident in LLMOPT\\u2019s ability to handle problems of this scale effectively.\\n\\n**Limitations**: The primary challenge for LLMOPT lies in extracting and modeling optimization problems from complex and diverse natural language texts, rather than solving the problems directly (as solving them is the task of the solver). For instance, some problems may not explicitly state that \\\"the number of people must be a positive integer,\\\" yet such constraints can be inferred from common sense. 
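As a toy illustration of such an implicit constraint (hypothetical numbers, invented for this sketch rather than taken from the paper): under a continuous relaxation a staffing model can return a fractional head count, and the common-sense integrality requirement is what makes the answer sensible.

```python
import math

# Hypothetical staffing example: each person covers 10 units of work,
# and 25 units must be covered.
work_required = 25
work_per_person = 10

# Continuous relaxation: the "optimal" head count is fractional.
relaxed = work_required / work_per_person  # 2.5 people

# The implicit common-sense constraint, that the head count must be a
# non-negative integer, forces the feasible optimum up to the ceiling.
integral = math.ceil(relaxed)

print(relaxed, integral)  # 2.5 3
```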
Moreover, certain optimization problems require appropriate relaxation in modeling to be effectively solvable; otherwise, the solver might fail to find a solution. Currently, LLMOPT has achieved state-of-the-art performance on existing datasets. Additionally, we are actively collecting more complex problem descriptions and striving to enhance LLMOPT\\u2019s reasoning capabilities to enable more precise modeling and resolution of complex problems.\\n\\nAs the deadline approaches, we sincerely hope to address your concerns further. We appreciate your effort and constructive comment once again!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer iJY5 (6/6)\", \"comment\": \"## About KTO.\\n\\n### Response to Question 3: Equation (3) for the KTO description.\\n\\nThank you for your question! We apologize for the confusion caused by the notation in KTO, and we will improve this description in the paper.\\n\\nIn equation 3, we use the common notation in model alignment research. Similar notation is also used in studies such as [5, 6]. Specifically, $\\\\pi^*$ represents the optimal model, that is, the model after KTO alignment. $\\\\pi_{\\\\text{ref}}$ is the reference model, that is, the model after Supervised Fine-Tuning (SFT). In alignment research, the SFT model (reference model) is usually used as the initial model because it can already demonstrate a certain degree of ability (through supervised training with a large amount of labeled data). However, in order to eliminate model hallucinations and improve the comprehensive ability of the model, the SFT model is often not enough and needs to be aligned. In this process, the optimal model is the alignment target and is a model that is more in line with human needs, while the reference model is an unaligned model and is the benchmark for comparison.\\n\\nWe will add the relevant explanations in the paper. 
Thank you for your question!\\n\\n### Response to Question 4: Experiment setup of w/o KTO.\\n\\nWe apologize for the confusing experimental setup! The setup of w/o KTO in Table 3 is an ablation version of LLMOPT. Specifically, during the learning phase, only multi-instruction supervised fine-tuning (SFT) is performed and KTO alignment is not performed. That is, the model after supervised fine-tuning (LLM_{SFT} in Figure 2(b)) is directly used as the model for deployment in the auto-testing process. All other settings remain consistent with those of LLMOPT.\\n\\n\\nWe hope that our response has addressed your concerns, but if we missed anything please let us know.\\n\\n**References**:\\n\\n[1] OptiMUS: Optimization Modeling Using mip Solvers and large language models. [https://arxiv.org/pdf/2310.06116](https://arxiv.org/pdf/2310.06116)\\n\\n[2] OptiMUS website. [https://optimus-solver.com/](https://optimus-solver.com/)\\n\\n[3] ORLM open-source model. [https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B](https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B)\\n\\n[4] NL4Opt dataset. [https://huggingface.co/datasets/CardinalOperations/NL4OPT/viewer](https://huggingface.co/datasets/CardinalOperations/NL4OPT/viewer)\\n\\n[5] Kawin Ethayarajh, et al. Model alignment as prospect theoretic optimization. ICML 2024.\\n\\n[6] Rafael Rafailov, et al. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. NeurIPS 2023.\"}", "{\"title\": \"Reply to Reviewer JwJz (3/9)\", \"comment\": \"3. **Explanation of equation 3.**\\n\\nThank you for your meticulous review! Your understanding of DPO [15] is accurate, and the Equation 3 in our paper is also correct. It aligns with the reward expression in Equation 8 of the KTO paper [12] and the official KTO code [13]. **The misunderstanding stems from the fact that the reward function in KTO is different from that in DPO [15]**. 
This difference is explicitly clarified in Section 3.2 of the KTO paper [13], where the partition function is deliberately subtracted to ensure the validity of the reference point.\\n\\nLet us analyze this issue further. (In this paper, the data representations $u$ and $v$ correspond to $x$ and $y$ in KTO [13] and DPO [15], respectively. For consistency, we will use the notation $u$ and $v$ in the explanation below.)\\n\\n**Optimal policy in model alignment.** In both KTO [13] and DPO [15], the optimal policy can be expressed as $\\\\pi^*(v \\\\mid u) = \\\\frac{1}{Z(u)} \\\\pi_{\\\\text{ref}}(v \\\\mid u) \\\\exp \\\\left( \\\\frac{1}{\\\\beta} r(u, v) \\\\right)$ as shown in equation 4 in [15]. Here, the partition function $Z(u) = \\\\sum_v \\\\pi_{\\\\text{ref}}(v \\\\mid u) \\\\exp \\\\left( \\\\frac{1}{\\\\beta} r(u, v) \\\\right)$ is a constant that depends only on the input $u$, so that the value of the optimal policy $\\\\pi^*(v \\\\mid u)$ is normalized.\\n\\n**Reward function in DPO.** According to the expression of the optimal strategy $\\\\pi^*(v \\\\mid u)$, we can deduce that the reward function of DPO is $r(u, v) = \\\\beta \\\\log \\\\frac{\\\\pi^*(v \\\\mid u)}{\\\\pi_{\\\\text{ref}}(v \\\\mid u)} + \\\\beta \\\\log Z(u)$, where the partition function $Z(u)$ is a constant that is independent of $v$. In DPO [15], it is necessary to compare the relative merits of two different responses $v_1$ and $v_2$ for the same input $u$, and use the difference in their rewards to express the preference $p(v_1 \\\\succ v_2 \\\\mid u) = \\\\sigma \\\\left( r(u, v_1) - r(u, v_2) \\\\right)$. At this time, the partition function $Z(u)$ is eliminated, that is, the partition function has no effect on the preference of DPO.\\n\\n**Reward function in KTO.** Unlike DPO [15], the preference in KTO is no longer achieved by comparing $v_1$ and $v_2$, but refers to the human preference for $v$ in a global context, that is, the desirability of $v$. 
To achieve this preference, for question $u$, KTO uses $z_{\\\\text{ref}} = \\\\beta\\\\text{KL}(\\\\pi^*(v' \\\\mid u) \\\\mid \\\\mid \\\\pi_{\\\\text{ref}}(v' \\\\mid u))$ to represent the reference point to calculate the value function of answering $v$. The reward function in KTO is just the original reward shifted by an input-specific term (i.e., the partition function term is ignored) [13], resulting in the value function in KTO (as shown in equation 4 in the submitted paper) showing a form similar to the DPO preference function, achieving model alignment. KTO proves that this approach of omitting the partition function is equivalent to the original reward function (Lemma 1 in [13]).\\n\\nTherefore, KTO deliberately designs a special reward function based on the choice of the reference point (characterized by the absence of a partition function term) and demonstrates the validity of this reward formulation, successfully achieving model alignment [13].\"}", "{\"title\": \"Reply to Reviewer feJ7 (5/8)\", \"comment\": \"### Response to Question 4: Training and testing details of LLMOPT.\\n\\nThank you for your concerns! **In our submission, we have already provided the necessary details, including the hardware used for training, as well as the detailed parameters of SFT and KTO, in Appendix B.** Below, we provide a detailed explanation of the training time and computational requirements.\\n\\nDuring the training phase, LLMOPT uses Qwen-1.5-14B as the base model, and the FLOPS calculation includes both training and inference. For those prompt-based methods, the FLOPS calculation only accounts for the inference stage. OptiMUS [7] and Chain-of-Experts [8] both utilize GPT-4 as their inference engine. Based on publicly available information [9], we can roughly estimate that GPT-4 comprises approximately 16*110B model parameters. Here is the detailed calculation of FLOPS. \\n\\n1. 
**Single-step training FLOPS of LLMOPT.** The calculation for single-step training FLOPS in LLMOPT includes three components: forward FLOPS, backward FLOPS, and optimizer FLOPS. Forward FLOPS, comprising the embedding layer and transformer computations, are approximately $20.68 \\\\times 10^{12}$. Backward FLOPS are twice the forward FLOPS, resulting in approximately $41.36 \\\\times 10^{12}$. Optimizer FLOPS are calculated as $12 \\\\times 14 \\\\times 10^9 = 168 \\\\times 10^9 = 0.168 \\\\times 10^{12}$. Combining these, the total single-step training FLOPS is approximately $20.68 \\\\times 10^{12} + 41.36 \\\\times 10^{12} + 0.168 \\\\times 10^{12} = 62.21 \\\\times 10^{12}$. However, since fine-tuning uses **LoRA**, the actual single-step training FLOPS should be close to the forward FLOPS, approximately $20.68 \\\\times 10^{12}$.\\n2. **SFT and KTO FLOPS of LLMOPT.** Both SFT and KTO use LoRA for training, resulting in similar FLOPS requirements. The batch sizes for SFT and KTO are 24 and 4, respectively, with 3,000 and 30,000 training steps, and **training durations of approximately 26 hours and 72 hours**. Consequently, the total training FLOPS for LLMOPT can be calculated as $5.76 \\\\times 10^5 \\\\times 20.68 \\\\times 10^{12} + 9.6 \\\\times 10^5 \\\\times 20.68 \\\\times 10^{12} = 3.17 \\\\times 10^{19}$.\\n3. **Single inference FLOPS in LLMOPT.** The approximate FLOPS for a single inference in LLMOPT is $20.68 \\\\times 10^{12}$. \\n4. **Single inference FLOPS in those prompt-based methods.** For prompt-based methods like OptiMUS [7] and Chain-of-Expert [8], we focus solely on inference costs, disregarding training costs. Assuming that a single expert is activated during the inference stage, the approximate FLOPS required for a single inference with GPT-4 is $3.88 \\\\times 10^{15}$. \\n\\nIn summary, during the training of LLMOPT, SFT and KTO require approximately 26 hours and 72 hours, respectively. 
When the number of calls reaches 9,437, the FLOPS of LLMOPT will be lower than that of other prompt-based methods using GPT-4, which shows our cost advantage and huge application potential in large-scale usage.\\n\\n\\n### Response to Question 5: Examples of code generated by LLMOPT. \\n\\nAs mentioned in the response to Question 1, LLMOPT aims to enable LLMs to better model optimization problems and generate correct solver code. **There is no doubt that LLMs are capable of generating code to solve large-scale complex problems.** Here we provide the code generated by the LLM to solve the optimization problem.\"}", "{\"title\": \"Gentle Reminder of the Rebuttal Deadline\", \"comment\": \"Dear Reviewer feJ7,\\n\\nAs the deadline approaches, we sincerely hope to address your concerns and discuss the rebuttal with you further. If you have any questions, please feel free to ask directly! Moreover, if you find our response satisfactory, could you please kindly consider the possibility of updating the rating. Thank you very much for your valuable suggestion.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer izRU (3/5)\", \"comment\": \"## About generalization and the five-element formulation.\\n\\n### Response to Question 5: Detailed explanation of generalization capabilities\\n\\nThank you for your suggestion regarding the five-element formulation! Through two new experiments, we explain the generalization performance of LLMOPT from multiple perspectives.\\n\\n**Experiment 1: Comparative experiment on the generalization performance of the five-element formulation**.\\n\\nBased on 200 test data points from different scenarios in the NL4Opt and IndustryOR datasets, experiments are deployed based on GPT-4o to compare the accuracy of solutions under different formulations. 
The instruction used for the open formulation is: _Please model the following optimization problem, and then write the corresponding pyomo code based on the modeling to solve the problem._\\n\\nIn order to demonstrate the generalization performance of the five-element formulation, we classify the data according to the problem scenario (the number of data points is in brackets). The solving accuracy results under different formulations are shown in the following table:\\n\\n| | **w/o Formulation** | **Open Formulation** | **CoT [2]** | **Five-element Formulation** |\\n| :---: | :---: | :---: | :---: | :---: |\\n| Manufacturing (62) | 53.2% | 58.1% | 53.2% | **59.7%** |\\n| Health (23) | 65.2% | **78.3%** | 69.6% | **78.3%** |\\n| Retail (23) | 56.5% | **65.2%** | 60.9% | **65.2%** |\\n| Transportation (33) | 72.7% | 75.8% | **81.8%** | **81.8%** |\\n| Agriculture (16) | 56.3% | 56.3% | 56.3% | **68.8%** |\\n| Others (43) | 51.2% | 55.8% | 54.8% | **62.8%** |\\n| All (200) | 58.0% | 63.5% | 61.0% | **67.5%** |\\n\\nThe results show that: (a) **the five-element formulation has obvious advantages over other formulations in all scenarios;** (b) the solving accuracy of any formulation is higher than that of directly generating solving code. Using a formulation as an intermediate process is conducive to generating correct code.\\n\\n**Experiment 2: Generalization performance on a new dataset.**\\n\\nWe found a new dataset from the ICML 2024 Challenges on Automated Math Reasoning (Task 3) [3], whose data is not used for the training of LLMOPT. Since the ground truth of the test dataset is not open-sourced, we randomly selected 200 pieces from the training data as test data. 
The solving accuracy results are shown in the following table:\\n\\n| | **GPT-4o** | **GPT-4o + 5-elem** | **LLMOPT w/o 5-elem** | **LLMOPT** |\\n| :---: | :---: | :---: | :---: | :---: |\\n| The Competition Dataset | 78.5% | 81.5% | 87.0% | **89.5%** |\\n\\nFrom the above results, we can see that (a) **LLMOPT achieved the best performance on the new dataset,** which is 8.0% higher than GPT-4o; (b) on the new dataset, whether for LLMOPT or GPT-4o, the five-element formulation as an intermediate process brought significant performance improvement.\\n\\nIt is worth noting that, in the experimental results shown in Table 2 of the paper, the data in NLP4LP and ComplexOR did not participate in the learning process and data augmentation. However, **the results still show that LLMOPT improves by 11.8% and 6.0% on these two datasets respectively, achieving SOTA performance** and demonstrating the generalization performance of LLMOPT.\"}", "{\"title\": \"Gentle Reminder of the Rebuttal Deadline\", \"comment\": \"Dear Reviewer JwJz,\\n\\nAs the deadline approaches, we sincerely hope to address your concerns and discuss the rebuttal with you further. If you have any questions, please feel free to ask directly! Moreover, if you find our response satisfactory, could you please kindly consider the possibility of updating the rating. Thank you very much for your valuable suggestion.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes a learning-based LLM for solving Optimization Problems. The experimental results demonstrate the effectiveness of the proposed LLMOPT compared with prompt-based methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of this manuscript is clear.\\n\\n2. Some improvement over baseline results.\", \"weaknesses\": \"1. It is questionable whether LLMs are capable of solving large-scale optimization problems. 
If their applicability is limited to small-scale problems, such as the Traveling Salesman Problem with just 10 nodes, the significance of this method is unclear. A problem of this size can even be solved by the Hopfield network proposed in the 1980s. In fact, both traditional heuristics and more recent neural combinatorial optimization methods are capable of handling problems with thousands of nodes [1, 2].\\n\\n2. In Abstract, the authors mention, \\\"to prevent hallucinations in LLMs, such as sacrificing solving accuracy to avoid execution errors.\\\" However, I do not believe this phenomenon should be described as hallucination. Hallucination typically refers to instances where the output of an LLM appears reasonable but is, in fact, fabricated. In this case, reducing solving accuracy to prevent execution errors does not align with the conventional meaning of hallucination. Please provide a more precise explanation or reconsider this terminology.\\n\\n3. As shown in Table 3, the self-correction mechanism plays a decisive role in the performance of LLMOPT. However, this mechanism is not originally proposed in this manuscript. As the authors describe on Page 6, \\\"Inspired by Chen et al. (2024), to enhance optimization generalization, we implement self-correction to automatically analyze the output results and identify errors arising during the execution of the solver code.\\\" Furthermore, the prior study [3] has also employed this mechanism to improve the performance of LLMs in solving vehicle routing problems.\\n\\n4. In Section 4, the authors do not report the training time of LLMOPT, making it unfair to directly compare it with prompt-based methods.\\n\\n5. Please provide the code generated by the LLM to solve the optimization problem. I am doubtful that current LLMs are capable of generating code sophisticated enough to solve medium- or large-scale problems.\\n\\n[1] Fu Luo, et al. 
Neural combinatorial optimization with heavy decoder: Toward large scale generalization. In Proceedings of the Advances in Neural Information Processing Systems, 2023.\\n\\n[2] Huigen Ye, et al. GNN&GBDT-guided fast optimizing framework for large-scale integer programming. In Proceedings of International Conference on Machine Learning, pp. 39864\\u201339878. 2023.\\n\\n[3] Zhehui Huang, et al. Can Large Language Models Solve Robot Routing?. arXiv, 2024.\", \"questions\": \"Please refer to the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer feJ7 (8/8)\", \"comment\": \"We hope that our response has addressed your concerns, but if we missed anything please let us know.\\n\\n**References**: \\n\\n[1] Fu Luo, et al. Neural combinatorial optimization with heavy decoder: Toward large scale generalization. In Proceedings of the Advances in Neural Information Processing Systems, 2023.\\n\\n[2] Huigen Ye, et al. GNN&GBDT-guided fast optimizing framework for large-scale integer programming. In Proceedings of International Conference on Machine Learning, pp. 39864\\u201339878. 2023.\\n\\n[3] GLPK (GNU Linear Programming Kit) solver. [https://www.gnu.org/software/glpk/](https://www.gnu.org/software/glpk/)\\n\\n[4] IPOPT (Interior Point OPTimizer) solver. [https://coin-or.github.io/Ipopt/](https://coin-or.github.io/Ipopt/)\\n\\n[5] SCIP (Solving Constraint Integer Programs) solver. [https://www.scipopt.org/index.php](https://www.scipopt.org/index.php)\\n\\n[6] ORLM open-source model. [https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B](https://huggingface.co/CardinalOperations/ORLM-LLaMA-3-8B)\\n\\n[7] Ali AhmadiTeshnizi, et al. OptiMUS: Scalable optimization modeling with (MI)LP solvers and large language models. ICML 2024.\\n\\n[8] Ziyang Xiao, et al. Chain-of-Experts: When LLMs meet complex operations research problems. 
ICLR 2024.\\n\\n[9] GPT4: All Details Leaked. [https://medium.com/@daniellefranca96/gpt4-all-details-leaked-48fa20f9a4a](https://medium.com/@daniellefranca96/gpt4-all-details-leaked-48fa20f9a4a)\"}", "{\"title\": \"Reply to Reviewer iJY5\", \"comment\": \"Thank you for your response! We're happy to hear that some of your concerns have been resolved.\\n\\n**In order to deal with the optimization generalization issue effectively, LLMOPT is the first learning-based approach to propose learning-to-define and an intact framework for leveraging LLMs to define and solve general optimization problems, achieving state-of-the-art performance across various tasks**. Unlike ORLM, which focuses solely on data, LLMOPT designs a comprehensive learning process. LLMOPT also introduces the universal five-element formulation to define various optimization problems and designs the self-correction mechanism. Moreover, LLMOPT achieves SOTA optimization generalization performance across a wide range of optimization tasks, outperforming both prompt-based methods like OptiMUS and learning-based approaches like ORLM.\\n\\nHigh-quality data is crucial for research in every field. We believe that someone always needs to take the first step, investing time and effort to accomplish this task initially. We will propose a comprehensive benchmark for leveraging LLMs to solve optimization problems, featuring a well-designed dataset and a complete evaluation process. We are currently utilizing a range of automated data augmentation techniques combined with MCTS to generate simulation data, aiming to enhance our dataset and address the issue of data scalability.\\n\\nThank you again for your kind questions! We would greatly appreciate the opportunity to discuss further with you to address your concerns, and please feel free to ask.\"}", "{\"title\": \"Reply to Reviewer JwJz (8/9)\", \"comment\": \"3. **Performance by problem type.**\\n\\nThank you for your valuable question! 
In order to demonstrate the generality of LLMOPT, we re-analyzed all experimental results and classified the types of optimization problems involved in each dataset, including Linear Programming (LP), Integer Programming (IP), Mixed Integer Programming (MIP), Nonlinear Programming (NP), Combinatorial Optimization (CO), Multi-objective Programming (MOP) and Others. The following table shows **the SA performance of LLMOPT in solving various types of problems on different datasets**. The data is the _solving accuracy (SA)_, and the values in brackets represent _(number of problems solved correctly/number of problems in the dataset)_. Detailed statistics are as follows:\\n\\n| | **NL4Opt** | **MamoEasy** | **MamoComplex** | **IndustryOR** | **NLP4LP** | **ComplexOR** |\\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| **LP** | 90.5% (38/42) | - | 76.0% (19/25) | 60.0% (12/20) | 77.8% (7/9) | 60.0% (3/5) |\\n| **IP** | 95.8% (46/48) | 100.0% (36/36) | 100.0% (5/5) | 45.5% (5/11) | 0.0% (0/1) | - |\\n| **MIP** | 90.0% (9/10) | 95.3% (61/64) | 73.9% (17/23) | 47.7% (21/44) | 95.2% (20/21) | 80.0% (4/5) |\\n| **NP** | - | - | 100.0% (1/1) | - | 100.0% (3/3) | - |\\n| **CO** | - | - | 60.0% (21/35) | 33.3% (3/9) | 33.3% (1/3) | 100.0% (1/1) |\\n| **MOP** | - | - | - | 37.5% (3/8) | - | - |\\n| **Others** | - | - | 50.0% (5/10) | 25.0% (2/8) | - | - |\\n| **Total** | **93.0% (93/100)** | **97.0% (97/100)** | **68.0% (68/100)** | **46.0% (46/100)** | **83.8% (31/37)** | **72.7% (8/11)** |\\n\\nThe results show that **LLMOPT is universal in various types of optimization problems and can solve almost all kinds of optimization problems.** However, due to the different distribution of problem formulating difficulties, the accuracy of LLMOPT on these datasets is also different, which will be one of the focuses of future work.\\n\\n\\n## Response to Questions about the solver.\\n\\nThank you for your attention to solver fairness! 
We provide a detailed description of the solvers used by LLMOPT and the comparison methods here.\\n\\n1. **Solvers used by LLMOPT.** We use Pyomo [5] code in LLMOPT, which is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities. However, Pyomo is just a modeling language. In the learning process of LLMOPT, the training data consists of calling three open-source solvers (GLPK [6], IPOPT [7], and SCIP [8]) with the help of Pyomo code. Which solver to use for inference is at the discretion of the LLM. The codes used for training are labeled by experts as mentioned above.\\n2. **An introduction of the solvers.** GLPK (GNU Linear Programming Kit, [6]) focuses on solving linear programming (LP) and mixed integer linear programming (MILP) problems, and is suitable for small to medium-sized problems. IPOPT (Interior Point OPTimizer, [7]) is designed for solving large-scale nonlinear optimization problems (NLP), supports sparse constraints and quadratic programming, and performs well in the field of continuous optimization. SCIP (Solving Constraint Integer Programs, [8]) is a powerful and flexible solver that supports mixed integer nonlinear programming (MINLP) and complex constrained optimization (CP) problems, and is suitable for handling large-scale complex problems. **These three optimizers cover most types and sizes of optimization problems, and are all open source.**\\n3. **Solver details of baselines.** In OptiMUS [9] and Chain-of-Experts [10], the solver used is determined by the 'Solver' keyword in the SNOP representation, which is automatically determined by the method. In ORLM [11], the open source model [1] has lost its generalization performance and can only use the coptpy solver.\\n\\nThanks again for your attention to the solver details! 
We will add these details to the paper.\\n\\nWe hope that our response has addressed your concerns, but if we missed anything please let us know.\"}", "{\"comment\": \"Thank you for the detailed response. I have thoroughly reviewed the authors' response and their replies to other reviewers. At this time, I do not have any additional questions for the authors. I appreciate the authors' effort in explaining the contribution of the paper, but I would like to keep the score for now as I'm not sure if the paper's contribution is sufficient for ICLR. As mentioned previously, I remain open to discussions with the AC and other reviewers in the next stage.\"}", "{\"title\": \"Reply to Reviewer izRU (4/5)\", \"comment\": \"### Response to Question 2: In-depth analysis of five-element formulation.\\n\\nIn order to generate the correct solver code, we believe that LLM should first learn to define the optimization problem, so a formulation is needed as an intermediate process for generating code. When choosing a suitable formulation, on the one hand, we considered that **the formulation should correspond one-to-one with the definition of the optimization problem**, that is, the problem that can be written as formula (1) in Section 3.2.1 of the paper can be written with this formulation. On the other hand, since the mathematical expression is too abstract, **this formulation should include a more detailed description of the problem than formula (1)** **and should be easier to convert into Pyomo code**. 
Considering the above two aspects, we propose the five-element: a formulation that aligns one-to-one with the mathematical definition of the optimization problem while also incorporating the necessary problem description.\\n\\nIntuitively, **any optimization problem that can be expressed in the form of formula (1) can be fully described by the five-element formulation, as the five-element is designed to correspond one-to-one with formula (1).** Specifically, variables, objectives, and constraints represent the core components of the optimization problem. To facilitate better code generation, sets and parameters are included as optional descriptions, completing the five-element formulation.\\n\\nIn order to illustrate that the five-element formulation are applicable to various optimization problems, we have given 7 examples of using the five-element to define different optimization problems in Appendix J, including linear programming, integer programming, mixed-integer programming, nonlinear programming, and combinatorial optimization (0-1 knapsack problem, traveling salesman problem), to illustrate the applicability of the five-element to various problems.\\n\\n## About self-correction.\\n\\n### Response to Question 3: Comparisons performance analysis to self-correction mechanism.\\n\\nTo further explore the superiority of self-correction and LLMOPT, we deploy experiments on the NL4Opt and IndustryOR datasets (NL4Opt has relatively simple problems, while IndustryOR has relatively complex problems). We change two correction mechanisms, one is correction by GPT-4o with the same prompt of self-correction, and the other is to repeat the inference 12 times and manually judge the optimal solution (which means that only one optimal solution needs to be found in 12 repeated experiments). The reason we chose 12 is that self-correction is limited to a maximum of 12 repeated checks, so this is fair. We also conducted experiments on GPT-4o and ORLM [4]. 
(We reproduced the open source model of ORLM [4], but found that this model seems to have lost all abilities other than writing coptpy code for optimization problems. We find that ORLM has a serious seesaw problem, which manifests as a loss of generalization ability, leaving it unable to answer other questions. Therefore, only the \"Best of 12 repeats\" correction mechanism is evaluated.) The results are as follows:\\n\\n| **Inference Model** | **Correction Mechanism** | **IndustryOR Dataset** | **NL4Opt Dataset** |\\n| :---: | :---: | :---: | :---: |\\n| LLMOPT (Qwen-1.5) | Self-correction | **46.0%** | **93.0%** |\\n| LLMOPT (Qwen-1.5) | Correction by GPT-4o | 41.0% | 89.0% |\\n| LLMOPT (Qwen-1.5) | Best of 12 repeats | 42.0% | 89.0% |\\n| GPT-4o | Correction by GPT-4o | 34.0% | 84.0% |\\n| GPT-4o | Best of 12 repeats | 32.0% | 84.0% |\\n| ORLM [4] | Best of 12 repeats | 39.0% | 88.0% |\", \"the_results_show_that\": \"(a) When LLMOPT (Qwen-1.5) is used as the inference model, the correction performance of GPT-4o is lower than the self-correction solving accuracy of LLMOPT. This indicates that the Qwen-1.5 model learned by LLMOPT shows stronger overall capabilities in both solving optimization problems and correction compared to other methods. (b) Although manually selecting the best result from 12 repetitions shows performance improvement (counting a problem as solved correctly if one out of 12 repetitions is accurate), it still falls short of the effectiveness of the self-correction mechanism. **This highlights that identifying and correcting errors is more critical than simply repeating executions, emphasizing the necessity of implementing a correction mechanism.**\"}", "{\"title\": \"Reply to Reviewer JwJz (2/9)\", \"comment\": \"## Response to concerns about KTO. (Weakness 2 and Questions about equations)\\n\\nThank you for your careful review! 
We apologize that the introduction of KTO is confusing, and we will carefully revise the description of KTO in the paper. Here we answer your concerns about KTO in Weaknesses and Questions one by one.\\n\\n1. **Purpose of model alignment and KTO.**\\n\\nThank you for your thoughtful feedback. The word \\\"alignment\\\" may be ambiguous, especially for readers in different fields. We apologize for not providing more detailed background information. In fact, aligning generative models with human feedback has been successfully used to make generations more helpful, factual, and ethical, among other desiderata [13, 15].\\n\\n**It is important to clarify that the targets and methodologies of model alignment and supervised fine-tuning (SFT) are different.** Alignment primarily aims to address the issue of hallucination in our work. In this paper, our goal is to enable LLMs to formulate optimization problems and generate corresponding solving code. However, despite SFT training the model to learn how to write solving code, LLMs may still exhibit hallucination when faced with novel problems. This hallucination manifests as outputs that appear plausible but are, in fact, fabricated or inaccurate.\\n\\nA simple yet typical example of hallucination involves the handling of strict inequality constraints (e.g., `>` and `<`). Most solvers cannot directly process such constraints; however, an LLM trained solely with SFT might incorrectly define these conditions when generating code. This error arises from flawed reasoning, where the LLM falsely analogizes Python-supported `>` and `<` operators to optimization problem modeling. The correct approach, however, is to approximate strict inequalities by converting them into non-strict inequalities with a small positive margin. This is a classic case of hallucination: the generated code appears plausible but is fundamentally incorrect. 
In such scenarios, **model alignment can be performed after SFT to address this issue.** By introducing preference-based interventions (e.g., explicitly marking certain samples as incorrect with desirability = False in the KTO framework used in this paper), hallucination can be mitigated. This results in logically sound models and more executable, standardized solving code. Therefore, alignment is both necessary and critical for achieving robust performance.\\n\\nThank you again for your question! We will clarify the concept of alignment more thoroughly, provide sufficient citations, and offer a more detailed explanation of the purpose of KTO in the paper.\\n\\n2. **Datasets of SFT and KTO.**\\n\\nWe apologize for not clearly expressing the SFT and KTO dataset splits! We have clarified the details of the dataset in the _response to questions about the data augmentation and labeling by experts_ in the next response section. The data labeled as True in the KTO dataset is consistent with the data in the SFT dataset, with a total of 9,828 entries. The number of the False data in the KTO dataset is 9,735. Thank you again for your valuable comments!\"}", "{\"title\": \"Reply to Reviewer iJY5 (2/6)\", \"comment\": \"### Response to concern in Weakness 2: Compared with ORLM\\n\\nWe appreciate your recognition of fine-tuning LLMs for optimization modeling tasks as a novel area of research. The difference between LLMOPT and ORLM:\\n\\n1. **ORLM focuses on data augmentation methods, while LLMOPT focuses on how learning is conducted.** Although ORLM introduced four kinds of data augmentation methods, it does not focus on the learning process and without comprehensively evaluate model performance. In contrast, LLMOPT designs a detailed process for data, learning, and auto-testing. It not only declares the learning workflow at the methodological level (e.g., multi-instruction SFT and model alignment) but also conducts a thorough evaluation of model performance. 
**Therefore, LLMOPT is the first approach to explore both what to learn and how to learn.**\\n2. **ORLM focuses solely on generating solution code, whereas LLMOPT addresses both the formulating and solving of optimization problems.** Specifically, ORLM performs a straightforward task: inputting an optimization problem and directly inferring the corresponding solver Python code. In contrast, LLMOPT introduces a new learning task **Learning to Define** as a general formulation for optimization problems, enabling the generation of more accurate code. By using the five-element formulation as an intermediate step, LLMOPT can clearly define the problem and identify potentially overlooked hidden conditions, resulting in higher-quality code generation.\\n3. **LLMOPT conducted comprehensive seesaw tests (see Section 5 and Appendix E in our paper), while ORLM has largely lost its ability to solve other basic problems.** We have reproduced and evaluated ORLM\\u2019s performance using the open-source model provided in [1]. The results show that all ORLM (based on LLaMA-3-8B) can do is generate Coptpy solver code; the ORLM model cannot answer any other questions (e.g., _If all cats can climb trees, and Mike\\u2019s pet is a cat, then can Mike\\u2019s pet climb trees?_). This indicates that ORLM has significantly lost its capability to solve basic problems other than optimization.\\n4. **The additional experiments show the superior generalization performance of LLMOPT compared to ORLM**. We found a new dataset from the _ICML 2024 Challenges on Automated Math Reasoning (Task 3)_ [2], which was not used in the training of either LLMOPT or ORLM. Since the test data for this dataset does not have open-source ground truth, we randomly sampled 200 instances from its training dataset to serve as the test data. 
The solving accuracy results are as follows.\\n\\n| | **GPT-4o** | **ORLM** | **LLMOPT** |\\n| :---: | :---: | :---: | :---: |\\n| The Competition Dataset [2] | 78.5% | 84.0% | **89.5%** |\\n\\nThe results show that (a) **Compared to ORLM, LLMOPT shows better generalization performance even on a completely new dataset.** (b) Both LLMOPT and ORLM outperform GPT-4o, highlighting the potential of learning-based approaches in solving optimization problems.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for your detailed response. However, I still have some concerns. As I mentioned in W1 and W5, can LLMOPT effectively address large-scale optimization problems? The example provided in your response (Part 7) does not appear to represent a large-scale optimization problem (please correct me if I am mistaken). Could you provide an example demonstrating how LLMOPT can solve large-scale optimization problems, such as a TSP instance with 1,000 nodes? If not, could you clarify the current limitations on the problem scale that LLMOPT can handle? I am confident that this will make a significant contribution to advancing this field.\"}", "{\"comment\": \"Thank you for your response! For LLMOPT, the difficulty of correctly formulating mathematical models is the same regardless of the scale of the optimization problem. The difference is that small-scale problems derive data from the problem description, whereas large-scale problems rely on data files. LLMOPT focuses solely on mathematical modeling and invoking solvers. 
Whether a large-scale problem (e.g., large-scale TSP) can be correctly solved is the responsibility of the professional solver and falls outside the scope of LLMOPT\\u2019s capabilities.\\n\\nWe would like to emphasize that LLMOPT focuses on finding out a feasible learning-based way to formulate and solve optimization problems automatically, which achieves an average improvement of 11.08% in optimization generalization on a wide range of the existing common optimization benchmarks.\"}" ] }
9OJflnNu6C
Controllable Unlearning for Image-to-Image Generative Models via $\epsilon$-Constrained Optimization
[ "XiaoHua Feng", "Yuyuan Li", "Chaochao Chen", "Li Zhang", "Longfei Li", "JUN ZHOU", "Xiaolin Zheng" ]
While generative models have made significant advancements in recent years, they also raise concerns such as privacy breaches and biases. Machine unlearning has emerged as a viable solution, aiming to remove specific training data, e.g., containing private information and bias, from models. In this paper, we study the machine unlearning problem in Image-to-Image (I2I) generative models. Previous studies mainly treat it as a single-objective optimization problem, offering a solitary solution, thereby neglecting the varied user expectations towards the trade-off between complete unlearning and model utility. To address this issue, we propose a controllable unlearning framework that uses a control coefficient $\epsilon$ to control the trade-off. We reformulate the I2I generative model unlearning problem into an $\epsilon$-constrained optimization problem and solve it with a gradient-based method to find optimal solutions for unlearning boundaries. These boundaries define the valid range for the control coefficient. Within this range, every yielded solution is theoretically guaranteed to be Pareto optimal. We also analyze the convergence rate of our framework under various control functions. Extensive experiments on two benchmark datasets across three mainstream I2I models demonstrate the effectiveness of our controllable unlearning framework.
[ "Machine unlearning", "Generative model", "Controllable" ]
Accept (Poster)
https://openreview.net/pdf?id=9OJflnNu6C
https://openreview.net/forum?id=9OJflnNu6C
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y6Q9ZSnkro", "pEJxB56MEG", "kn9Gs6YsaT", "baAtzrJP2Q", "XjbUuaL3Bu", "UfMzzDcgvF", "UJh3OAk9t2", "Pc1IIj64AQ", "GT3Qwdt2h5", "8DitX2kdyd", "8BbdPPDc9l", "8AWb3hHFFn", "5PUCp0evO9", "15KBONr8Dm" ], "note_type": [ "meta_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1734786347124, 1730700990092, 1730711432992, 1732247119077, 1732420250727, 1732261038890, 1731598730827, 1737523471898, 1731598932813, 1732405395195, 1731598845931, 1731598524184, 1730623267505, 1731599031874 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1859/Area_Chair_x9GP" ], [ "ICLR.cc/2025/Conference/Submission1859/Reviewer_icG6" ], [ "ICLR.cc/2025/Conference/Submission1859/Reviewer_NJMm" ], [ "ICLR.cc/2025/Conference/Submission1859/Reviewer_icG6" ], [ "ICLR.cc/2025/Conference/Submission1859/Authors" ], [ "ICLR.cc/2025/Conference/Submission1859/Reviewer_icG6" ], [ "ICLR.cc/2025/Conference/Submission1859/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1859/Authors" ], [ "ICLR.cc/2025/Conference/Submission1859/Reviewer_NJMm" ], [ "ICLR.cc/2025/Conference/Submission1859/Authors" ], [ "ICLR.cc/2025/Conference/Submission1859/Authors" ], [ "ICLR.cc/2025/Conference/Submission1859/Reviewer_hwTN" ], [ "ICLR.cc/2025/Conference/Submission1859/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a novel \\\\epsilon constrained optimization problem to trade off unlearning and model utility in model unlearning, which is different from previous methods that consider only a single objective. The proposed method is supported by both theoretical guarantees and experimental results on two benchmark datasets. 
Given the positive feedback from all reviewers, I recommend accepting this paper.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers are generally positive about this paper. The discussion is around some technical details, and the authors provided detailed feedback in the rebuttal. The authors provided more discussions on the differences from previous methods and added more experiments in the appendix. Also, the computational complexity of the proposed algorithm is explained in detail. I weighted 70% on the ratings provided by the reviewers and 30% for the improved points in the rebuttal.\"}", "{\"summary\": \"This submission formulates the controllable I2I unlearning problem as an $\\epsilon$-constrained problem, which differs from the prior objective. By reformulating the problem as an $\\epsilon$-constrained bi-objective function, two Pareto optimal solutions and the valid range of the control coefficient $\\epsilon$ can be obtained. Furthermore, the authors provide a theoretical analysis of the convergence of the proposed method under various control functions used to govern the direction of parameter updates. The experimental results on two well-known benchmarks show its effectiveness over the mentioned baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed method is sound. The proposed method reformulates the I2I unlearning problem by integrating the $\\epsilon$-constrained method, which is widely used in multi-objective optimization. This integration makes the unlearning degree controllable and brings a few theoretical merits, such as convergence analysis.\", \"This submission is well written and organized, which reduces the difficulty in reading and comprehending.\"], \"weaknesses\": [\"This could be an improvement of [1] based on the $\\epsilon$-constrained method. 
Technically, please provide the specific design of $\\epsilon$-constrained optimization for the I2I unlearning problem. And why is the $\\epsilon$-constrained method required to integrate with the I2I unlearning problem?\", \"Some claims are not evaluated. For instance, in line 70, how is the challenge ``First and foremost, this approach offers a solitary resolution,..\\u2019\\u2019 addressed?\", \"Evaluation of different crop sizes should be conducted. In practice, not only the degree of forgetting but also the size of the crop area is defined by users.\", \"[1] Machine unlearning for image-to-image generative models, ICLR 2024\"], \"questions\": [\"Why are the results of Composite Loss different from those reported in [1]? Please provide more details about the implementation differences.\", \"According to Fig.4, why does the visualization of the retained set of MAE change after unlearning? This is quite different from [1].\", \"Can you provide experimental results to demonstrate that the proposed method enjoys better unlearning efficacy than other methods? Theoretical results sometimes differ from real practice.\", \"[1] Machine unlearning for image-to-image generative models, ICLR 2024\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a machine unlearning approach for generative image-to-image models. Machine unlearning algorithms in the generative domain aim to make the model forget a specific subset of samples (e.g., defined by classes) while retaining its generalization capability on the other samples, in order to address issues related to privacy and biases. The paper proposes a controllable unlearning algorithm flexible enough to balance between the quality/degree of unlearning concepts and the model\\u2019s generalization capabilities. 
The approach uses a gradient-based method to solve a constrained optimization objective, where the constraint is to forget a specified set while retaining reconstruction quality on the remaining samples. The paper also provides a theoretical analysis of its approach using Pareto optimality. The paper shows quantitative and qualitative results on in-painting/out-painting tasks to demonstrate the efficacy of the proposed approach.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The paper explores an unlearning approach for generative image-to-image models that uses a gradient-based method to solve a constrained optimization objective.\\nThe paper explains the issues present in the current machine unlearning domain and addresses these issues using a controllable optimization where the users have control over the unlearning optimization (model unlearning while maintaining model generalization). The proposed framework shows better results on the ImageNet-1k and Places-365 datasets for in-painting tasks compared to other baseline unlearning approaches. The paper provides detailed ablation experiments and theoretical analysis to explain its proposed algorithm. The paper is well-written, easy to follow, and contains a pseudocode that explains the methodology clearly.\", \"weaknesses\": \"It would be helpful for the reader to see some discussion around the robustness of the concept removal. For example, is it possible to use some attack that resurfaces the forget set, as shown in the paper by Petsiuk, Vitali, and Kate Saenko. \\\"Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models.\\\" arXiv preprint arXiv:2404.13706 (2024).\", \"it_would_be_helpful_for_the_readers_if_some_more_related_unlearning_papers_are_added_as_references\": \"[1] Petsiuk, Vitali, and Kate Saenko. 
\\\"Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models.\\\"\\u00a0arXiv preprint arXiv:2404.13706\\u00a0(2024)\\n\\n[2] Kumari, Nupur, et al. \\\"Ablating concepts in text-to-image diffusion models.\\\"\\u00a0Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\", \"questions\": \"It would be helpful if the paper can answer/comment on the following question/suggestion:\\n\\n1. Is it possible to formulate the unlearning objective that simply in-paints with background content ( i.e. instead of predicting a gaussian type patch in the image for in-painting task, the model predicts the background and does not generate the subject that is to be forgotten). Does this require modification in the formulation that uses Divergence(P_Xf | N(0, sigma)) as condition.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you. Most of my concerns have been addressed. I still have a question about the problem modeling in this submission. The description in Sec. 4.2 suggests that the bi-level unlearning optimization problem could be formulated with the $\\\\epsilon$-constraint methods in multiobjective optimization. To solve such a problem, the authors adopt a special variant of Sequential Quadratic Programming (SQP)[1]. I have not found any special design for formulating the unlearning optimization problem. Can I consider the controllable unlearning framework to be the key technical contribution of this submission?\\n\\n[1] Numerical optimization: theoretical and practical aspects, Springer Science & Business Media, 2006.\"}", "{\"comment\": \"Thank you once again for your valuable comments. We have added the response to Weaknesses in Section 2.2 and Appendix H.1, and the response to Questions is included in Appendix B.\"}", "{\"comment\": \"Thank you! 
I acknowledge your motivation to reformulate the unlearning optimization problem with the $\\epsilon$-constraint method. From my perspective, the ''two more technical contributions'' should be the properties of the proposed reformulation. In general, I am happy to maintain my positive score.\"}", "{\"comment\": \"**1. Weaknesses:** Some discussions around the robustness of the concept removal would be helpful for readers.\\n\\n**Response:** Thank you for your suggestions. We agree that discussing the robustness of concept removal is indeed essential. For concept unlearning within text-to-image models, where the target for unlearning is an abstract concept, such as the \\\"Van Gogh\\\" style or an entity concept like \\\"Musk,\\\" the goal is to ensure that the model's output images do not contain the specified concept. In such cases, the robustness of unlearning can be validated by determining whether it is possible to restore the concept (e.g., through attacks). \\n\\nHowever, this paper focuses on unlearning a set of samples or a distribution, meaning eliminating the knowledge the model has learned from these samples so that it cannot reconstruct this knowledge under any input conditions. In this scenario, we cannot directly verify robustness through attacks as in concept unlearning. Instead, we follow the setup of [1], simulating input variability by altering aspects of the input image, such as adjusting the crop region\\u2019s position or scale, to validate the robustness of unlearning. The results have been reported in Appendix G.1. Additionally, we believe your insights significantly contribute to our research, and incorporating a discussion on this issue will aid readers in better understanding it. Therefore, we have added a corresponding discussion in Appendix G.1 and supplemented Appendix G.1 with all the references you mentioned (i.e., [2][3]). 
\\n\\n**Additional Discussion.** Regarding your point on concept unlearning, we agree that discussing its robustness is crucial and is a key area for future research. Current methods for concept unlearning have not yet achieved the desired robustness for three main reasons: \\n\\n- First, these methods typically define concepts using text in text-to-image generation models. However, concepts are inherently relative for humans, and defining them textually introduces ambiguity since different textual expressions may represent the same concept to people. \\n\\n- Second, most existing work on concept unlearning focuses on breaking the mapping from text to image, which means that given certain text, the model fails to generate images containing the forgotten concept. However, what is actually needed is that the model should be unable to generate images containing the forgotten concept under any text input. \\n\\n- Third, considering the interconnections between concepts, where one concept might be composed of several other concepts, simply unlearning the specified concept may not be sufficient. \\n\\nTherefore, we believe that focusing on the robustness of concept unlearning is essential.\\n\\n**2. Questions:** Can inpainting an image (using background content) be used as a substitute for the unlearning target?\\n\\n**Response:** We deem that describing the unlearning target as inpainting an image using only background content is feasible to some extent, for example in concept unlearning. For instance, if we aim to protect privacy by unlearning parts of an image generation model that contain personal information (i.e., an abstract concept), we can first identify the region of the image containing such information, then simply mask this region, and subsequently generate a new image through inpainting, ensuring that the model\\u2019s output aligns with the inpainted new image. 
However, this approach has two issues:\\n\\n- Firstly, it must be ensured that the new image generated through inpainting does not contain the information that needs to be forgotten. We believe this can be accomplished by incorporating an additional adversarial discriminator using GAN training strategies or by employing reinforcement strategies.\\n\\n- Secondly, aligning the model's output with the inpainted new image merely confuses the knowledge learned by the model, increasing uncertainty during generation, which constitutes a superficial form of unlearning. However, based on our experimental experience, if the goal is merely to erase the influence of certain samples on the model, directly aligning with Gaussian noise may yield a more pronounced unlearning effect.\\n\\n[1] Li, Guihong, et al. \\\"Machine unlearning for image-to-image generative models.\\\" arXiv preprint arXiv:2402.00351 (2024).\\n\\n[2] Petsiuk, Vitali, and Kate Saenko. \\\"Concept Arithmetics for Circumventing Concept Inhibition in Diffusion Models.\\\" arXiv preprint arXiv:2404.13706 (2024).\\n\\n[3] Kumari, Nupur, et al. \\\"Ablating concepts in text-to-image diffusion models.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**1. Weaknesses:** The definition of unlearning completeness in the paper is problematic.\\n\\n**Response:** Thank you for your insights. This work focuses on improving [1]. Thus, we adhere to existing settings, conducting unlearning at the sample level. Our goal is to eliminate the influence of these samples on the model, or in other words, to erase the knowledge the model has learned from these samples. We fully agree with your point, particularly when assuming the unlearning target is a distribution, or an abstract concept. 
In such cases, the unlearning target should be the data manifold or a subspace within the data representation space, which could have broader applications in real-world scenarios, such as typical style erasure cases. This is an issue that requires further exploration in future research. \\n\\nFortunately, our framework can be simply extended to accommodate this setting. For instance, as you mentioned, if the generative model's unlearning involves the original image manifold, we can shift the loss in the unlearning target from a pixel space metric to a manifold metric on the image, or introduce manifold regularization in the outputs, and then optimize and adjust based on our unlearning framework. Thank you again for your insightful suggestions, which are very helpful for improving our future work!\\n\\n**2. Weaknesses:** A detailed computational complexity comparison and a discussion on memory usage would be helpful.\\n\\n**Response:** Thank you for your suggestion. We conduct a brief analysis as follows. Assuming we use the Adam optimizer, in each iteration, the computational complexity for methods involving only a single model gradient (i.e., other baselines) arises from the following sources:\\n1. Loss computation: The forward pass to compute the loss has a complexity of $O(N\\u00d7P)$, where $N$ is the number of samples and $P$ is the number of parameters.\\n2. Gradient computation: The backward pass to compute the gradient of the loss has a complexity of $O(N\\u00d7P)$.\\n3. First moment update: Adam updates the first moment estimate (mean) $m_t$. This requires $O(P)$ operations, as each parameter's moment is updated using the computed gradient.\\n4. Second moment update: Similarly, updating the second moment estimate (variance) $v_t$ also requires $O(P)$ operations.\\n5. 
Parameter update: Adam uses the updated first and second moments to modify the parameters, which requires $O(P)$ operations.\\n\\nThus, the total computational complexity for each iteration is: $O(N\\u00d7P)$\\u00a0(loss\\u00a0computation) + $O(N\\u00d7P)$\\u00a0(gradient\\u00a0computation) + $O(P)$\\u00a0(moments\\u00a0and\\u00a0update) = $O(N\\u00d7P)$.\\n\\nFor each iteration in our algorithm, the computational requirements are:\\n1. Loss computation: The forward pass to compute both $f_1(\\\\theta_t)$ and $f_2(\\\\theta_t)$ requires two forward passes, each with a complexity of $O(N\\u00d7P)$, so this step has a total complexity of $O(N\\u00d7P)$.\\n2. Gradient computation: The backward pass to compute both $\\\\nabla f_1(\\\\theta_t)$ and $\\\\nabla f_2(\\\\theta_t)$ requires two backward passes, each with a complexity of $O(N\\u00d7P)$, so this step also has a total complexity of $O(N\\u00d7P)$.\\n3. First moment update: Adam updates the first moment estimate $m_t$ based on the combined gradient $g_t$, which requires $O(P)$ operations.\\n4. Second moment update: Adam also updates the second moment estimate $v_t$ based on the combined gradient $g_t$, requiring $O(P)$ operations.\\n5. Dual problem solution (Line 8): Solving the dual problem in Line 8 involves vector operations such as dot products and norms, which have a complexity of $O(P)$.\\n6. Parameter update: The model parameters are updated based on the Adam update rule, requiring $O(P)$ operations.\\n\\nIn terms of computational complexity, although our method incurs slightly greater computational costs compared to other baselines, the overall computational complexity remains $O(N\\u00d7P)$. Regarding memory usage, due to model-level gradient operations, we employ a strategy of trading time for space in the practical implementation to address this issue. 
Through our experiments, we have verified that our method can also be effectively applied to larger models, such as diffusion models, with computational efficiency that is entirely acceptable.\\n\\n**3. Weaknesses:** Confusing evaluation.\\n\\n**Response:** Your feedback is indeed meaningful. We adhere to the settings from prior work [1], which may place more emphasis on erasing sample-specific knowledge, potentially being too stringent in the context of distribution or concept unlearning. We concur with your point that future research on unlearning in generative models should be defined on data manifolds, as this aligns more closely with real-world needs.\\n\\n**4. Weaknesses:** Minor typos.\\n\\n**Response:** We apologize for the imperfect writing. We have carefully reviewed the paper and fixed identified typos.\"}", "{\"title\": \"Comment response\", \"comment\": \"Thank you for your response on the robustness and in-painting objective; it would be great to see this referred to in the manuscript too. I would like to maintain my rating.\"}", "{\"comment\": \"**1. Weaknesses:** Why is the $\\\\varepsilon$-constrained method required to be integrated with the I2I unlearning problem? Please provide the specific design of $\\\\varepsilon$-constrained optimization for the I2I unlearning problem.\\n\\n**Response:** We apologize for the lack of clarity in our previous statements. We first explain why we apply $\\\\varepsilon$-constrained optimization to I2I unlearning, based primarily on the following three reasons:\\n\\n- Initially, the original unlearning problem is defined as a bi-objective optimization problem (i.e., Eq.3), where the unlearning objective and the objective of preserving model performance are considered equally important. However, this is not the case in practical applications. In real-world scenarios, we often prioritize the unlearning objective, or the unlearning objective is often a hard constraint, such as regulations imposed by governments. 
Therefore, we aim to satisfy the unlearning constraints first, before improving model performance.\\n\\n- In the real world, the requirements for unlearning standards vary among different individuals or institutions. We aim to develop a method that allows precise control over the degree of unlearning.\\n\\n- Existing methods often require tedious parameter tuning during implementation. We aim to avoid this. In our approach, simply adjusting the form of optimization allows us to determine the boundaries for the hyperparameter $\\\\varepsilon$, thereby clarifying the effective range of hyperparameter values.\\n\\nBased on the reasons above, introducing $\\\\varepsilon$-constrained optimization into I2I unlearning effectively addresses these objectives. Furthermore, theoretically, $\\\\varepsilon$-constrained optimization is equivalent to the original bi-objective optimization problem.\\n\\n**2. Weaknesses:** Some claims are not evaluated, such as line 70.\\n\\n**Response:** We sincerely apologize for any confusion caused by our inadequate expression. In line 70, we initially used \\\"solitary resolution\\\" to convey the meaning of \\\"fixed result.\\\" To avoid misunderstanding, we have replaced it with \\\"fixed result\\\".\\n\\n**3. Weaknesses:** Evaluation of different crop sizes should be conducted.\\n\\n**Response:** Due to space constraints, we report this experiment in Appendix G.1 and G.3.\\n\\n**4. Questions:** Why are the results of Composite Loss different from those reported in [1]?\\n\\n**Response:** We replicate their experiments on our server, utilizing the hyperparameters recommended in the original paper [1], such as learning rate and optimizer parameters. To facilitate a fair comparison, for other experimental details, we adopt the same setup for all compared methods. \\n\\nAs for the differences from the results in [1], we deem that they are due to certain experimental settings that we could not fully align with theirs. 
Specifically, due to limitations in experimental conditions, some parameters like batch size, epochs, and multi-GPU training setups may not exactly match theirs. \\nAdditionally, certain experimental configurations, such as the random seed, could not be aligned with [1] as they have not been disclosed. Furthermore, we do not apply any data augmentation techniques during the experiments, and correspondingly, our method does not utilize such operations either. For more details, we have reported the experimental settings in Appendix C.\\n\\n**5. Questions:** Why is the visualization of the retained set of MAE changed after unlearning? This is quite different from [1].\\n\\n**Response:** The issue you pointed out does indeed exist. When performing the unlearning operation, [1] updates the encoder's parameters through the L2 loss between encoders, whereas our method achieves updates via the L2 loss of the outputs (i.e., freezing the decoder and updating the encoder). This pixel-level loss results in greater fluctuations in the loss for models with weaker generative capabilities, impacting the encoder. Compared to VQ-GAN and Diffusion, MAE has significantly weaker generative capabilities, which is why this difference arises.\\n\\n**6. Questions:** Can you provide experimental results to demonstrate that the proposed method enjoys better unlearning efficacy than other methods?\\n\\n**Response:** In Table 1 of the main text, we compare the performance of our method with other methods under the highest degree of unlearning completeness, ensuring that the hyperparameters for the other methods are consistent with those in [1]. Due to space constraints, we have reported the remaining experimental results in Appendix E. Extensive experimental results validate the effectiveness of our method, demonstrating that we surpass existing baselines in terms of both unlearning efficacy and the maintenance of model performance.\\n\\n\\n[1] Li, Guihong, et al. 
\\\"Machine unlearning for image-to-image generative models.\\\" arXiv preprint arXiv:2402.00351 (2024).\"}", "{\"comment\": \"We sincerely thank all the reviewers for their valuable comments and suggestions, which are crucial for improving our work. We hope our response addresses your concerns.\"}", "{\"summary\": \"In this work, the authors study the problem of machine unlearning (MU) in image-to-image (I2I) generative models. Unlike prior studies, this approach diverges from a single objective to better consider the tradeoff between unlearning completeness and model utility, offering more flexibility for varying user needs. Specifically, the authors first reformulate the bi-objective MU problem into a constrained optimization problem and then propose a gradient-based algorithm to find Pareto optimal solutions. The proposed algorithm comes with a theoretical guarantee for convergence. Additionally, empirical results show that the proposed method provides a good balance between the two objectives, performing competitively among baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed algorithm is well-motivated and comes with a theoretical guarantee for convergence, laying a solid theoretical foundation for application.\\n2. The authors identify an overlooked issue in MU for I2I generative models in previous works: the failure to cater to varying user expectations in the real world, $i.e.,$ lack of controllability. Based on this observation, they derive a novel solution to this new bi-objective problem, which has practical significance for improving I2I generative models.\\n3. The empirical findings align with the theoretical results, demonstrating that the solutions found by the proposed algorithm achieve good performance in terms of both objectives, $i.e.,$ unlearning completeness and model utility.\", \"weaknesses\": \"1. The definition of unlearning completeness in the paper is problematic. 
The paper uses the KL divergence between distributions of forget data and reconstructed data to evaluate the completeness of unlearning, ultimately approximating it with the L2 loss. However, both losses are not ideal criteria for assessing unlearning performance, as they are defined in pixel space and disregard the original image manifold. Generative models can exploit this by outputting inconsistent pixel values when operating on the forget set, leading to suboptimal unlearning results. This is evident in the artifacts in reconstructed image examples from the forget set in Appendix F, $e.g.,$ in the inpainting task.\\n2. The proposed algorithm doubles memory usage, as it requires storing two separate model gradients. It also involves model-level gradient operations (as described in Algorithm 1, Line 8), making it more complex than other baselines. This can be less practical for larger models. A detailed computational complexity comparison and a discussion on memory usage would be helpful.\\n3. Confusing evaluation. In Table 1, the Inception Score (IS) appears in both columns for the forget set and retain set. It is unclear why IS should be \\\"the less the better\\\" (if that is the meaning of the down-arrow) for the forget set. If unlearning completeness is linked to low-quality generation, then the objective becomes trivial\\u2014a simple classifier to detect forget data would suffice. Echoing my previous point in W1, the generative model should at least produce a natural or similar image, even if the input is out-of-distribution. This requirement is completely overlooked here.\\n4. Minor typos. 
For example, in Equation 1, $I_\\\\theta=D_\\\\phi(E_\\\\gamma(\\\\mathcal{T}(x)))$ should be $I_\\\\theta=D_\\\\phi(E_\\\\gamma(x))$ to be consistent with the rest of the text.\", \"questions\": \"Is the proposed method applicable to text-guided I2I generative models, such as image editing models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**5. Questions:** Is the proposed method applicable to text-guided I2I generative models, such as image editing models?\\n\\n**Response:** We deem this is feasible. In the current research on unlearning in image generation models, we categorize it into two types based on the unlearning target: one is unlearning a fixed set, viewed as sample-level unlearning, aimed at eliminating the influence of the forgotten set on the model. This scenario is common in I2I generation models and is the focus of this paper. The other is unlearning an abstract concept, typically seen in text-to-image diffusion models, where the goal is to prevent the model from generating images containing the concept after unlearning.\\n\\nFor the latter, concept unlearning, existing research often defines concepts as corresponding text, such as \\\"Van Gogh\\\" representing the concept of Van Gogh's painting style, \\\"Monet\\\" for Monet's style, and \\\"Elon Musk\\\" for the entity concept of Musk. They achieve concept unlearning by defining concepts as text and ensuring the model cannot output images containing the concept given the textual condition. We believe that based on this definition, our method can be easily extended. For instance, preventing outputs that include the concept information under given conditions can be viewed as objective 1, while maintaining the quality of images under other conditions as objective 2. 
Our framework can precisely control the degree of concept unlearning.\\n\\n**Additional Discussion.** However, we contend that the existing definition may not fully align with our ideal unlearning target, reflected in two aspects: \\n\\n- First, defining concepts with text is not always suitable. Concepts are relative for humans, and textual definitions inherently carry ambiguity, as different textual expressions may correspond to the same concept. If it is necessary to define the concept of being forgotten using text, we believe it may require semantic alignment in the text encoder of the text-to-image generation model.\\n\\n- Second, much of the existing research on concept unlearning focuses on unlearning the mapping from text to image \\u2014 that is, ensuring that the model cannot output images containing the forgotten concept given specific text. However, what we actually need is for the model to be unable to output images containing the forgotten concept, regardless of the text provided. In other words, we want the forgotten model to lack the capability to generate the concept under any given conditions.\\n\\nThe core of these issues lies in the definition of concepts. We argue that concepts should be defined as a data manifold, a distribution, or a subspace, as pointed out by Wang et al. in [2]. Concept unlearning should first identify the concept to be forgotten and then excise it from the original knowledge. For example, if a concept is defined as a distribution $p_1$, and the original model learns the data distribution $p_0$, then concept erasure in a text-to-image model should first determine distribution $p_1$, then determine the distribution $p_0'$ after removing $p_1$ from $p_0$, and finally solve a distributional shift problem from $p_0$ to $p_0'$ to achieve concept unlearning. Under this definition, concept unlearning in text-to-image models might not be as straightforward for our method to extend. 
However, based on our understanding, flow-based models like the Wasserstein gradient flow could be a potential approach to address this issue, marking a possible future exploration direction.\\n\\nConsidering more complex scenarios, as pointed out in [3], if there are connections between concepts such as a direct pathway in the model\\u2019s weight units that can activate the forgotten concept, or activation through a combination of several other concepts, then merely unlearning the direct pathway to this concept may not be sufficient. We also need to remove the connections from other concepts to the forgotten concept. We believe this represents a more complex class of problems, involving the interpretability of the generative models.\\n\\n[1] Li, Guihong, et al. \\\"Machine unlearning for image-to-image generative models.\\\" arXiv preprint arXiv:2402.00351 (2024).\\n\\n[2] Wang, Peng, et al. \\\"Diffusion models learn low-dimensional distributions via subspace clustering.\\\" arXiv preprint arXiv:2409.02426 (2024).\\n\\n[3] Shumailov, Ilia, et al. \\\"Ununlearning: Unlearning is not sufficient for content regulation in advanced generative ai.\\\" arXiv preprint arXiv:2407.00106 (2024).\"}" ] }
9NfHbWKqMF
SplatFormer: Point Transformer for Robust 3D Gaussian Splatting
[ "Yutong Chen", "Marko Mihajlovic", "Xiyi Chen", "Yiming Wang", "Sergey Prokudin", "Siyu Tang" ]
3D Gaussian Splatting (3DGS) has recently transformed photorealistic reconstruction, achieving high visual fidelity and real-time performance. However, rendering quality significantly deteriorates when test views deviate from the camera angles used during training, posing a major challenge for applications in immersive free-viewpoint rendering and navigation. In this work, we conduct a comprehensive evaluation of 3DGS and related novel view synthesis methods under out-of-distribution (OOD) test camera scenarios. By creating diverse test cases with synthetic and real-world datasets, we demonstrate that most existing methods, including those incorporating various regularization techniques and data-driven priors, struggle to generalize effectively to OOD views. To address this limitation, we introduce SplatFormer, the first point transformer model specifically designed to operate on Gaussian splats. SplatFormer takes as input an initial 3DGS set optimized under limited training views and refines it in a single forward pass, effectively removing potential artifacts in OOD test views. To our knowledge, this is the first successful application of point transformers directly on 3DGS sets, surpassing the limitations of previous multi-scene training methods, which could handle only a restricted number of input views during inference. Our model significantly improves rendering quality under extreme novel views, achieving state-of-the-art performance in these challenging scenarios and outperforming various 3DGS regularization techniques, multi-scene models tailored for sparse view synthesis, and diffusion-based frameworks. The project url is https://sergeyprokudin.github.io/splatformer.
[ "Novel View Synthesis", "Gaussian Splatting", "Point cloud modeling" ]
Accept (Spotlight)
https://openreview.net/pdf?id=9NfHbWKqMF
https://openreview.net/forum?id=9NfHbWKqMF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rgSjUM2qaj", "raqhrX5yt9", "qwjicKCw8u", "qvsowSaCVL", "oPvy6o78oZ", "m3Js3hCNX7", "kIAye8IK3r", "iM90RyqmPI", "gOMeYyfdL5", "f4I5xcBluJ", "cgainzbB8Q", "cfSc2hj0Xm", "aDyfR3ncf7", "ZTFE7DmItp", "ZNZnEpryQS", "X2x8dJW4KU", "M501jXd6s2", "KHW3HSoOTk", "HuTmIm27vg", "FMFNQUXwP2", "FED4lhpknc", "EuuZYdyUUx", "EdPfb8oVqG", "CDHCWvwZ2o", "BkSZ6TIwGN", "BBSAufVW5T", "AanvYxTkS0", "58OGr5FEmU", "0mcCqZDuo9" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732549399117, 1733176442414, 1732628536539, 1734399983117, 1732549751218, 1730038474068, 1732913670952, 1732550624504, 1730277049287, 1732549954429, 1732607877050, 1732550525933, 1732752907855, 1732689529498, 1732550405815, 1732602804650, 1732549102708, 1737523664632, 1732555058200, 1732550189327, 1730687272451, 1732550236886, 1732550104977, 1732628721298, 1732628692427, 1732549543480, 1729337797523, 1732550646637, 1733038337563 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Area_Chair_xf5h" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_MWLJ" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_MA5d" ], [ 
"ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_MA5d" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "~Yiping_Ji1" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_K5mf" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_K5mf" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_MWLJ" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_K5mf" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_aVEM" ], [ "ICLR.cc/2025/Conference/Submission4834/Authors" ], [ "ICLR.cc/2025/Conference/Submission4834/Reviewer_K5mf" ] ], "structured_content_str": [ "{\"title\": \"Weakness 1/3. Generative Diffusion Priors for OOD-NVS (part-1)\", \"comment\": \"We thank the reviewer for their valuable feedback and for recognizing several positive aspects of our submission, including addressing a significant challenge in neural rendering and proposing an efficient and well-founded approach using Point Transformer 3DGS refinement.\\n\\nBelow, we address the main discussion points raised by the reviewer: (W1) the utilization of generative priors for out-of-distribution novel view synthesis tasks, (W2) the method's capability for scene-level reconstruction and handling unbounded scenes, (W3) the reasons behind the underperformance of single-image-to-3D models, and (Q1) the request for video comparisons.\\n\\n### **Weakness 1/3. 
Generative Diffusion Priors for OOD-NVS (part-1)**\\n\\nGenerative models, particularly diffusion models, hold significant potential for improving novel view synthesis. We provide a discussion of these methods in the original submission (lines 184-186) and perform a comprehensive quantitative evaluation in the OOD-NVS setup against representative baselines, including SyncDreamer [1], SSDNeRF [12] (Table 1, Figure F.2, Figure F.3), and DiffBIR [13] (Table 3, Figure C.1). The selected methods encompass diverse approaches to integrating diffusion-based priors, including single-image-to-3D techniques leveraging 2D diffusion priors (SyncDreamer), methods utilizing diffusion-based 3D priors (SSDNeRF), and pipelines focused on diffusion-based image enhancement (DiffBIR).\", \"our_analysis_highlights_the_following_limitations_of_these_methods_within_the_considered_ood_nvs_scenario\": [\"**Erroneous Hallucination under OOD Views**: Generative models may hallucinate content in OOD views that is absent from the input views. In real-world scenarios requiring accurate reconstruction, such as surgical scene reconstruction, this behavior presents significant challenges to the models' applicability.\", \"**Limited Capacity for Handling Multiple Input Views**: Most models are restricted in the number of input views they can effectively process and fail to leverage the dense image sets that are often available in the considered OOD-NVS scenario.\", \"**Multi-view Inconsistency and Flickering Artifacts**: These methods struggle to maintain temporal and spatial coherence across generated views.\", \"To further validate our findings, we evaluated additional state-of-the-art open-source diffusion-based methods and extended the comparison to include some previously discussed approaches. The evaluated methods are as follows:\", \"SyncDreamer (ICLR 2024) [1]: A single-image-to-3D method leveraging Stable Diffusion [6]. 
Please note that the method was already evaluated in the original submission.\", \"SV3D (ECCV 2024) [2]: A single-image-to-3D method utilizing Stable Video Diffusion [7].\", \"EscherNet (CVPR 2024) [3]: To the best of our knowledge, the only open-source diffusion-based method capable of processing more than 10 input views.\", \"For SyncDreamer and SV3D, we use the first frame of the input view set as the input condition. For EscherNet, we use all of the 32 input views as input conditions.\", \"For a fair comparison with our model, we fine-tuned the released checkpoints of SyncDreamer and EscherNet using the same Objaverse-OOD training set and computational resources employed for SplatFormer. Fine-tuning was necessary for SyncDreamer, as the released model is limited to generating its predefined 16 novel views, and it also enhanced EscherNet\\u2019s performance on our OOD test set. Since the concurrent work SV3D does not provide its training script, we tested its released checkpoint in a zero-shot manner.\", \"To address the multi-view inconsistency in the per-frame generated results of these diffusion models, we also attempted to optimize 3DGS using the output of the diffusion models. Specifically, we employed the fine-tuned EscherNet and the released SV3D to generate 9 OOD views. These generated novel views, combined with 32 ground-truth input views, were used to train a 3DGS. We then evaluated the OOD renderings produced by the distilled 3DGS, denoting these experiments as EscherNet$\\\\rightarrow$3DGS and SV3D$\\\\rightarrow$3DGS. Notably, these experiments also directly address a question raised by Reviewer MA5d (Question 1/3: Pseudo-OOD Views for 3DGS training).\", \"Please refer to the following sections for the experimental results.\"]}", "{\"comment\": \"Thank you for your thoughtful suggestion to include \\u201cobject\\u201d in the title. 
We appreciate your effort to ensure our work is clearly and effectively presented.\\n\\nWe carefully considered your feedback but decided to retain the current title, SplatFormer: Point Transformer for Robust 3D Gaussian Splatting. This choice reflects the broader scope of our contributions, which extend beyond object-centric applications to include experiments on unbounded scenes. Adding \\u201cobject\\u201d to the title might inadvertently narrow the perceived applicability of our method.\\n\\nHowever, we recognize the importance of emphasizing object-centric aspects of our work. To address this, we will clarify these contributions explicitly in the abstract and introduction. These sections will highlight how our method enhances object representations in 3D Gaussian splatting, ensuring this focus is evident to readers.\\n\\nWe believe this approach balances specificity with the broad applicability of our method while addressing your concern. We appreciate your insight and remain open to further suggestions to improve the clarity and impact of our submission.\\nThank you again for your valuable feedback.\"}", "{\"title\": \"Discussion about 'Hallucinating unseen views'\", \"comment\": \"We sincerely thank the reviewer for their thoughtful response and valuable insights. We fully agree with the reviewer\\u2019s emphasis on the benefits of generative priors, as well as the significance of object-level surface reconstruction and few-shot scene-wise reconstruction. To address these points in greater detail, we are pleased to provide the following discussion.\\n\\n### **1. Hallucinating unseen views**\\nWe agree that diffusion-based methods excel at inpainting or inventing unseen content\\u2014an impressive capability not easily matched by transformer-based feed-forward models. This feature is particularly valuable for creative applications such as AI-assisted design and 3D asset creation. 
We firmly believe that any work aimed at AI-driven creative tasks should benefit significantly from leveraging diffusion-based models [6,7].\\n\\nHowever, beyond AI creation, there exist real-world scenarios where users can provide a relatively large number of input views to disambiguate scene information, and novel view synthesis (NVS) techniques must prioritize accurate and faithful renderings of the true scene without introducing hallucinated content. For example, in surgical digitalization [20], surgeons require precise and accurate visualizations of incisions; even minor hallucination errors by the system could lead to severe, potentially fatal consequences. Similarly, in a traffic monitoring system that visualizes a bird\\u2019s-eye view of the city street, hallucinating non-existent pedestrians or vehicles could result in misinformation and critical decision-making errors.\\n\\nWe have conducted extensive comparisons with diffusion-based methods in both our original work and the rebuttal, exploring various strategies to further improve their performance in the OOD-NVS setup. These efforts included fine-tuning the methods using our curated dataset and addressing inconsistencies by distilling a 3D geometry structure (3DGS). Our method consistently outperforms these approaches in all variations, both quantitatively and qualitatively:\\n\\n| Results on GSO-OOD | PSNR | SSIM | LPIPS | \\n|---|---|---|---|\\n| SyncDreamer [1] (finetune) | 11.86 | 0.518 | 0.451 |\\n| SV3D [2] (0-shot) | 10.93 | 0.498 | 0.455 |\\n| SV3D$\\\\rightarrow$3DGS | 14.19 | 0.562 | 0.405 |\\n| EscherNet [3] (0-shot) | 13.74 | 0.585 | 0.367|\\n| EscherNet (finetune) | 16.57 | 0.633 | 0.273 |\\n| EscherNet$\\\\rightarrow$3DGS | 18.88 | 0.701 | 0.258 |\\n| 3DGS [8] | 21.78 | 0.746 | 0.250 |\\n| Ours | **25.01** | **0.863** | **0.148** |\\n\\nWe share the reviewer\\u2019s interest in diffusion models and fully acknowledge their exceptional ability to generate photorealistic results. 
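As a side note on the metrics: the PSNR values in the tables above follow the standard definition over images normalized to [0, 1]. A minimal numpy sketch (an illustration only, not the exact evaluation code behind these tables):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio (in dB) between a rendering and ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return float(10.0 * np.log10((data_range ** 2) / mse))

# Sanity check: a uniform error of 0.1 on a [0, 1] image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
gt = np.random.rand(64, 64, 3) * 0.9  # keep gt + 0.1 inside [0, 1]
pred = gt + 0.1
# psnr(pred, gt) → 20.0 dB (up to floating-point rounding)
```

SSIM and LPIPS are perceptual metrics and are typically computed with standard implementations (e.g., scikit-image and the official LPIPS package).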
However, as no existing work effectively addresses the hallucination issue of these models in our target scenarios, we aim to tackle this challenge in future research.\"}", "{\"metareview\": \"This paper receives unanimous positive ratings of 6,8,8,8. The AC follows the recommendations of the reviewers to accept the paper. The reviewers comment that the method introduced by the paper is novel and the task is an important direction for rendering unseen test views which addressed a significant gap in the current research.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers asked for additional experiments and some further clarifications, which the authors managed to address well in the rebuttal and discussion phases.\"}", "{\"title\": \"Weakness 2/3, 3/3 and Question on the video\", \"comment\": \"### **Weakness 2/3. Scene-level Refinement, Unbounded Scenes**:\\n\\nRecognizing the importance of addressing OOD-NVS in complex scenes, we discussed SplatFormer\\u2019s potential for real-world unbounded scenarios in Section F: Limitations and Future Directions (**Lines 1162-1185**) of the originally submitted Appendix. To evaluate its performance, we applied our proposed framework to the MVImgNet dataset [5]. On the test set, our method outperforms 3DGS:\\n\\n| Method |PSNR |SSIM|LPIPS|\\n|---|---|---|---|\\n|3DGS [8] |19.81|0.728|0.432|\\n|SplatFormer|**21.68**|**0.757**|**0.424**|\\n\\nWe present the visual comparison in Figure F.1 and Figure G.2, showing that SplatFormer reduces floater artifacts and improves geometry in many cases. However, we acknowledge its limitations in enhancing high-frequency details, likely due to the current network's insufficient capacity for processing large-scale, irregularly distributed point clouds. Future improvements could involve designing a novel multi-scale hierarchical point transformer architecture to handle large scenes and incorporating real-world training data alongside synthetic data. 
Additionally, since SplatFormer demonstrates strong generalization on real-world objects (Table 2, Figure 5, and Figure F.5), it may be feasible to decompose the scene and process individual objects separately.\\n\\nUnbounded scene reconstruction from limited observations remains an open challenge, with a majority of existing prior-enhanced NVS methods [1,2,3,4] also focusing on object-centric scenes. In contrast, SplatFormer not only excels in such settings but also shows promising potential for unbounded scenes. Improving the training strategy and network architecture will be our focus in future work.\\n\\n### **Weakness 3/3. The Underperformance of Sparse-view Reconstruction Methods**:\\n\\nIn our earlier response (Weakness 1), we addressed state-of-the-art open-source diffusion-based single-image-to-3D models, SyncDreamer [1] and SV3D [2]. Evidence shows that relying on a single input view introduces significant ambiguity, leading the model to hallucinate novel views that are misaligned with the input capture.\\n\\nTo further extend the discussion, we compare our approach to LaRa [4], a state-of-the-art method that predicts 2D Gaussian splats from four input views. While LaRa achieves impressive results in its original paper, we demonstrate that it struggles to produce high-quality OOD views.\\n\\nIn the original submission, we evaluated a fine-tuned version of the LaRa framework on the Objaverse-OOD and ShapeNet-OOD datasets. Here, we extend this evaluation by testing two LaRa models on the GSO-OOD test set: the released checkpoint model and the model fine-tuned on our Objaverse-OOD dataset. 
To highlight LaRa's degradation in OOD views, we report metrics in both in-distribution views (elevation<=10$\\\\degree$) and out-of-distribution views (elevation>=70$\\\\degree$).\\n\\n| Result on GSO | In-distribution Views | Out-of-distribution Views |\\n|---|---|---|\\n| | PSNR/SSIM/LPIPS| PSNR/SSIM/LPIPS |\\n| LaRa [4] (0-Shot$^\\\\dagger$) | 24.39/0.870 /0.158 | 17.91/0.677/ 0.339|\\n| LaRa [4] (Finetune) |25.22/0.880/0.152 | 19.87/0.721/0.310|\\n| Ours |**30.55/0.961/0.057** | **25.01/0.863/0.148**|\\n\\n$\\\\dagger$: LaRa reports a PSNR of 29.15 on the GSO test set used in their paper. However, we observe that their test images primarily feature empty backgrounds, which inflates the PSNR score. In contrast, our test set includes extensive foreground regions, presenting a more challenging evaluation scenario. \\n\\nWe include a visual comparison between the fine-tuned LaRa and ours in Figure G.1, and the two supplementary videos (_compare_with_major-baselines.mp4_, _compare_with_diffusion-sparse-baselines.mp4_), demonstrating that LaRa's outputs suffer from noticeable blurriness and performance degradation in OOD views.\\n\\nThe results of SyncDreamer [1], SV3D [2], and LaRa [4] indicate that existing single/sparse-to-3D methods struggle to produce accurate OOD views due to their inability to utilize more than four input views to resolve ambiguity. In contrast, our method is agnostic to the number of input views, as it operates directly on the initial 3DGS set, which can be generated from an arbitrary number of views.\\n\\n### **Question 1. Video Comparisons**:\\nWe direct the reviewer to the original supplementary materials, which include qualitative video comparisons (file name: compare_with_major-baselines.mp4) of our method against the major baselines. 
\\n\\nTo enhance the discussion and facilitate a more comprehensive visual assessment, we provide another video (file name: _compare_with_diffusion-sparse-baselines.mp4_) comparing our method against the diffusion-based and sparse-view baselines (Weakness 1/3 and Weakness 3/3).\"}", "{\"summary\": \"This paper presents SplatFormer, a novel zero-shot model for 3DGS refinement trained on large datasets, in order to enhance the synthesized appearance robustness observed from OOD views. The OOD-NVS problem it aims to solve is valuable. Extensive experiments show it achieves SOTA performance on various object-centric datasets.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The presented new problem OOD-NVS is of great value.\", \"Experiments are extensive, which can well validate the performance of the proposed method.\", \"SplatFormer achieves SOTA performance on various object-centric datasets in the OOD-NVS task compared to current related methods.\"], \"weaknesses\": [\"Although some experiments using real-world datasets are conducted, all involved datasets are still mainly object-centric. It remains unclear whether this learning-based method can be applied to real-world and non-object-centric scenes with more complex foregrounds and backgrounds. The corresponding data are much more difficult to collect than object-centric data, and also more difficult to process and use in training.\", \"Lack of geometry results. Although there are many comparisons in appearance, another important question is how much the refinement can benefit the reconstructed geometry.
However, no results such as depth or surface normal maps are shown.\"], \"questions\": [\"Would like to see some discussion and exploration for non-object-centric scenes.\", \"Would like to see more comparisons on geometry, like surface normal and depth.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to thank the reviewer for their valuable suggestions.\\n\\nIn addition to the depth visualization page in our manuscript (Figure G.3), we have provided an additional normal visualization [here](https://1drv.ms/b/s!AsMtUYcbeDptbHOwORrU6XquDsE?e=8dosYR). We also measured the mean absolute error (MAE) between the ground-truth depth and normal maps and the corresponding rendered depth and normal maps, under out-of-distribution (OOD) views, for 3DGS [8] and our method, as below.\\n\\n| Results on Objaverse-OOD | Depth-MAE (x1e-4)$\\\\downarrow$ | Normal-MAE$\\\\downarrow$ |\\n|---|---|---|\\n|3DGS [8] |6.70|0.239|\\n|SplatFormer|**4.05**|**0.214**|\\n\\nAs pointed out in several relevant geometry-enhanced 3DGS papers, extracting high-quality surfaces or meshes from 3DGS is intrinsically challenging due to its unstructured, explicit, and discontinuous point-based nature (SuGaR [18], GoF [19], PGSR [20]) and multiview inconsistency (2DGS [17]). Though constrained by the inherent limitations of 3DGS in exporting high-fidelity surfaces, our method still improves the accuracy of the rendered depth and normal maps from 3DGS both quantitatively and qualitatively. Although accurate surface reconstruction is not the primary focus of this paper, we plan to integrate 2DGS or other geometrically accurate 3DGS representations into our framework in future work. This integration will enable us to evaluate more detailed geometric results on benchmark datasets, such as DTU.\\n\\nThis work focuses on improving out-of-distribution views in object-centric scenes.
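As a concrete reference for the geometry metrics above, the finite-difference normal computation and the MAE can be sketched as follows (a simplified heightfield illustration with hypothetical helper names, not the exact 2DGS [17] implementation, which differences back-projected 3D points):

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Per-pixel unit normals from a depth map via finite differences.

    Simplified heightfield version: depth is treated as a surface z(x, y)
    and the (unnormalized) normal is (-dz/dx, -dz/dy, 1).
    """
    dz_dy, dz_dx = np.gradient(depth)  # finite differences along rows, cols
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def mae(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean absolute error between two maps of the same shape."""
    return float(np.mean(np.abs(pred - gt)))

# A planar depth ramp has one constant analytic normal everywhere.
h, w = 32, 32
depth_ramp = np.tile(np.linspace(0.0, 1.0, w), (h, 1))  # constant slope along x
normals = normals_from_depth(depth_ramp)
```

In the actual evaluation, the depth maps entering this conversion are first rendered from the Gaussians as the alpha-weighted average depth of the primitives.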
While we acknowledge the limitations of our approach in unbounded scenes, preliminary experiments on MVImgnet (Figure F.1, Table F.1, and Figure G.2) demonstrate the potential of our framework for scene-wise reconstruction. An enhanced network architecture, along with a larger training dataset incorporating real-world scenes, could improve the model's generalization to more diverse scenes and capture setups. However, further improvements for unbounded scenes are outside the scope of this work, and including extensive experiments on this topic would not conform with ICLR's discussion rules and guidelines. We look forward to exploring this in future work.\\n\\nWe sincerely thank the reviewer for their feedback and appreciate the insightful discussions.\\n\\n[17] Huang, Binbin, et al. \\\"2d gaussian splatting for geometrically accurate radiance fields.\\\" ACM SIGGRAPH 2024 Conference Papers. 2024.\\n\\n[18] Gu\\u00e9don, Antoine, and Vincent Lepetit. \\\"Sugar: Surface-aligned gaussian splatting for efficient 3d mesh reconstruction and high-quality mesh rendering.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[19] Yu, Zehao, Torsten Sattler, and Andreas Geiger. \\\"Gaussian opacity fields: Efficient and compact surface reconstruction in unbounded scenes.\\\" arXiv preprint arXiv:2404.10772 (2024).\\n\\n[20] Chen, Danpeng, et al. \\\"PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction.\\\" arXiv preprint arXiv:2406.06521 (2024).\"}", "{\"title\": \"Question. Varying input and test views (part 1)\", \"comment\": \"We thank the reviewer for recognizing the strengths of our submission, including the motivation behind the target task, the novelty and soundness of adapting a 3D point transformer for 3DGS refinement, the comprehensive experimental results, and the clarity of the paper. 
Below, we address the reviewer\\u2019s question concerning the performance of SplatFormer when trained and evaluated with varying input and out-of-distribution test views.\\n\\n## **Question. Varying input and test views**\\nFirst, while SplatFormer is trained with supervision only from top-down out-of-distribution (OOD) views with elevations near $90^\\\\circ$ and the input views with elevations $<=15^\\\\circ$, it can refine the input 3DGS during inference to enhance renderings from a wide range of viewpoints across the entire upper hemisphere. In our original submission, Fig. 2 shows that SplatFormer consistently outperforms 3DGS at elevation angles between $20^\\\\circ$ and $90^\\\\circ$. Additionally, SplatFormer improves renderings at varying distances. To demonstrate this, we reduce the camera-to-origin distance by a specified factor and render zoomed-in, close-up views. Below, we report the PSNRs for different elevation angles and camera radii ($R$) on the GSO-OOD dataset.\\n\\n| Camera Radius \\\\ |Elevation| **$20^\\\\circ$** | **$30^\\\\circ$** | **$40^\\\\circ$** | **$50^\\\\circ$** | **$60^\\\\circ$** | **$70^\\\\circ$** | **$80^\\\\circ$** | **$90^\\\\circ$** |\\n|-----------------------------------|-------------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|\\n| **R = 0.2** | 3DGS | 22.29 | 21.13 | 20.73 | 19.17 | 18.60 |18.00 | 17.46 | 16.84 |\\n| | Ours | **22.99** | **22.19** | **21.85** | **21.50** | **21.44** | **21.18** | **20.89** | **20.18** |\\n| **R = 0.4** | 3DGS | 23.28 | 22.43 | 21.12 | 19.87 | 18.99 | 18.49 | 18.13 | 17.98 |\\n| | Ours | **23.74** | **23.14** | **22.44** | **21.76** | **21.38** | **21.23** | **21.00** | **21.04** |\\n| **R = 0.6** | 3DGS | 25.13 | 23.52 | 22.14 | 21.12 | 20.33 | 19.76 | 19.40 |19.39 |\\n| | Ours | **25.33** | **24.04** | **23.19** | **22.71** | **22.43** | **22.24** | **22.14** | **22.22** |\\n| **R = 0.8** | 
3DGS | **27.85** | 25.86 | 24.07 | 22.74 | 21.89 |21.28 | 20.89 | 20.85 |\\n| |Ours | 27.28 | **25.87** | **24.85** | **24.21** | **23.89** | **23.68** | **23.57** | **23.61** |\\n| **R = 1.0**| 3DGS | **29.62** | 26.65 | 24.58 | 23.30 | 22.50 | 21.97 | 21.70 | 21.66 |\\n| | Ours | 28.97 | **27.62** | **26.54** | **25.77** | **25.34** | **25.08** | **24.93** |**25.03** |\\n\\n\\nThe results demonstrate that SplatFormer\\u2019s refinement improves not only the top-down OOD views used during training but also renderings across a wide range of elevations and camera positions. This is achievable as long as the input captures provide full angular coverage along one axis (e.g., azimuth) while having limited angular coverage along another axis (e.g., elevation).\\n\\nNotably, when $R \\\\leq 0.6$ and Elevation $\\\\leq 30^\\\\circ$, the test views and input views share similar elevations but differ in their distances to the object. This corresponds to a case mentioned by the reviewer, and in such instances, our trained SplatFormer consistently enhances 3DGS performance.\"}", "{\"summary\": \"Although existing 3D representations such as 3DGS or Nerf can achieve novel view synthesis, their rendering performances on OOD views with relatively large elevations are relatively limited. This may come from the large differences between training and evaluation OOD views. In this work, the authors propose a framework, named SplatFormer, using transformer to refine the optimized 3DGS for better performances under OOD views. Benefited from the training under both normal and OOD views, SplatFormer can indeed improve the rendering under OOD views.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The core contribution of this work to refine 3D Gaussians with a genelizable transformer is meaningful;\\n2. The authors construct training and evaluation sets for the claimed OOD problem, from ShapeNet and Objaverse dataset.\\n3. 
Extensive experiments with different baselines on multiple datasets confirm that the proposed method can clearly improve rendering performance under poses with large elevations.\", \"weaknesses\": \"My major concern is that some comparisons between the proposed method and the baselines may not be entirely fair. For example, the optimization of the proposed method uses 32 low-elevation views, while some methods, e.g., LaRa, take only 4 views as input. The lack of training views may naturally affect their performance. Can we apply the proposed framework to the 3D Gaussian primitives generated by LaRa directly? In this way, the performance of the proposed refinement might be evaluated more fairly.\", \"questions\": \"Besides the problem mentioned in the weakness section, I have some other questions.\\n 1. As the method is mainly proposed to address the problems of rendering under relatively large elevations, the performance limitation may come from the lack of corresponding training views. Could we just use some novel view synthesis baselines, such as Zero123 or SV3D, to generate pseudo images from such poses with large elevations, and then optimize 2DGS, 3DGS, etc. for reconstruction? Would this also improve the performance under poses with large elevations?\\n 2. What are the specific settings for the training of the Gaussian primitive transformer? Would it select input views and OOD views randomly? Do different selection strategies influence the final performance?\\n 3. How efficient is the transformer? As the density of Gaussian primitives might be quite high after optimization, wouldn't incorporating such a transformer framework incur large time and memory costs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA.\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Weakness. 
Additional Comparison with LaRa on a 4-view Setup\", \"comment\": \"We thank the reviewer for their thoughtful feedback and for recognizing the strengths of our work, including the meaningful contribution of a generalizable transformer for refining 3D Gaussian splats, the construction of datasets tailored to the OOD problem, and the extensive experiments validating the effectiveness of our method.\\n\\nIn the following, we address the identified weakness (W1) concerning comparisons with LaRa under unfavorable conditions and respond to three discussion points: (Q1) utilizing pseudo-OOD views for view synthesis, (Q2) the training setup and OOD view selection, and (Q3) the analysis of SplatFormer\\u2019s resource efficiency.\\n\\nThe reference list for the cited papers is provided in our general response.\\n\\n### **Weakness. Additional Comparison with LaRa on a 4-View Setup**\\n\\nWe appreciate the reviewer\\u2019s comment and suggestion. Since LaRa is computationally expensive and limited to a maximum of four input views due to memory constraints on standard GPUs (e.g., RTX-4090, 24 GB), we can only provide four input views to LaRa for our OOD-NVS tasks, which involve 32 input views. However, following the reviewer\\u2019s suggestion, we are happy to perform an additional comparison under the four-view setup by applying our framework to the Gaussian splats generated by LaRa [4].\\n\\n#### **Experiment Details:**\\nWe used LaRa's released checkpoint to predict 2DGS from _four input views_ for our Objaverse training scenes. Subsequently, we trained our SplatFormer to refine LaRa\\u2019s outputs using the same procedure described in the experiments section of our paper. 
Even in this constrained four-view setup, SplatFormer achieved a noticeable improvement over LaRa\\u2019s baseline performance on the Objaverse-OOD and GSO-OOD test sets.\\n\\n| Four input views | **Objaverse-OOD** | **GSO-OOD** |\\n|---------------------|------------------------------|------------------------------|\\n| | PSNR / SSIM / LPIPS | PSNR / SSIM / LPIPS |\\n| LaRa [4] | 16.87 / 0.640 / 0.352 | 17.91 / 0.677 / 0.339 |\\n| LaRa + SplatFormer | **18.29 / 0.688 / 0.275** | **18.83 / 0.714 / 0.279** |\\n\\n#### **Discussion:**\\nIt is important to note that LaRa\\u2019s 2DGS encodes information from only four input views, which often results in suboptimal visual quality with significant blur artifacts, particularly when viewed from OOD perspectives. While SplatFormer enhances these outputs, fully reconstructing OOD views from such an extremely sparse input-view setup remains highly challenging. This limitation arises from the inherently ill-posed nature of the reconstruction task under these conditions.\\n\\nOur method, by contrast, is specifically designed for scenarios with denser input views, where more comprehensive information is available for reconstructing high-fidelity outputs. This design aligns with practical applications, such as immersive navigation or surgical visualization (L:048), where capturing dense, calibrated views is both feasible and necessary. Accordingly, in our main paper (Table 1), we focus on comparisons in the dense-input setup, emphasizing LaRa\\u2019s limitations in scaling to scenarios with dense input views.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"Thanks for the authors' detailed response. The additional experiments have effectively addressed my concerns, particularly the fairer comparisons with LaRA, which confirm that the proposed method provides valuable refinements for existing GS-based generation approaches. 
While the application on object-level data remains somewhat limited, considering the extensive experiments and well-founded motivation, I will raise my score.\"}", "{\"title\": \"Weakness 2/2. Geometry results\", \"comment\": \"We thank the reviewer for raising this valuable question. We will include a geometry comparison experiment in our revised manuscript, as presented in Figure G.3 of the updated appendix.\\n\\nTo quantitatively evaluate performance, we measure the mean absolute error (MAE) between the ground-truth depth and normal maps, and the corresponding rendered depth and normal maps, under out-of-distribution (OOD) views for 3DGS [8] and our method. Specifically, the depth maps for 3DGS and our method are rendered as the weighted average depth of Gaussian primitives, which is a common way to derive depth maps from 3DGS as used in the gsplat toolbox [9] and other 3DGS-related work [10, 16]. We then follow 2DGS [17] to compute the normal maps using finite differences on the estimated surface derived from the depth maps.\\n\\nThe results show that, in addition to improving rendering quality, our method enhances the accuracy of rendered depth and normal maps:\\n\\n| Results on Objaverse-OOD | Depth-MAE (x1e-4) | Normal-MAE |\\n|---|---|---|\\n|3DGS [8] |6.70|0.239|\\n|SplatFormer|**4.05**|**0.214**|\\n\\nWe also provide a visualization of the rendered depth map in Figure G.3 of the updated appendix. In our revised manuscript, we will include more comprehensive qualitative and quantitative results on geometry.\\n\\nAdditionally, since 2DGS [17] produces more regularized depth and normal maps than 3DGS, our method could potentially further improve geometry results when applied to 2DGS refinement, as discussed in Lines 1128\\u20131133 of the Appendix.\"}", "{\"title\": \"Question about speed\", \"comment\": \"Hi Authors,\\nThanks for this great work and I am quite interested in it. 
I was just wondering if there is extra cost in inference speed when using SplatFormer compared with other methods.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for the authors' comprehensive response, which addresses many of the concerns raised. I appreciate that the issues identified by various reviewers may not overlap completely. I recommend incorporating **more** visualizations of geometry (such as depth or meshes) for object-level reconstruction using 3D-GS.\\n\\nAdditionally, testing the rendering quality on out-of-distribution scenes from different datasets, like the **MipNeRF360** or **Tanks and Temples** datasets, would greatly enhance our understanding of its generalization capabilities.\"}", "{\"title\": \"Weakness 1/2. Applicability to Non-Object-Centric Scenes\", \"comment\": \"We thank the reviewer for their thoughtful feedback and for recognizing the strengths of our work, including the importance of addressing the OOD-NVS problem, the extensive experiments validating our approach, and the state-of-the-art performance achieved on various object-centric datasets.\\n\\nBelow, we address the reviewer\\u2019s discussion points: (W1) **Non-Object-Centric Scenes**: Exploring the applicability of our method to real-world, non-object-centric scenes with more complex foregrounds and backgrounds; (W2) **Geometry Comparisons**: Providing additional comparisons on reconstructed geometry, including metrics such as depth and surface normals. We hope these responses address the reviewer\\u2019s questions and further clarify the contributions of our work.\\n\\n## **Weakness 1/2. Applicability to Non-Object-Centric Scenes**\\n\\nWe appreciate the reviewer\\u2019s recognition of our experiments on real-world objects.
We believe that strong performance in object-centric scenes can contribute to addressing complex scenes, particularly if a compositional approach is used to decompose the foreground and background, allowing SplatFormer to process individual objects separately.\\n\\nAdditionally, we would like to direct the reviewer\\u2019s attention to Section F, \\\"Limitations and Future Directions\\\" (**Lines 1162-1185**) of the Appendix, where we discussed SplatFormer\\u2019s potential for real-world unbounded scenes. Specifically, we conducted an experiment using the MVImgNet [5] dataset, a large-scale collection of multi-view images. For each capture in MVImgNet, we split the views into frontal views and side views, treating one as input views and the other as OOD test views. We trained SplatFormer on a curated MVImgNet-OOD dataset and tested it on OOD views of held-out scenes. More experimental details are provided in Appendix F. For reference, we include the test results from Table F.1 below:\\n\\n| Method |PSNR |SSIM|LPIPS|\\n|---|---|---|---|\\n|3DGS [8] |19.81|0.728|0.432|\\n|SplatFormer|**21.68**|**0.757**|**0.424**|\\n\\nWe provided visual results in Figure F.1 and added additional results in Figure G.2 of the appendix in our updated manuscript, demonstrating that SplatFormer reduces floater artifacts and improves geometry in certain cases. However, we acknowledge its limitations in enhancing high-frequency details, which may be due to the need for a larger-capacity point transformer or a novel multi-scale hierarchical design to handle large-scale, irregularly spaced point clouds.\\n\\nRegarding the challenge of acquiring real-world datasets, we speculate that incorporating a diverse range of synthetic datasets could reduce reliance on massive-scale real-world datasets. 
Notably, training our model exclusively with synthetic data has already demonstrated promising results on real-world test scenes (Table 2, Figure 5, and Figure F.5).\\n\\nFinally, unbounded scene reconstruction from limited observations remains an open challenge, with most existing prior-enhanced NVS methods [1, 2, 3, 4] primarily focused on object-centric scenes. In contrast, SplatFormer not only excels in these settings but also demonstrates promising potential for unbounded scenes. Enhancing the network architecture and incorporating more synthetic and real-world training data will be key areas of focus in our future work.\"}", "{\"title\": \"thanks for the responses\", \"comment\": \"Thank you for the detailed responses from the authors. The topic of out-of-distribution (OOD) synthesis is undoubtedly important and has many applications. Among these, object-level surface reconstruction for few-shot asset creation and novel view synthesis of large-scale scenes are particularly critical.\\n\\nI understand that diffusion model-based methods might produce inconsistencies when generating new images. However, these methods can generate views completely unseen during training, a limitation often encountered with transformer-based feed-forward methods.\\n\\nAdditionally, the reviewer strongly recommends that the authors focus on object-level surface reconstruction (e.g., using 2D-GS on DTU datasets) and scene-wise reconstruction for novel view synthesis under a few-shot setting (e.g., employing both generalizable and feed-forward neural renderers on MipNeRF360 datasets).\\n\\nIncorporating these discussions and experiments will significantly enhance the strength of the claims and contributions within the paper.\"}", "{\"title\": \"Summary of the authors' responses\", \"comment\": \"We sincerely thank all reviewers for their valuable comments and constructive suggestions. 
To facilitate discussion among the reviewers and the area chair, we have summarized the reviewers' feedback in the table below.\\n\\n|Strengths|R-K5mf|R-MA5d|R-MWLJ| R-aVEM|\\n|----------|---|---|---|---|\\n|Addresses a significant research gap | &#10004; | &#10004; | &#10004; | &#10004; |\\n|Proposes a novel and sound method | &#10004; | | | &#10004; |\\n|Construct novel OOD datasets | | &#10004; | | |\\n|Provides extensive experiments | | &#10004; | &#10004; | &#10004; |\\n|Achieves superior performance| &#10004; | &#10004; | &#10004; | &#10004; |\\n|The paper is well-written| | | | &#10004; |\\n||||||\\n|**Weaknesses** | | | | |\\n|No diffusion priors | \\u2718| | | |\\n|Unknown potential in unbounded scenes |\\u2718 | | \\u2718| |\\n|Insufficient analysis on sparse-view baselines | \\u2718 | \\u2718 | | |\\n|Lack of geometry results| | | \\u2718 | |\\n||||||\\n|**Rating**| **5**|**6**|**6**|**8**|\\n\\n\\nWe appreciate the reviewers' acknowledgment that our work addresses a significant research gap and their recognition of several strengths, including a novel dataset, extensive experiments, and a technically sound, high-performance method.\\n\\nTo address the reviewers' comments on limitations, we provided additional results and clarifications as follows:\\n1. **Comparison with Diffusion-based and Sparse-view Methods** (R-K5mf, R-MA5d): In addition to the existing comparisons with SyncDreamer [1], SSDNeRF [12], DiffBIR [13], and LaRa [4] in our original manuscript, we have included more numerical and visual comparisons with state-of-the-art open-source diffusion-based methods [1, 2, 3] and the sparse-view baseline LaRa [4]. We have also analyzed their limitations in the OOD-NVS setup. \\n2. 
**Potential in Unbounded Scenes** (R-K5mf, R-MWLJ): Building on the discussion in the original manuscript, supported by experimental results on MVImgNet [5] (Appendix F), we have expanded it with additional details, considerations for future improvements, and qualitative comparisons.\\n3. **Geometry Evaluation** (R-MWLJ): We have included qualitative and quantitative evaluations of depth and normal errors.\\n\\nFurthermore, we addressed additional questions raised by the reviewers:\\n1. **Video Comparison** (R-K5mf): We directed the reviewer to the video included in our originally submitted supplementary material (_compare_with_major-baselines.mp4_), which provides visual comparison between our method and representative baselines. Additionally, we included a new video (compare_with_diffusion-sparse-baselines.mp4) comparing our method to [1, 2, 3, 4], highlighting the limitations of diffusion-based and sparse-view baselines in terms of hallucination errors and 3D inconsistencies.\\n2. **Computational Efficiency** (R-MA5d): We evaluated the inference time and memory usage of our model to demonstrate its efficiency. \\n3. **Training Setup** (R-MA5d): We clarified the ratio of OOD views used during SplatFormer's training and included an ablation study on its effect. \\n4. **OOD Views Different from Training** (R-aVEM): We demonstrated that our method consistently improves novel test views across diverse viewing angles and distances and discussed its performance under varying input-view trajectories. 
\\n\\n### **Additional Updates**:\\n* Additional Image Comparisons: These have been added to Appendix G in the updated manuscript, including:\\n\\t* **Figure G.1**: Comparisons with diffusion-based and sparse-view baselines.\\n\\t* **Figure G.2**: Additional results in unbounded scenes.\\n\\t* **Figure G.3**: Geometry comparisons.\\n* Video Comparisons: Two videos are now available in the supplementary material:\\n\\t* **_compare_with_major-baselines.mp4_**: Originally submitted as part of our supplementary material, this video demonstrates comparisons between our method and major baselines.\\n\\t* **_compare_with_diffusion-sparse-baselines.mp4_**: Newly added to address the questions raised by R1-K5mf and R2-MA5d, this video compares our method with diffusion-based and sparse-view baselines.\\n\\nWe sincerely thank all reviewers and the area chair for their time, patience, and thoughtful feedback.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"comment\": \"Thanks for the detailed responses from the authors. My concerns have been well solved. The promoted OOD problem is worthy of further research, and the method's generalizability is impressive, showing promising results in real-world data. I'd like to raise my rating to 8, despite some challenges still existing for its application to more real-world applications.\"}", "{\"title\": \"Question 2/3. SplatFormer's Training Setup\", \"comment\": \"We thank the reviewer for raising this insightful question. 
In the revised manuscript, we will include an ablation study to analyze the impact of the ratio of OOD views to input views during SplatFormer\u2019s training on the final model performance.\n\nAs noted in Lines 860\u2013861 of Appendix D in our manuscript, our training setup involves rendering four target images per scene, with a default setting of 70% OOD views and 30% input views.\n\n#### **Experiment Details**\nTo investigate the effect of varying the OOD view ratio, we conducted an ablation study. Specifically, we trained four SplatFormers on the Objaverse-OOD training dataset, using different ratios of OOD views and input views for the photometric loss. We then evaluated the models' performance on various test views. Due to computational constraints, this study was performed using a fraction of the computational resources allocated for our full model (4x RTX-4090 GPUs with 200k steps and the gradient accumulation step set to 2).\", \"metrics_were_evaluated_on_objaverse_ood_test_scenes_across_three_elevation_ranges\": \"**low** (20\u00b0\u201330\u00b0), **mid** (40\u00b0\u201360\u00b0), and **high** (70\u00b0\u201390\u00b0).\n\n| | **Low (20\u00b0\u201330\u00b0)** | **Mid (40\u00b0\u201360\u00b0)** | **High (70\u00b0\u201390\u00b0)** |\n|----------------------|-------------------------|-------------------------|-------------------------|\n| | **PSNR / SSIM / LPIPS**| **PSNR / SSIM / LPIPS**| **PSNR / SSIM / LPIPS** |\n| **90% OOD** | 25.07 / 0.883 / 0.119 | 23.04 / 0.826 / **0.160** | **22.85 / 0.818 / 0.172** |\n| **70% OOD (default)**| 25.61 / 0.892 / 0.111 | 23.00 / **0.832** / 0.161 | 22.60 / 0.810 / 0.179 |\n| **50% OOD** | 25.99 / 0.896 / 0.108 | **23.09** / 0.827 / 0.165 | 22.53 / 0.803 / 0.187 |\n| **30% OOD** | **26.22 / 0.897 / 0.108** | 22.82 / 0.816 / 0.177 | 21.88 / 0.784 / 0.207 |\n| **3DGS** | 25.80 / 0.873 / 0.132 | 20.92 / 0.736 / 0.237 |
19.24 / 0.673 / 0.285 |\n\n\n\n#### **Discussion**\nOur results indicate that increasing the ratio of OOD views during training improves performance at extreme viewpoints (e.g., high elevations), though it slightly compromises rendering fidelity at lower elevations. We selected the default 70% OOD ratio as it provides a balance between enhancing OOD views and preserving high fidelity for in-distribution views.\n\nWe plan to complete this study using the full computational budget and include it in the revised manuscript to provide a more comprehensive analysis of the impact of OOD and input view ratios.\"}", "{\"summary\": \"The paper introduces SplatFormer, a point transformer model for refining 3D Gaussian Splatting (3DGS) representations under out-of-distribution (OOD) view conditions (with initialized Gaussian Splats). This is motivated by the fact that 3DGS struggles with quality degradation when test views differ significantly from training views. SplatFormer addresses this by learning to refine Gaussian splats, leveraging attention mechanisms to maintain consistency across viewpoints and remove artifacts in OOD scenarios, using the collected large-scale object-centric data. 
The approach outperforms prior methods in robustness on several test datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper introduces a novel and important direction for rendering at unseen, highly relevant test views, addressing a significant gap in current 3D rendering research.\", \"By employing point transformers for aggregating Gaussian splats, the method offers a sound and efficient approach to achieve improved detail and visual fidelity.\"], \"weaknesses\": [\"The paper does not explore the potential of utilizing generative priors for OOD-NVS, particularly by introducing diffusion models to assist in hallucinating unseen views, which could enhance performance in novel view synthesis in a more reasonable way.\", \"The study is primarily focused on object-centric cases, despite the availability of scene-level 3D datasets (scannet, scannet++, blendedmvs, megascene, megadepth, mvsimgnet). Expanding the scope to scene-wise data could provide a broader basis for extrapolation and robustness in more complex environments.\", \"For object-centric cases, single-image-to-3D methods may suffice for preserving geometric consistency and hallucinating texture details. It is unclear why some introduced baselines, including generalizable GS and sparse-view GS, underperform in these scenarios relative to expectations.\"], \"questions\": \"Please refer to the questions in the weaknesses section concerning the problem-solving approach and dataset scope. The reviewer strongly suggests that the authors include a video comparison, as novel view synthesis is highly dependent on visual assessment.\\n\\n\\n\\n--------------------\\nThank you for the detailed explanation and for addressing my concerns. 
After reviewing the comments from other reviewers and considering your explanation regarding the broader scope of your work, I agree that the merit of addressing OOD challenges in neural rendering using a large-scale model is valuable.\\n\\nI appreciate the clarification and the balance you\\u2019ve struck in presenting the scope of your contributions, the promised clarification in the future abstract and introduction. As a result, I will raise my score to support the acceptance of your submission.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Question 3/3. The Resource Efficiency of SplatFormer\", \"comment\": \"We thank the reviewer for their insightful question regarding the efficiency of SplatFormer. In the final manuscript, we will highlight SplatFormer's resource efficiency and include a detailed analysis and comparison to emphasize its scalability and practicality.\\n\\n#### **Memory and Inference Time Analysis**\\nTo assess SplatFormer's efficiency, we measured its memory usage and inference time across varying numbers of Gaussian splats. The results are summarized as follows:\\n| **Number of Splats:** | 10k-40k | 60k-70k | 90k-100k | _Average_ |\\n|---------------------|-------------|-------------|--------------|-------------|\\n| Memory Usage (MB): | 675 | 801 | 1010 | 782 |\\n| Inference Time (ms): | 62 | 70 | 118 | 75 |\\n\\nIn the GSO test set, the average number of input splats is 50k, with 80% of inputs containing fewer than 70k splats - an amount sufficient to capture high-frequency details. 
These results demonstrate that SplatFormer scales efficiently with increasing numbers of Gaussian splats, with memory usage and inference time remaining within practical bounds.\\n\\n#### **Comparison with Other NVS Methods**\\nTo provide additional context, we benchmarked SplatFormer's performance against other state-of-the-art NVS methods, including the diffusion-based EscherNet [3] and LaRa [4], which specifically highlights efficiency as a key advantage. The results are as follows:\\n\\n| **Method** | **Encoder Type** | **Output** | **Params (MB)** | **Inference time (ms)** | **Memory Usage (MB)** |\\n|-----------------|-----------------------|------------------------|------------------|-------------------|------------------------|\\n| **EscherNet** [3] | SVD [2] | Single img | 971 | 1402 | 2178 |\\n| **LaRa** [4] | DINO [14] | GS splats | 125 | 157 | 1508 |\\n| **Ours** | 3D Point Transformer [15] | GS splats | **48** | **75** | **782** |\\n\\n\\nWe measured LaRa\\u2019s inference time and memory usage for generating 2D Gaussian splats (2DGS) from four input views, and EscherNet\\u2019s for producing a single output view using the same four input views. \\n\\nNotably, another advantage of our method is that it generates 3DGS splats in a fast single pass, enabling real-time rendering into any view. This is significantly more efficient than approaches like SV3D [2] and EscherNet [3], which rely on 2D diffusion generative models and require multiple time-consuming diffusion steps to produce only a single frame.\\n\\n\\nThe results demonstrate that SplatFormer achieves significantly better inference time and memory efficiency while utilizing a more lightweight architecture, making it a more scalable model. \\n\\nWe will include a discussion on efficiency in the revised manuscript.\"}", "{\"title\": \"Question 1/3. Pseudo-OOD Views for 3DGS training\", \"comment\": \"We appreciate the reviewer\\u2019s insightful suggestion. 
While leveraging pseudo-OOD views from diffusion-based novel view synthesis (NVS) baselines is an interesting approach, we observed that such views often lack plausibility and introduce artifacts. Incorporating these erroneous pseudo views into the 3DGS training pipeline can actually degrade the final reconstruction quality compared to 3DGS [8].\\n\\n#### **Experiment Details:**\\n\\nTo illustrate this, we conducted experiments using state-of-the-art open-source diffusion-based NVS models: SV3D [2] and EscherNet [3]. As detailed in our response to Reviewer K5mf, EscherNet outperformed other baselines, including SyncDreamer [1] and SV3D [2], due to its ability to process multiple input views, whereas other diffusion-based methods are limited to a single input view. By combining the pseudo-OOD views generated by SV3D or EscherNet with the ground truth input views, we trained 3DGS and obtained the following results:\\n\\n| **Method** | **PSNR** | **SSIM** | **LPIPS** |\\n|---------------------|----------|----------|-----------|\\n| SV3D [2] (0-shot) | 10.93 | 0.498 | 0.455 |\\n| SV3D [2] (0-shot+Distill) | 14.19 | 0.562 | 0.405 |\\n| EscherNet [3] (Finetune) | 16.57 | 0.633 | 0.273 |\\n| EscherNet [3] (Finetune+Distill)| 18.88 | 0.701 | 0.258 |\\n| 3DGS | 21.78 | 0.746 | 0.250 |\\n| Ours | **25.01** | **0.863** | **0.148** |\\n\\nA visual comparison is included in Figure G.1 and the second uploaded video in the supplementary material (file name: _compare_with_diffusion-sparse-baselines.mp4_).\\n\\n#### **Discussion:**\\nOur results indicate that pseudo-OOD views generated by diffusion-based methods are often suboptimal, introducing errors that propagate through the training pipeline and degrade performance. 
This is reflected in the lower PSNR, SSIM, and LPIPS values for models trained with pseudo views compared to our method.\\n\\nIn contrast, our approach directly refines 3DGS representations using a point transformer, enabling robust generalization to challenging OOD poses without relying on pseudo views. This avoids the pitfalls of hallucinated artifacts and ensures that reconstructions maintain high fidelity.\\n\\nFurther discussion on the limitations of sparse-view and diffusion-based NVS methods is provided in our response to Reviewer K5mf (\\\"Weakness 1: Generative Diffusion Priors for OOD-NVS\\\").\"}", "{\"comment\": \"In summary, we greatly appreciate the reviewer\\u2019s suggestions and feedback. We have conducted extensive experiments to highlight the limitations of diffusion-based models, demonstrate improvements in geometry, and showcase our method's potential in unbounded scenes. In future work, we plan to further explore these directions and incorporate additional recommendations from the reviewer that extend beyond the scope of the current submission.\\n\\nReferences\\n\\n[18] Chen, Yuedong, et al. \\\"Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images.\\\" European Conference on Computer Vision. Springer, Cham, 2025.\\n\\n[19] Charatan, David, et al. \\\"pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[20] Hein, Jonas, et al. \\\"Creating a Digital Twin of Spinal Surgery: A Proof of Concept.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\"}", "{\"title\": \"Discussion about 'object-level surface reconstruction' and 'few-shot scene-wise reconstruction'\", \"comment\": \"### **2. Object-level surface reconstruction**\\nWe agree that object-level surface reconstruction is a crucial task. 
Although our method is not specifically designed for surface reconstruction and extraction, as its primary focus is novel view synthesis, it is nonetheless capable of improving and refining the geometry of the input 3DGS set. To demonstrate this, we evaluate the mean absolute error (MAE) between the ground-truth depth and normal maps and their corresponding rendered depth and normal maps under out-of-distribution (OOD) views for 3DGS [8] and our method.\\nSpecifically, the depth maps for 3DGS and our method are rendered as the weighted average depth of Gaussian primitives, a standard approach for deriving depth maps from 3DGS, as implemented in the gsplat toolbox [9] and other 3DGS-related works [10,16]. We then compute the normal maps following 2DGS [17], using finite differences on the estimated surface derived from the depth maps.\\nThe results demonstrate that, in addition to enhancing rendering quality, our method significantly improves the accuracy of the rendered depth and normal maps:\\n\\n| Results on Objaverse-OOD | Depth-MAE (x1e-4) | Normal-MAE |\\n|---|---|---|\\n|3DGS [8] |6.70|0.239|\\n|SplatFormer|**4.05**|**0.214**|\\n\\nWe also include a visualization of the rendered depth map in Figure G.3 of the updated appendix.\\n\\nWe acknowledge that there is still considerable room for improvement in surface reconstruction, particularly due to our use of 3DGS as the basis for 3D representation. Since 2DGS [17] produces more regularized depth and normal maps than 3DGS, our method could potentially yield better geometry results when applied to 2DGS refinement, as discussed in Lines 1128\\u20131133 of the Appendix. 
Additionally, in our response to reviewer MA5d (Weakness: Additional Comparison with LaRa on a 4-view Setup), we trained a SplatFormer to refine LaRa\\u2019s 2DGS predictions using four input views, achieving noticeable improvements on the two OOD test sets:\\n\\n\\n| Four input views | **Objaverse-OOD** | **GSO-OOD** |\\n|---------------------|------------------------------|------------------------------|\\n| | PSNR / SSIM / LPIPS | PSNR / SSIM / LPIPS |\\n| LaRa | 16.87 / 0.640 / 0.352 | 17.91 / 0.677 / 0.339 |\\n| LaRa + SplatFormer | **18.29 / 0.688 / 0.275** | **18.83 / 0.714 / 0.279** |\\n\\nThis result provides evidence that our proposed network and framework can also be integrated to refine 2DGS representations. While the focus of this paper is not on accurate surface reconstruction, we plan to incorporate 2DGS into our method and evaluate more detailed geometry results on related datasets, such as DTU, in _future work_.\\n\\n### **3. Few-shot Scene-wise Reconstruction**\\n\\nWe sincerely thank the reviewer for emphasizing the importance of few-shot scene-wise reconstruction. As discussed earlier, while we agree that few-shot settings are a critical problem, our focus is on a different but equally meaningful setup. In our approach, we assume that a relatively dense capture is available, albeit from a limited range of viewing angles, and a novel view from extreme angles is required.\\n\\nTo experiment with this setup in complex scenes, we explore SplatFormer\\u2019s potential for real-world unbounded scenarios, as discussed in Section F: Limitations and Future Directions (Lines 1162\\u20131185) of the originally submitted Appendix. To evaluate its performance, we applied our proposed framework to the MVImgNet dataset [5]. 
On the test set, our method outperforms 3DGS:\\n\\n| Method |PSNR |SSIM|LPIPS|\\n|---|---|---|---|\\n|3DGS [8] |19.81|0.728|0.432|\\n|SplatFormer|**21.68**|**0.757**|**0.424**|\\n\\nWe present the visual comparison in Figure F.1 and Figure G.2, which demonstrate that SplatFormer reduces floater artifacts and improves geometry in many cases. Future improvements could involve designing a novel multi-scale hierarchical point transformer architecture to handle larger scenes, as well as incorporating real-world training data alongside synthetic data. Additionally, since SplatFormer shows strong generalization on real-world objects (Table 2, Figure 5, and Figure F.5), it may be feasible to decompose the scene and process individual objects separately.\\n\\nUnbounded scene reconstruction from limited observations remains a challenging problem. Many prior-enhanced NVS methods [1,2,3,4] also focus on object-centric scenes. Methods like MVSplat [18] and PixelSplat [19], which use generalizable feed-forward neural renderers on MipNeRF360 datasets, focus on interpolating novel views between two input views.\\n\\nWhile our method excels in the object-centric settings discussed in this paper, it also shows promising potential for unbounded scenes. Improving the training strategy and network architecture will be a key focus in our future work.\"}", "{\"title\": \"Weakness 1/3. 
Generative Diffusion Priors for OOD-NVS (part-2)\", \"comment\": \"We report metrics on the **GSO-OOD** dataset:\\n| Results on GSO-OOD | PSNR | SSIM | LPIPS | \\n|---|---|---|---|\\n| SyncDreamer [1] (finetune) | 11.86 | 0.518 | 0.451 |\\n| SV3D [2] (0-shot) | 10.93 | 0.498 | 0.455 |\\n| SV3D$\\\\rightarrow$3DGS | 14.19 | 0.562 | 0.405 |\\n| EscherNet [3] (0-shot) | 13.74 | 0.585 | 0.367|\\n| EscherNet (finetune) | 16.57 | 0.633 | 0.273 |\\n| EscherNet$\\\\rightarrow$3DGS | 18.88 | 0.701 | 0.258 |\\n| 3DGS [8] | 21.78 | 0.746 | 0.250 |\\n| Ours | **25.01** | **0.863** | **0.148** |\\n\\n\\nWe have included a video comparison in our updated supplementary material (file name: _compare_with_diffusion-sparse-baselines.mp4_). We encourage the reviewer to watch the video, as it visually demonstrates the limitations of diffusion-based methods.\\n#### **The following observations can be made from the presented quantitative and qualitative results:**\\n* Although diffusion-based baselines generally produce visually plausible OOD views, they often hallucinate scene elements that are inconsistent with the input views. This issue also affects EscherNet, even when provided with all 32 input views. For example, when processing Adidas (c) shoes with the iconic three parallel stripes, EscherNet mistakenly hallucinates a fourth stripe in its OOD view.\\n* Likewise, the hallucination error propagates into the distilled 3DGS, making it incapable of addressing the artifacts in OOD views.\\n* These issues are also reflected in the quantitative results of all the baselines, which show a PSNR more than 6 points lower than our method (EscherNet$\\\\rightarrow$3DGS: 18.88 vs. Ours: 25.01).\\n\\nIn contrast, our method addresses the limitations of diffusion-based NVS through holistic reasoning over a set of 3DGS primitives. 
Another key advantage of our approach lies in its **computational efficiency**: the input splats are refined in a single pass through the 3D point transformer, eliminating the need for the computationally expensive denoising process required to generate each image in diffusion-based approaches.\n\nNonetheless, we recognize the impressive capability of diffusion-based methods to generate complex, photorealistic scenes with minimal input. Exploring ways to combine their strengths with the 3D consistency, efficiency, and robustness of our method offers a compelling direction for future research.\"}", "{\"summary\": \"This paper proposes a method for enhancing 3DGS performance on out-of-distribution views. The paper leverages a transformer-based point cloud backbone to encode and refine per-scene optimized 3DGS, and supervises on both interpolated views and out-of-distribution views. The empirical results show that this method leads to significantly improved quality on OOD views.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The task of enhancing OOD view quality is important and well motivated.\n2. The idea of training a point cloud backbone to learn priors of OOD views for 3DGS refinement is novel and promising.\n3. The experiments are extensive and the results are attractive.\n4. The paper is well-written.\", \"weaknesses\": \"I did not find obvious weaknesses of this paper.\", \"questions\": \"1. It seems that this method is focused on object-centric scenes with specific camera trajectories (mainly differences in elevation). In both the training datasets and the testing datasets, the input views and OOD views are captured similarly. I'm curious about the results when the training views and OOD views are not captured similarly to the training data. 
For example, if the training views are high-elevation and testing views are low-elevation, or if the training and testing views are with similar elevation but are distant.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Question. Varying input and test views (part 2)\", \"comment\": \"However, we acknowledge a limitation of SplatFormer\\u2019s current simplified training procedure, which focuses on a specific input view distribution. It struggles to enhance quality when faced with significantly different input view distributions from those used to optimize the input 3DGS. For example, if only top-down, high-elevation views (Elevation $\\\\geq 50^\\\\circ$) are used for training 3DGS, our current SplatFormer model, trained on the setup with low-elevation views as input views and top-down views as target views, provides limited improvement in quality for low-elevation views (Elevation $\\\\leq 10^\\\\circ$):\\n\\n| Input view $>=50\\\\degree$, Test View$<=10\\\\degree$ | PSNR |SSIM|LPIPS|\\n|---|---|---|---|\\n|3DGS|18.01|0.697|0.297|\\n|SplatFormer|**18.22**|**0.712**|**0.283**|\\n\\nWe attribute this limitation to two factors. First, high-elevation input views capture only a small portion of the scene, leaving much of the lower regions unobserved - areas typically covered by low-elevation views. Since SplatFormer does not learn to generate (or 'hallucinate') completely unseen parts of the scene, it cannot correct artifacts in these unobserved regions. Second, variations in the distribution of input views produce different types of 3DGS artifacts, some of which differ significantly from those encountered during training. 
We believe this limitation can be addressed by creating a more diverse OOD synthetic training dataset with a wider range of input view trajectories.\\n\\nDespite this limitation, the main contribution of our work lies in introducing the important problem of OOD-NVS and proposing a novel architecture and learning paradigm to address it. In future work, we plan to expand and diversify both the training dataset and the applications of SplatFormer, enhancing its robustness to arbitrary shifts in train-test camera distributions and its ability to handle unbounded, large-scale scene processing.\"}", "{\"title\": \"Response from reviewer\", \"comment\": \"Thank you for your detailed response. I appreciate the efforts to enhance the depth and normal visualizations and to discuss the challenges related to surface quality in 3DGS systems.\\n\\nRegarding the focus of your work, I believe that including **\\\"Object\\\"** in the title and abstract would more accurately reflect the current scope of the work, particularly as it relates to object-centric scenes. If you are willing to make this adjustment, I would be more inclined to **raise my score** for your submission, as it would better align the paper's content with its **title and abstract**, enhancing clarity for readers.\"}" ] }
9MNzHTSDgh
CP-Guard+: A New Paradigm for Malicious Agent Detection and Defense in Collaborative Perception
[ "Senkang Hu", "Yihang Tao", "Zihan Fang", "Guowen Xu", "Yiqin Deng", "Sam Kwong", "Yuguang Fang" ]
Collaborative perception (CP) is a promising method for safe connected and autonomous driving, which enables multiple connected and autonomous vehicles (CAVs) to share sensing information with each other to enhance perception performance. For example, occluded objects can be detected, and the sensing range can be extended. However, compared with single-agent perception, the openness of a CP system makes it more vulnerable to malicious agents and attackers, who can inject malicious information to mislead the perception of an ego CAV, resulting in severe risks for the safety of autonomous driving systems. To mitigate the vulnerability of CP systems, we first propose a new paradigm for malicious agent detection that effectively identifies malicious agents at the feature level without requiring verification of final perception results, significantly reducing computational overhead. Building on this paradigm, we introduce CP-GuardBench, the first comprehensive dataset provided to train and evaluate various malicious agent detection methods for CP systems. Furthermore, we develop a robust defense method called CP-Guard+, which enhances the margin between the representations of benign and malicious features through a carefully designed mixed contrastive training strategy. Finally, we conduct extensive experiments on both CP-GuardBench and V2X-Sim, and the results demonstrate the superiority of CP-Guard+.
[ "Collaborative perception", "security", "defense", "malicious agent detection" ]
Reject
https://openreview.net/pdf?id=9MNzHTSDgh
https://openreview.net/forum?id=9MNzHTSDgh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yjko4ug4Pz", "tbTBP5EH4t", "sXlfygpPHx", "qpqtTiWpp6", "mn4EZScXYX", "jwnrwqjxM9", "j8jKCyK74T", "iWiwtah7i8", "iMANJvr5JQ", "dhyVXJ0hC3", "RnYKHBJDO4", "RUX8OhkSAe", "P4uOZtMYTs", "JVIEZolao5", "H4DFqBwXDK", "EqWsjwFdkA", "BGD1PKVacv", "8bELDIA0bX", "6y9rZ9AaOZ", "1sqORCsyUC", "1Ks4bonkro", "0Guvh4BD6O" ], "note_type": [ "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1730600270369, 1737523392302, 1732383060979, 1730059103948, 1732370077691, 1733198830005, 1732603262112, 1730731199125, 1732603662116, 1732369971216, 1730708877996, 1732369900181, 1732502609218, 1732650895814, 1732370030698, 1732384064659, 1732369724618, 1731192645367, 1732384018602, 1732382975160, 1733062440746, 1734710964652 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission362/Reviewer_mcjh" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_RS1k" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_fZua" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_mcjh" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_fZua" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_mcjh" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_UHsS" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_RS1k" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_SymD" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Reviewer_SymD" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Authors" ], [ "ICLR.cc/2025/Conference/Submission362/Area_Chair_wBC9" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a new approach, CP-GUARD+, to detect attacks against collaborative perception (CP), mainly in autonomous driving scenarios. Their method leverages a mixed contrastive training strategy to detect attacks at the feature level and addresses the limitations of prior works in computation cost and latency. This work also constructed a new dataset, CP-GuardBench, which is the first dataset for malicious agent detection in CP systems. Their methods outperform prior work (MADE and ROBOSAC) on the V2X-Sim dataset.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper designs a valid method to detect malicious agents in CP at the feature level motivated by the limitations of prior works: computation-intensive and time-consuming. To evaluate the effectiveness of their method, this paper constructed a new dataset, CP-GuardBench. On the V2X-Sim dataset, their methods outperform the existing methods (MADE and ROBOSAC).\", \"weaknesses\": \"I have the following major concerns in this paper:\\n\\n### Threat model is not clear\\n\\nI am not fully convinced of their threat model about what kind of attacks they assume and how their defense works on them. This paper assumes a white-box threat model in which adversaries can inject malicious perturbations into their own intermediate features. However, the collaborative perception of autonomous driving has to be real-time. When does the adversary generate the attacks, e.g., C&W and PGD. As Eq. 
(1)-(6) assume that the adversary can know the intermediate features of other agents, this would also require communication time. As the CAV is the main target of this paper, this paper should provide more rigorous discussions about their threat model, i.e., how the attack and defense work.\\n\\n### No comparison with generic anomaly detection methods\\n\\nI appreciate their evaluation of MADE and ROBOSAC, which are recent CP defenses. Meanwhile, this paper should also compare their method not only with defenses specialized for CP but also with more generic anomaly detection methods. Their motivation, feature-level detection, makes sense, but this may also remove the motivation to consider CP domain-specific requirements. As the encoder can work to absorb the domain differences, we should apply generic methods for the features because these are already \\\"features\\\". For example, there are many prior anomaly detection methods leveraging contrastive learning [a, b, c]. These are just examples, but this paper should provide a more thorough discussion of why existing works are not suitable for their research or demonstrate that their method has significant advantages via quantitative evaluation.\\n\\n[a] Cho, Hyunsoo, Jinseok Seol, and Sang-goo Lee. \\\"Masked contrastive learning for anomaly detection.\\\" IJCAI 2021.\\n[b] Guille-Escuret, Charles, et al. \\\"CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n[c] Reiss, Tal, and Yedid Hoshen. \\\"Mean-shifted contrastive loss for anomaly detection.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 2. 2023.\\n\\n### No comparison with existing methods in their proposed dataset\\n\\nThis paper should provide benchmark evaluation results of existing methods on their proposed dataset, CP-GuardBench. This paper claims that the dataset creation is one of their contributions. 
In this case, this paper should show how representative prior methods perform on their dataset, alongside their own method. Otherwise, we cannot deny the possibility that the dataset could be cherry-picked. If prior work is over-performant or under-performant on their dataset, this paper should provide further cause analysis. As the dataset will be publicly used, this paper should provide official benchmark results.\\n\\n### Building upon an unpublished work \\n\\nThis paper mentions their unpublished work, CP-Guard, as a prior work. A research paper should not cite an unavailable reference. It could also potentially break review anonymity. If this paper wants to mention it, they should have published it as a preprint and cited it as a third-party paper. Based on this, I feel that this paper is not ready to be reviewed.\", \"questions\": [\"Could you elaborate more about the attack timeline (e.g., when we can get data from ego and helping CAVs, when the attack is generated, and when will it be deployed)?\", \"How representative is the CP system described in Fig. 2 in the autonomous driving domain?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Reply for Reviewer mcjh (2/2)\", \"comment\": \"**Q2: No comparison with generic anomaly detection methods.**\\n\\n**Reply:** We appreciate the reviewer's feedback regarding comparisons with generic anomaly detection methods. However, we would like to clarify that our research focus and contributions are fundamentally different. Our work primarily addresses malicious agent attacks in collaborative perception scenarios, and CP-Guard+ is the first to leverage feature-level detection in this domain. 
Our emphasis is on system-level design rather than improving specific detection techniques.\\n\\nThis is why we focused our comparisons on other CP defense systems (ROBOSAC and MADE) that share similar system-level objectives. CP-Guard+ represents a paradigm shift from output-level to feature-level detection, providing:\\n\\n- An end-to-end CP defense framework\\n- Significant reduction in computational overhead (70.36 FPS vs 56.86/20.76 FPS)\\n- Superior detection performance (10.03%-62.98% improvement in AP)\\n- A new benchmark dataset (CP-GuardBench) for CP defense evaluation\\n\\nWhile we acknowledge the value of generic anomaly detection methods, our primary contribution is advancing the system-level design of CP defense mechanisms. We will consider adding a brief comparison with generic methods in the discussion section to provide broader context.\\n\\n> The advancement in leveraging contrastive learning for anomaly detection is rapidly developing. [3] (IJCAI'21) proposed a masked contrastive learning framework to learn task-specific features, which is more suitable for malicious detection tasks. [4] (AAAI'23) further proposed a mean-shifted contrastive loss to overcome the adaptation failure of contrastive loss for malicious detection. [5] (NeurIPS'24) introduced CADet, a fully self-supervised method that is capable of detecting two types of out-of-distribution samples simultaneously. 
All the above methods are designed for general anomaly detection tasks validated on the CIFAR-10 dataset, while the application of contrastive learning for malicious detection in the field of autonomous driving collaborative perception is still in its infancy.\\n> \\n\\n**Q3: No comparison with existing methods in their proposed dataset.**\\n\\n**Reply:** The baseline models compared in our study are not trained directly on our proposed CP-GuardBench dataset because they operate on a hypothesis-and-verification framework and do not utilize intermediate feature information for malicious detection. In contrast, CP-Guard+ is a feature-level collaborative perception (CP) defense method that requires prior training. Additionally, our method is the first to adopt a feature-level malicious agent detection approach.\\n\\nTherefore, it is not feasible to compare existing methods on CP-GuardBench. However, during testing, we evaluate and compare the performance of all methods on the same test set and ensure fairness.\\n\\n**Q4: Building upon an unpublished work.**\\n\\n**Reply:** Thanks for your suggestion, we will remove the related information in the revised manuscript.\\n\\n**Q6: Attack Timeline Description.**\\n\\n**Reply:** As described in R1, the timeline of the adversarial model is as follows: at the beginning of the time slot when the attack happens, the attacker will first wait for the collaborative agent's message, and then generate the perturbation based on the received message within several iterative optimization steps. After the perturbation generation, the attacker will send the crafted message to the victim agent. 
Since we consider an intermediate feature-level collaborative perception system with a low frame rate, good channel conditions, and few perturbation iterations, both the transmission delay and computation delay are far less than the time interval of two consecutive frames, which can satisfy the real-time requirement.\\n\\n**Q7: Representativeness of CP system in Fig. 2.**\\n\\n**Reply:** Fig. 2 is a typical perception workflow in the LiDAR-based autonomous driving domain, with a feature encoder, feature decoder, and prediction head. We add an additional perturbation generation and saving process to generate the proposed CP-GuardBench dataset.\\n\\n**Reference**\\n\\n[1] Among Us: Adversarially Robust Collaborative Perception by Consensus. ICCV'23.\\n\\n[2] MADE: Malicious Agent Detection for Robust Multi-Agent Collaborative Perception. IROS'24.\\n\\n[3] Masked Contrastive Learning for Anomaly Detection. IJCAI'21.\\n\\n[4] Mean-Shifted Contrastive Loss for Anomaly Detection. AAAI'23.\\n\\n[5] CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning. NeurIPS'24.\"}", "{\"summary\": \"The paper proposed a defense method, named CP-Guard+, against the adversarial attack on intermediate-fusion collaborative perception for connected and autonomous vehicles. Unlike the prior work deploying the hypothesize-and-verify approach to identify the malicious attacker, CP-Guard+ leverages contrastive learning to project benign and malicious feature maps to feature vectors, and then uses a classification model to achieve attack detection. 
Meanwhile, the paper proposed a benchmark of the collaborative perception defense solutions, named CP-GuardBench.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is well written and easy to follow.\", \"The contrastive learning based classification outperforms prior approaches.\"], \"weaknesses\": [\"The threat model can be improved to make the attack definition clearer.\", \"The evaluation did not clarify whether one trained CP-Guard+ is universally useful for different (unknown) attacks.\", \"The benefit and functionality of CP-GuardBench is somehow unclear.\", \"It misses certain related work discussion.\"], \"questions\": \"I enjoy reading the paper. It is built on a safety-critical problem of collaborative perception, which is not yet deployed at scale but is a critical vehicle application widely discussed. The approach of contrastive learning based classification is straightforward and brings performance gains, in terms of both defense success rate and computation overhead. However, I have several questions that make me uncertain about the validity of the approach, mainly from a security aspect.\\n\\nFirst, the threat model, the adversarial attack on collaborative perception, is not clearly defined. I understand the attacker solves an optimization to degrade the victim vehicle's perception performance, but the loss function (attack objective) is a key design choice which the paper did not explain clearly. Are there different choices of the loss function? I think so because I found a security paper on collaborative perception [1] defines a different attack objective. Will the choice of loss function or attack objective affect the effectiveness of CP-Guard+? 
It is fine to focus on a certain scope of attacks but it must be clearly defined.\\n\\nSecond, which is perhaps my misunderstanding, I am unclear if all experiments are using the same one trained CP-Guard+ or different CP-Guard+ instances given different attack parameters. The latter does not sound realistic, as a good defense should be universally useful for different variants of attacks and even unknown attacks. In other words, the training details of CP-Guard+, especially the training data to use, are not well defined.\\n\\nThe paper clearly claims CP-GuardBench as one of the major contributions. However, I did not fully understand the use of CP-GuardBench. The benchmark lacks flexibility; it stores the feature space data, which will change on any modification of the perception model or attack methods. As assessing the security of a system always needs to consider adaptive attacks, such a record of fixed attack methods can hardly be useful for state-of-the-art evaluation for a long time. I appreciate the effort to put these implementations together, but I would recommend labeling this as a coding framework, not a benchmark. I did not see artifact or code repository links either. Also, what is the difference between CP-GuardBench and the V2X-Sim experiments, given CP-GuardBench is also built from V2X-Sim?\\n\\nThe contrastive learning approach sounds valid to me, but I did not see significant innovation at the algorithm level except using the existing components. The math formulas in the paper are also basic definitions of contrastive learning or classification itself. What is the technical contribution besides moving contrastive learning to a new application? At least it deserves a related work section for this.\\n\\nLastly, a frequent question for any defense paper: the adaptive attacks. What if the PGD attack uses the classification as feedback during the attack optimization? 
It could be acceptable to leave it as future work but at least there should be a discussion on such attack opportunities.\\n\\n[1] Zhang, Qingzhao, et al. \\\"On data fabrication in collaborative vehicular perception: Attacks and countermeasures.\\\" 33rd USENIX Security Symposium (USENIX Security 24). 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics problems found.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply for Reviewer UHsS (2/2)\", \"comment\": \"**W3: According to some literature, it is not clear whether the assumption \\\"the anomalous features should cluster together\\\" holds. In anomaly detection, it is always a single-class classification problem and we can assume the benign samples cluster, but it is hard to say this applies to the anomalies.**\\n\\n**Reply**: Thanks for your insightful comments. After checking some literature, we found that your view is right for anomaly detection, since it is a single-class classification problem. However, in this paper, we train the malicious agent detector with contrastive learning, which will make the benign features and malicious features more compact and make the two kinds of features easier to classify. We acknowledge it is hard to say there are anomaly clusters, but these operations will make anomaly features more compact. Anyway, we will revise the statements following your comments to make them more precise.\\n\\n**Reference**\\n\\n[1] Among Us: Adversarially Robust Collaborative Perception by Consensus. 
In 2023 IEEE/CVF International Conference on Computer Vision (ICCV\u201923).\\n\\n[2] Malicious Agent Detection for Robust Multi-Agent Collaborative Perception (IROS\u201924 Oral).\\n\\n[3] Driver Anomaly Detection: A Dataset and Contrastive Learning Approach (WACV\u201921).\\n\\n[4] Learning Representation for Anomaly Detection of Vehicle Trajectories (IROS\u201923).\"}", "{\"title\": \"Reply to the rebuttal\", \"comment\": \"The authors have provided detailed answers to my questions; the experiments and attack models already included in this work seem correct. I prefer to keep my original rating, and the main reason is similar to what other reviewers have pointed out: a relatively straightforward method needs more comprehensive analysis on different types of attacks.\"}", "{\"comment\": \"I appreciate the authors' response to my comment. Meanwhile, my concerns are not fully resolved. So, I maintain my score.\", \"q1\": \"I am still not fully convinced of the attack feasibility. The timeline should consist of two lines: the victim's perception line and the attack generation line. My question is that the perception could be finished while processing step 3, which belongs to the attack generation line. For the other works, they are not limited to the \\\"feature\\\" level attack. So, I cannot deny a possibility that a very lightweight attack generation (e.g., just putting a ghost trash can in the point cloud) may exist. However, this work should clarify more how to find effective attacks on the features, which are typically not interpretable to humans. I understand the author's argument. The attack could be possible if the attack generation can be that fast, but it was not clearly shown in this paper. This is also related to my question about the representative CP system. I wanted to see how realistic the threat model is. Can the system be secure if the perception rate is 20 FPS? 
Can we defend the attack just by discarding the message delayed by the attack generation?\"}", "{\"summary\": \"This paper proposes a new dataset CP-GuardBench as the first one for malicious agent detection in collaborative perception systems, and then it proposes CP-Guard+ as a robust malicious agent detection method. Compared with the previous hypothesize-and-verify paradigm, CP-Guard+ can detect malicious agents directly at the feature level in one pass. Experiments on CP-GuardBench and V2X-Sim demonstrate the performance of CP-Guard+.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is straightforward and easy to understand.\\n\\n2. The proposed method is compared with two state-of-the-art CP defense methods on both accuracy and time in experiments.\\n\\n3. The performance of the proposed method is good enough.\", \"weaknesses\": \"1. For each collaborative detector, CP-GUARD+ needs to train the corresponding binary classifier to detect the malicious agents, which limits its extensibility.\\n\\n2. Table 2 is hard to read for comparing the results of different methods. The results of your CP-Guard+ are not highlighted.\\n\\n3. No experimental results on a real-world dataset. V2X-Sim is data generated by a simulator. It would be better to have experiments on a real-world dataset.\", \"questions\": \"1. In Section 3.1, five attacks are used to generate the attack data. How would the performance of the trained model be if it meets other types of attacks (not these five attacks)? It is common that the attacker creates new types of attack technology to defeat the defenses. The method should be able to work well even in this situation.\\n\\n2. Can you consider the out-of-distribution attack? For example, train your binary classifier on only three or four types of attacks and then test on the rest one or two types of attacks.\\n\\n3. 
In Table 2, why are there no results on the other two types of attacks? I think this table is the most important to show your performance, and the results should be as many as possible.\\n\\n4. Are the compared baselines trained on your CP-GuardBench? The training data should be the same for fair comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q2:\\nI still do not fully understand why the generic anomaly detection methods are not applicable. As in my comments, the encoder should absorb the domain differences. If this paper aims to be a system paper, this paper needs more discussion about the target system to justify why their detection approach is reasonable, and needs more domain-specific evaluation such as traffic and driving scenario simulation. This is why I wanted to know the representative CP system.\", \"q3\": \"I still do not understand the reason why the prior methods cannot work on their dataset even though they can work on the V2X-Sim dataset. If some required information is not currently in their dataset, it should be included. Otherwise, the contribution of this dataset should be seen as quite limited since this dataset sounds like it is just dedicated to their method. Additionally, as also mentioned above, at least generic anomaly detection methods should be applicable since they can work at the feature level. As I said, a paper with a dataset contribution should have official benchmark results as long as prior methods exist in the domain.\"}", "{\"title\": \"Reply for Reviewer fZua (2/2)\", \"comment\": \"**Q1: In Section 3.1, five attacks are used to generate the attack data. How would the performance of the trained model be if it meets other types of attacks (not these five attacks)? It is common that the attacker creates new types of attack technology to defeat the defenses. 
The method should be able to work well even in this situation.**\\n\\n**Reply**: To evaluate our method's generalization ability, we conducted experiments using a leave-one-out strategy. In this approach, we iteratively excluded one type of attack from the training set, trained the model on the remaining attacks, and then tested its performance on the held-out attack type. The experimental results are presented below (we set the perturbation budget $\\\\Delta=0.5$).\\n\\n| Held-out Attack | Accuracy | TPR | FPR | Precision | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| PGD | 99.66 | 98.29 | 0.00 | 100.00 | 99.14 |\\n| BIM | 99.93 | 100.00 | 0.09 | 99.66 | 99.83 |\\n| C&W | 100.00 | 100.00 | 0.00 | 100.00 | 100.00 |\\n| FGSM | 92.32 | 62.03 | 0.00 | 100.00 | 76.57 |\\n| GN | 89.79 | 50.17 | 0.17 | 98.67 | 66.52 |\\n| Average | 96.34 | 82.09 | 0.05 | 99.67 | 88.41 |\\n\\nThe results demonstrate our method's strong generalization ability on unseen attacks. Compared to conventional training approaches (shown below), our method experiences only a marginal decrease in overall performance. Notably, it even outperforms the conventional approach in terms of False Positive Rate (FPR) and Precision metrics. These findings underscore our method's robust capability to detect and handle unseen attack patterns.\\n\\n| | Accuracy | TPR | FPR | Precision | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| Normal | 98.08 | 97.07 | 1.66 | 93.45 | 95.29 |\\n\\n**Q2: Can you consider the out-of-distribution attack? For example, train your binary classifier on only three or four types of attacks and then test on the rest one or two types of attacks.**\\n\\n**Reply**: The results are shown in the above tables, thanks for your suggestion!\\n\\n**Q3: In Table 2, why are there no results on the other two types of attacks? 
I think this table is the most important to show your performance, the results should be as many as possible.**\\n\\n**Reply**: Actually, we had the results of the other two types of attacks before. However, we found that these two attacks (FGSM and GN) are not strong attacks, so their influence on the perception performance is not as significant as that of the other attacks, reducing performance by only 2-5 percentage points. Therefore, we omitted these results. Anyway, we show the results here for your reference and will add them to the revised manuscript.\\n\\n| **Method** | **\\u2206 = 0.25, N_mal = 1 AP@0.5** | **\\u2206 = 0.25, N_mal = 1 AP@0.7** | **\\u2206 = 0.5, N_mal = 1 AP@0.5** | **\\u2206 = 0.5, N_mal = 1 AP@0.7** | **\\u2206 = 0.25, N_mal = 2 AP@0.5** | **\\u2206 = 0.25, N_mal = 2 AP@0.7** | **\\u2206 = 0.5, N_mal = 2 AP@0.5** | **\\u2206 = 0.5, N_mal = 2 AP@0.7** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| Upper-bound | 79.97 | 78.40 | 79.94 | 78.40 | 79.94 | 78.40 | 79.94 | 78.40 |\\n| No defense (FGSM Attack) | 77.99 | 76.63 | 76.50 | 75.10 | 76.18 | 74.42 | 74.44 | 73.41 |\\n| No defense (GN Attack) | 77.50 | 76.25 | 78.29 | 76.83 | 76.23 | 74.61 | 74.98 | 73.77 |\\n\\n**Q4: Are the compared baselines trained on your CP-GuardBench? The training data should be the same for fair comparison.**\\n\\n**Reply**: The baseline models compared in our study are not trained directly on our proposed CP-GuardBench dataset because they operate on a hypothesis-and-verification framework and do not utilize intermediate feature information for malicious detection. In contrast, CP-Guard+ is a feature-level collaborative perception (CP) defense method that requires prior training. 
However, the comparison remains fair, because all the methods are tested on the same test set.\\n\\n**Reference**\\n\\n[1] Among Us: Adversarially Robust Collaborative Perception by Consensus. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV\\u201923).\\n\\n[2] Malicious Agent Detection for Robust Multi-Agent Collaborative Perception (IROS\\u201924 Oral).\"}", "{\"summary\": \"The paper addresses security vulnerabilities in Collaborative Perception (CP) systems for autonomous vehicles, which share information among connected vehicles to enhance perception capabilities. This openness, however, exposes CP systems to potential attacks from malicious agents, which can inject adversarial data to disrupt perception outcomes. To mitigate these risks, the authors propose CP-Guard+, a novel feature-level malicious agent detection framework. Unlike traditional output-based verification methods, CP-Guard+ detects malicious agents by analyzing intermediate features, reducing computational overhead. The authors also introduce CP-GuardBench, the first dataset explicitly designed for training and evaluating malicious agent detection methods in CP systems. CP-Guard+ employs a mixed contrastive training strategy to increase the feature separation between benign and malicious agents, improving detection performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The work is well-motivated. It is important to have an efficient feature-level detection method for collaborative perception because it is a real-time and safety-critical task by nature. The paper is generally well-written and easy to follow.\\n\\n2. It is appreciated that the author also publishes the dataset and their methods to generate and annotate the data, which help build a standard benchmark for future works.\", \"weaknesses\": \"1. The method (CL) itself is not novel or has not made any adaptation tailored for the collaborative perception tasks. 
In previous works such as \"Driver Anomaly Detection: A Dataset and Contrastive Learning Approach\" (WACV 21) and \"Learning Representation for Anomaly Detection of Vehicle Trajectories\" (IROS 23), they both adopt CL methods to detect anomalies/adversarial attacks at the feature level. At least the author should discuss these works.\\n\\n2. In anomaly detection, it is important to make sure the methods generalize well to unseen attacks/anomalies. However, in the methodology, it is unclear whether the model is trained with multiple types of attacks together or only on simple patterns of attacks (e.g. PGD). In the evaluation, it is unclear how the intermediate feature and detector can be generalized (e.g. trained on certain attacks but tested on unseen patterns of attacks).\\n\\n3. According to some literature, it is not clear whether the assumption \"the anomalous features should cluster together\" holds. In anomaly detection, it is always a single-class classification problem and we can assume the benign samples cluster, but it is hard to say this applies to the anomalies.\", \"questions\": \"Please kindly refer to the weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply for Reviewer fZua (1/2)\", \"comment\": \"Dear Reviewer fZua,\\n\\nThanks for your valuable comments and the time you dedicated to reviewing this work. Here we carefully and elaborately reply to your concerns.\\n\\n**W1: For each collaborative detector, CP-GUARD+ needs to train the corresponding binary classifier to detect the malicious agents, which limits its extensibility.**\\n\\n**Reply**: We appreciate this thoughtful observation. Your concern about the need to train a binary classifier for each collaborative detector is valid. However, we would like to clarify several points that mitigate this limitation:\\n\\n1. 
*Practical Deployment Context*: In practical applications, the model architecture of a collaborative perception system is often fixed within the same automotive company, and collaborative perception is typically only compatible among vehicles of the same brand. Therefore, the collaborative perception system itself has inherent compatibility constraints, and our detection system aligns with this practical reality.\\n2. *Transfer Learning Capability*: The feature patterns that distinguish between benign and malicious agents share common characteristics across different detectors. The trained classifier can be easily fine-tuned for new detectors with minimal additional training data and computational cost. In addition, the one-time training cost is outweighed by the significant computational efficiency gains during inference (70.36 FPS vs 56.86/20.76 FPS for existing methods).\\n\\nTherefore, while CP-Guard+ does require specific training for different detector architectures, this aligns with the practical constraints and deployment patterns of real-world collaborative perception systems, where system-specific optimization is often more valuable than universal compatibility.\\n\\n**W2: Table 2 is hard to read for comparing the results of different methods. 
The results of your CP-Guard+ are not highlighted.**\\n\\n**Reply**: Thank you for pointing this out; we will make the table more readable in the revised version.\\n\\n| **Method** | **\\u2206 = 0.25, N_mal = 1 AP@0.5** | **\\u2206 = 0.25, N_mal = 1 AP@0.7** | **\\u2206 = 0.5, N_mal = 1 AP@0.5** | **\\u2206 = 0.5, N_mal = 1 AP@0.7** | **\\u2206 = 0.25, N_mal = 2 AP@0.5** | **\\u2206 = 0.25, N_mal = 2 AP@0.7** | **\\u2206 = 0.5, N_mal = 2 AP@0.5** | **\\u2206 = 0.5, N_mal = 2 AP@0.7** |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| **CP-Guard+**\\u00a0(against PGD attack) | 72.89 | 71.45 | 69.50 | 68.56 | 69.50 | 67.92 | 66.09 | 64.82 |\\n| **CP-Guard+**\\u00a0(against C&W attack) | 69.41 | 66.86 | 60.64 | 55.41 | 64.17 | 61.73 | 58.54 | 53.15 |\\n| **CP-Guard+**\\u00a0(against BIM attack) | 73.35 | 71.46 | 66.83 | 66.05 | 70.91 | 69.11 | 66.30 | 64.62 |\\n| **CP-Guard+**\\u00a0Average | 71.88 | 69.92 | 65.66 | 63.34 | 68.19 | 66.25 | 63.64 | 60.86 |\\n\\n**W3: No experimental results on a real-world dataset. V2X-Sim is data generated by a simulator. It would be better to have experiments on a real-world dataset.**\\n\\n**Reply**: We appreciate your concerns. Currently, the field of collaborative perception experiences a scarcity of real-world datasets, with only DAIR-V2X [1] (CVPR'22) and V2V4Real [2] (CVPR'23) providing some support. However, both datasets have their limitations in terms of scale. Specifically, the DAIR-V2X dataset includes only one vehicle and one Roadside Unit (RSU), rendering it unsuitable for multi-vehicle scenarios such as those required for our experiments. Similarly, the V2V4Real dataset, comprising only two vehicles, does not provide a sufficient basis for validating the generalization capabilities of our proposed CP-Guard system. 
Consequently, we have adhered to dataset settings similar to those used in previous studies [3] (ICCV'23), which also rely on simulated data. As the development of real-world datasets for collaborative perception is rapidly advancing, we plan to extend the validation of our proposed CP-Guard+ to real-world datasets in future work.\"}", "{\"title\": \"Response to the rebuttal.\", \"comment\": \"The authors' rebuttal answers most of my questions. Though the paper's methodology is valid, I still slightly lean toward rejection because its contribution is rather incremental. In my opinion, when the methodology is rather straightforward, the paper will be in better shape if the authors comprehensively analyze various attack objectives and adaptive attacks, to truly expose the challenges of feature-level anomaly detection.\"}", "{\"title\": \"Thanks the authors for the clarification\", \"comment\": \"The authors have explained my questions. My ratings remain the same because (1) the authors have not demonstrated how the choice of the collaborative perception system affects the defense performance. Experimenting with other collaborative systems, such as CoBEVT (by Runsheng et al.), would be a plus. (2) the form of attack needs more justification in the paper, or to be mentioned as a limitation.\"}", "{\"title\": \"Reply for Reviewer UHsS (1/2)\", \"comment\": \"Dear Reviewer UHsS,\\n\\nThanks for your valuable comments and the time you dedicated to reviewing this work. Here we carefully and elaborately reply to your concerns.\\n\\n**W1: The method (CL) itself is not novel or has not made any adaptation tailored for the collaborative perception tasks. In previous works such as \\\"Driver Anomaly Detection: A Dataset and Contrastive Learning Approach\\\" (WACV 21) and \\\"Learning Representation for Anomaly Detection of Vehicle Trajectories\\\" (IROS 23), they both adopt CL methods to detect anomalies/adversarial attacks at the feature level. 
At least the author should discuss these works.**\\n\\n**Reply**: Thanks for your comments. Here we want to reiterate the innovation points of this paper, its practical significance, and how it overcomes the shortcomings of previous collaborative perception (CP) defense methods.\\n\\n1. Firstly, our method proposes a new paradigm for CP defense, namely feature-level malicious agent detection. Traditional methods such as ROBOSAC (ICCV\\u201923) [1] and MADE (IROS\\u201924 Oral) [2] follow a hypothesize-and-verify paradigm, which needs multiple rounds of malicious agent detection iterations at the output level and requires the generation of multiple hypothetical outputs for verification, incurring high computational overhead. In contrast, our method directly outputs robust CP results with intermediate feature-level detection, significantly reducing the computational overhead. Our experiments also confirm this.\\n2. Secondly, we propose a new dataset, CP-GuardBench, the first dataset to facilitate the research of feature-level malicious agent detection in collaborative perception.\\n3. Finally, we propose CP-Guard+, a malicious agent detection method with high robustness and computational efficiency. We also conduct comprehensive experiments.\\n\\nAs for contrastive learning, it is a small technique used in our method, and indeed, it works well. Although the technique itself is not that novel, it helps make our method more robust. As for practical significance, our method can be integrated into a CP system and can detect malicious agents in real time and robustly, something traditional methods could not do.\\n\\nIn addition, as for the related works you mentioned, we will discuss these two methods and add them to the revised manuscript. Here is the discussion:\\n\\n> Kopuklu et al.
[3] propose a contrastive learning-based approach for driver anomaly detection, addressing the open set recognition problem with the Driver Anomaly Detection (DAD) dataset, which includes unseen anomalies in the test set. Similarly, Jiao et al. [4] introduce supervised and unsupervised methods for detecting anomalous vehicle trajectories using contrastive learning and semantic modeling to improve anomaly detection in autonomous driving. Our method likewise leverages contrastive learning, in our case to enhance malicious agent detection performance in collaborative perception.\\n> \\n\\n**W2: Concerns about generalization.**\\n\\n**Reply**: To evaluate our method's generalization ability, we conducted experiments using a leave-one-out strategy. In this approach, we iteratively excluded one type of attack from the training set, trained the model on the remaining attacks, and then tested its performance on the held-out attack type. The experimental results are presented below (we set the perturbation budget $\\\\Delta=0.5$).\\n\\n| Held-out Attack | Accuracy | TPR | FPR | Precision | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| PGD | 99.66 | 98.29 | 0.00 | 100.00 | 99.14 |\\n| BIM | 99.93 | 100.00 | 0.09 | 99.66 | 99.83 |\\n| C&W | 100.00 | 100.00 | 0.00 | 100.00 | 100.00 |\\n| FGSM | 92.32 | 62.03 | 0.00 | 100.00 | 76.57 |\\n| GN | 89.79 | 50.17 | 0.17 | 98.67 | 66.52 |\\n| Average | 96.34 | 82.09 | 0.05 | 99.67 | 88.41 |\\n\\nThe results demonstrate our method's strong generalization ability on unseen attacks. Compared to conventional training approaches (shown below), our method experiences only a marginal decrease in overall performance. Notably, it even outperforms the conventional approach in terms of False Positive Rate (FPR) and Precision metrics.
These findings underscore our method's robust capability to detect and handle unseen attack patterns.\\n\\n| | Accuracy | TPR | FPR | Precision | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| Normal | 98.08 | 97.07 | 1.66 | 93.45 | 95.29 |\"}", "{\"title\": \"Reply for Reviewer RS1k (2/2)\", \"comment\": \"**Q2: Universality of CP-Guard.**\\n\\n**Reply:** In this work, we use a single trained CP-Guard+ model to defend against attacks with different parameters. As stated in Section 3.2, the training of CP-Guard+ is based on our generated CP-GuardBench dataset, which contains five attacks with attack ratios within [0,0.3]. Since our CP-Guard+ has a certain generalization ability, it can also be applied to different attack scenarios. To prove our method's generalization ability, we conducted experiments using a leave-one-out strategy. In this approach, we iteratively excluded one type of attack from the training set, trained the model on the remaining attacks, and then tested its performance on the held-out attack type. The experimental results are presented below (we set the perturbation budget $\\\\Delta=0.5$).\\n\\n| Held-out Attack | Accuracy | TPR | FPR | Precision | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| PGD | 99.66 | 98.29 | 0.00 | 100.00 | 99.14 |\\n| BIM | 99.93 | 100.00 | 0.09 | 99.66 | 99.83 |\\n| C&W | 100.00 | 100.00 | 0.00 | 100.00 | 100.00 |\\n| FGSM | 92.32 | 62.03 | 0.00 | 100.00 | 76.57 |\\n| GN | 89.79 | 50.17 | 0.17 | 98.67 | 66.52 |\\n| Average | 96.34 | 82.09 | 0.05 | 99.67 | 88.41 |\\n\\nThe results demonstrate our method's strong generalization ability on unseen attacks. Compared to conventional training approaches (shown below), our method experiences only a marginal decrease in overall performance. Notably, it even outperforms the conventional approach in terms of False Positive Rate (FPR) and Precision metrics.
These findings underscore our method's robust capability to detect and handle unseen attack patterns.\\n\\n| | Accuracy | TPR | FPR | Precision | F1 Score |\\n| --- | --- | --- | --- | --- | --- |\\n| Normal | 98.08 | 97.07 | 1.66 | 93.45 | 95.29 |\\n\\n**Q3: Benefit of CP-GuardBench.**\\n\\n**Reply:** Our idea to design CP-GuardBench is to facilitate the training and evaluation of feature-level collaborative perception defense methods against general CP system attacks. Since this work is the first attempt to leverage feature-level knowledge for CP defense, we hope this dataset can also be used in future research adopting the same idea of feature-level malicious detection in collaborative perception. In the future, we will also expand the CP-GuardBench to cover more attack scenarios, model backbones and parameters settings to boost the state-of-the-art evaluation.\\n\\n**Q4: Missing related works discussion on contrastive learning.**\\n\\n**Reply:** This work is the first one to leverage contrastive learning for feature-level malicious detection in collaborative perception, so we pay more attention to its application design instead of improving its technical details. But there are indeed some related works on contrastive learning for malicious detection we should also mention in the related work section. Thanks for pointing it out. Here is our added related works description on this.\\n\\n> The advancement in leveraging contrastive learning for anomaly detection is rapidly developing. [4] (IJCAI'21) proposed a masked contrastive learning framework to learn task-specific features, which is more suitable for malicious detection tasks. [5] (AAAI'23) further proposed a mean-shifted contrastive loss to overcome the adaptation failure of contrastive loss for malicious detection. [6] (NeurIPS'24) introduced CADet, a fully self-supervised method that is capable of detecting two types of out-of-distribution samples simultaneously. 
All the above methods are designed for general anomaly detection tasks validated on CIFAR-10 dataset, while the application of contrastive learning for malicious detection in the field of autonomous driving collaborative perception is still in its infancy.\\n> \\n\\n**Reference**\\n\\n[1] On Data Fabrication in Collaborative Vehicular Perception: Attacks and Countermeasures. USENIX Security'24.\\n\\n[2] Among Us: Adversarially Robust Collaborative Perception by Consensus. ICCV'23.\\n\\n[3] MADE: Malicious Agent Detection for Robust Multi-Agent Collaborative Perception. IROS'24.\\n\\n[4] Masked Contrastive Learning for Anomaly Detection. IJCAI'21.\\n\\n[5] Mean-Shifted Contrastive Loss for Malicious Detection. AAAI'23.\\n\\n[6] CADet: Fully Self-Supervised Out-Of-Distribution Detection With Contrastive Learning. NeurIPS'24.\"}", "{\"title\": \"Reply for Reviewer SymD\", \"comment\": \"Dear Reviewer SymD,\\n\\nThanks for your valuable comments and the time you dedicated to reviewing this work. Here we carefully and elaborately reply to your concerns.\\n\\n**W1: Discuss one related paper.**\\n\\n**Reply:** Thanks for your advice, here we discuss the paper you mentioned and add it to the revised paper.\\n\\n> Zhang *et al.* [1] leverage collaborative perception to defend LiDAR spoofing attack. Specifically, the authors use LiDAR scan data from neighboring vehicles to help the ego vehicle to detect and mitigate LiDAR spoofing attacks. Since current spoofing hardware typically targets one vehicle at a time, comparison with other vehicles' data helps identify discrepancies. This method is essentially to defend against conventional threats (LiDAR spoofing attack, GPS attack, etc.) to autonomous driving, while using collaborative perception as a means of detecting attacks. 
However, our method is fundamentally different: it focuses on the threats specific to the collaborative perception system itself, rather than using collaborative perception as a means to address general threats to vehicles.\\n> \\n\\n**W2: Consider the attacks that focus on placing specific fake objects rather than using typical perturbations.**\\n\\n**Reply**: Your concerns are about the attacker potentially injecting several fake objects at certain locations. These cases can occur in general attacks on vehicles. For example, a LiDAR spoofing attacker can accurately inject or remove certain objects in the victim's perception results through laser interference or by manipulating the returning LiDAR signals. However, in collaborative perception attacks, it is challenging to optimize adversarial perturbations in intermediate feature maps for specific objects and locations, and no such attacks have been developed yet. Therefore, current collaborative perception defense systems have not considered this issue. Perhaps in the future, this will be a valuable topic worthy of investigation.\\n\\n**Q1: The choice of encoder and decoder in Sec. 2.1.**\\n\\n**Reply:** The encoder uses the same architecture as described in [2], consisting of a convolutional backbone. For the decoder, we leverage mean fusion to integrate information from multiple CAVs, followed by a multi-layer convolutional neural network and a prediction head for classification and regression.
The entire architecture follows [2].\", \"encoder\": \"```python\\nSequential(\\nConv2d(13, 32, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(32)\\nReLU()\\nConv2d(32, 32, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(32)\\nReLU()\\nConv3D(64, 64, kernel_size=(1, 1, 1), stride=1, padding=(0, 0, 0))\\nConv3D(128, 128, kernel_size=(1, 1, 1), stride=1, padding=(0, 0, 0))\\nConv2d(32, 64, kernel_size=3, stride=2, padding=1) ->(32,256,256)\\nBatchNorm2d(64)\\nReLU()\\nConv2d(64, 64, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(64)\\nReLU() ->(64,128,128)\\nConv2d(64, 128, kernel_size=3, stride=2, padding=1)\\nBatchNorm2d(128)\\nReLU()\\nConv2d(128, 128, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(128)\\nReLU() ->(128,64,64)\\nConv2d(128, 256, kernel_size=3, stride=2, padding=1)\\nBatchNorm2d(256)\\nReLU()\\nConv2d(256, 256, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(256)\\nReLU() ->(256,32,32)\\nConv2d(256, 512, kernel_size=3, stride=2, padding=1)\\nBatchNorm2d(512)\\nReLU()\\nConv2d(512, 512, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(512)\\nReLU() ->(512,16,16)\\n)\\n```\", \"decoder\": \"```python\\nSequential(\\nConv2d(512 + 256, 256, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(256)\\nReLU()\\nConv2d(256, 256, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(256)\\nReLU() ->(256,32,32)\\nConv2d(256 + 128, 128, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(128)\\nReLU()\\nConv2d(128, 128, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(128)\\nReLU() ->(128,64,64)\\nConv2d(128 + 64, 64, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(64)\\nReLU()\\nConv2d(64, 64, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(64)\\nReLU() ->(64,128,128)\\nConv2d(64 + 32, 32, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(32)\\nReLU()\\nConv2d(32, 32, kernel_size=3, stride=1, padding=1)\\nBatchNorm2d(32)\\nReLU() ->(32,256,256)\\n)\\n```\\n\\n**Q2: Consideration of transmission delay.**\\n\\n**Reply:** In this paper, we have not considered
transmission delay yet. However, your question is thought-provoking and it is worth having a deeper investigation. We will think more about this question and see if we can develop some defense methods to leverage the delay advantage of the ego CAV. Thanks for your valuable question!!\\n\\n**Reference**\\n\\n[1] Cooperative Perception for Safe Control of Autonomous Vehicles under LiDAR Spoofing Attacks\\n\\n[2]Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net (CVPR\\u201918).\"}", "{\"summary\": \"This paper proposes (1) CP-GuardBench, a pipeline to generate collaborative perception scenarios under malicious attack. The authors built CP-GuardBench by applying 5 different deep learning adversarial generation methods on top of V2X-Sim simulated scenarios. To the best of the reviewer's knowledge, this is the first benchmark in this area. (2) CP-Guard+, a deep learning model to differentiate malicious perception features from benign ones on the encoded feature. The author assumes the cooperative perception is done via a sensor-encode-transmission-fusion-decode pipeline. The author designed this model to intake encoded per-CAV feature maps (both benign and malicious) and outputs a benign/malicious classification by training the model with a contrastive loss to maximize the benign-malicious difference.\\n\\nThe authors performed experiments using (1) and compared to other methods MADE, ROBOSAC and demonstrated improved performance in detection precision and runtime.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The topic of attack-robust cooperative perception has existed and there are other works that seek to identify attacks, but differentiating the attack on the feature level is new. Thus this work has high originality.\\n\\nOverall the paper is clear and easy to follow. 
The authors' methods are well explained and the experiments with competitor methods are sufficient.\", \"weaknesses\": [\"There is at least one related paper that the author needs to discuss as a related work: Cooperative Perception for Safe Control of Autonomous Vehicles under LiDAR Spoofing Attacks (Proceedings Inaugural International Symposium on Vehicle Security and Privacy).\", \"The author made an assumption that an adversarial attack occurs in the form of a perturbation to the encoded feature map by using the PGD, BIM, C&W, FGSM, GN attacks. In vision detection tasks, such attacks are effective at misleading a model to produce a wrong class label. However, in a cooperative perception task where the object state is more of an interest, such attacks might not be the prevailing form. According to Figure 7, such perturbations will result in hallucinated false objects spreading over the scene. But in reality, an attacker might as well want to inject just one or two fake objects at designated locations in the scene (for example, injecting just a single object in front of the ego to force it to stop). In such cases, the perturbation on the feature map will not look like noise. How will the model respond?\"], \"questions\": [\"Could the author clarify the `encoder` and `decoder` choice in section 2.1?\", \"Does the cooperative perception system consider transmission delays? (This might be helpful because the ego CAV can differentiate attacks, as it has a no-delay advantage)\"], \"flag_for_ethics_review\": [\"'No ethics review needed.'\"], \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}"
Let us clarify both the system workflow and attack objectives in detail.\\n\\n1. **Attack Objectives and Loss Function Design.** Unlike [1], which focuses on targeted attacks (maximizing/minimizing confidence scores of specific objects), our work addresses more general attacks that cause global perception degradation. Our adversarial loss L_adv is comprehensively designed to achieve multiple objectives simultaneously: confusing proposal classes for detected objects, suppressing scores of correct classes to generate false negatives, creating false positives by increasing scores of background classes, and maximizing intersection-over-union (IoU) of bounding box proposals to degrade localization.\\n2. **System Workflow.** In our considered synchronous collaborative perception system, each frame follows a sequential timeline:\\n \\n a) **Local Perception Phase.** All agents, including malicious ones, simultaneously process their sensor data and extract intermediate features using feature encoders. This phase operates entirely in parallel without any inter-agent communication, maximizing computational efficiency.\\n \\n b) **Feature Communication Phase.** During this phase, all agents broadcast their extracted features across the network, with malicious agent k receiving feature information {F_{j\\u2192i}} from other agents.
The communication overhead is kept minimal through our feature-level transmission approach.\\n \\n c) **Attack Generation Phase.** Malicious agent k formulates the attack optimization problem:\\n $$\\n \\\\begin{align*}& \\\\max_{\\\\delta} L_{adv}(Y^{\\\\delta}, Y) \\\\\\\\\\\\\\\\& \\\\text{s.t.} \\\\quad \\\\|\\\\delta\\\\| \\\\leq \\\\Delta \\\\\\\\\\\\\\\\& \\\\text{where:} \\\\\\\\\\\\\\\\& Y^{\\\\delta} = f_{decoder}(f_{aggregator}(F_{0\\\\rightarrow i}, F_{k\\\\rightarrow i}^{\\\\delta}, \\\\{F_{j\\\\rightarrow i}\\\\})) \\\\\\\\\\\\\\\\& F_{k\\\\rightarrow i}^{\\\\delta} = \\\\Gamma_{k\\\\rightarrow i}(F_k + \\\\delta)\\\\end{align*}\\n $$\\n For each proposal $z$, with $u = \\\\text{argmax}\\\\ {z_n|i = 0...m}$, the loss function is:\\n \\n \\\\begin{equation}\\n L_{adv}(z', z) = \\\\begin{cases} -\\\\log(1 - z'_u) \\\\cdot \\\\text{IoU}(z',z) & \\\\text{if } u \\\\neq k \\\\text{ and } z_u > \\\\tau^+ \\\\\\\\\\\\\\\\ -\\\\lambda \\\\cdot z'_u \\\\cdot \\\\log(1 - z'_u) & \\\\text{if } u = k \\\\text{ and } z_u > \\\\tau^- \\\\\\\\\\\\\\\\ 0 & \\\\text{otherwise}\\\\end{cases}\\n \\\\end{equation}\\n \\n d) **Defense and Final Perception Phase.** In this final phase, the ego vehicle receives all feature information, including corrupted features, performs CP-Guard+'s feature-level defense mechanisms, and completes the final object detection task.\\n \\n3. **Scope and Future Extensions.** While our current implementation focuses on general perception degradation attacks widely tested in state-of-the-art research [2, 3], we recognize the potential for more sophisticated attack strategies in real-world scenarios, including false object injection for provoking specific responses, targeted attacks on specific object classes, and temporal attacks across multiple frames. 
We plan to extend CP-Guard+ to address these specifically designed attacks in future work, viewing our current model as a fundamental step toward securing collaborative perception systems.\"}", "{\"title\": \"Reply for Reviewer mcjh (1/2)\", \"comment\": \"Dear Reviewer mcjh,\\nThank you for your valuable comments and the time you dedicated to reviewing this work. Here we carefully and elaborately reply to your concerns.\\n\\n**Q1: Threat model is not clear.**\\n\\n**Reply:** Thank you for raising this important concern about our threat model. Let us clarify the system workflow in detail. In this work, we consider a synchronous collaborative perception system with a low frame rate (e.g., 10 FPS), **each frame** follows a sequential timeline:\\n\\n1. **Local Perception Phase** (0-30ms). In this stage, all agents including potential malicious ones, simultaneously process their sensor data. Each agent employs feature encoders to extract intermediate features, with all processing occurring in parallel without any inter-agent communication.\\n2. **Feature Communication Phase** (30-50ms). In this phase, all agents broadcast their extracted features across the network. Malicious agent $k$ receives feature information ${F_{j\\u2192i}}$ from other agents. Thanks to our feature-level transmission approach, which generates significantly smaller payloads compared to raw sensor data, and our assumption of good channel conditions, the communication overhead remains minimal.\\n3. **Attack Generation Phase** (50-70ms). This phase represents the core of our threat model. 
Here, malicious agent $k$ formulates the attack optimization problem:\\n\\\\begin{align}& \\\\max_{\\\\delta} L_{adv}(Y^{\\\\delta}, Y) \\\\\\\\\\\\\\\\& \\\\text{ s.t.} \\\\quad \\\\|\\\\delta\\\\| \\\\leq \\\\Delta \\\\\\\\\\\\\\\\& \\\\text{where:} \\\\\\\\\\\\\\\\& Y^{\\\\delta} = f_{decoder}(f_{aggregator}(F_{0\\\\rightarrow i}, F_{k\\\\rightarrow i}^{\\\\delta}, \\\\{F_{j\\\\rightarrow i}\\\\})) \\\\\\\\\\\\\\\\& F_{k\\\\rightarrow i}^{\\\\delta} = \\\\Gamma_{k\\\\rightarrow i}(F_k + \\\\delta)\\\\end{align}\\nThe adversarial loss $L_{adv}$ is carefully designed to achieve multiple objectives simultaneously: confusing proposal classes for detected objects, suppressing scores of correct classes to generate false negatives, creating false positives by increasing scores of background classes, and maximizing intersection-over-union (IoU) of bounding box proposals to degrade localization. For each proposal $z$, with $u = \\\\text{argmax}\\\\ {z_n|i = 0...m}$ being the highest confidence class, we define the loss function as:\\n\\\\begin{equation}\\nL_{adv}(z', z) = \\\\begin{cases}\\n -\\\\log(1 - z'_u) \\\\cdot \\\\text{IoU}(z',z) & \\\\text{if } u \\\\neq k \\\\text{ and } z_u > \\\\tau^+ \\\\\\\\\\\\\\\\\\n -\\\\lambda \\\\cdot z'_u \\\\cdot \\\\log(1 - z'_u) & \\\\text{if } u = k \\\\text{ and } z_u > \\\\tau^- \\\\\\\\\\\\\\\\\\n 0 & \\\\text{otherwise}\\n\\\\end{cases}\\n\\\\end{equation}\\n\\n where $z'$ and $z$ represent the perturbed and original output proposals, respectively, $\\\\tau^+$ and $\\\\tau^-$ serve as confidence thresholds for positive/negative samples, and $\\\\lambda$ acts as a weighting parameter balancing different objectives. 
The perturbation generation process employs efficient iterative optimization techniques, including PGD attack with 5-10 iteration steps ($\\\\delta_t = \\\\text{Proj}(\\\\delta_{t-1} + \\\\alpha \\\\cdot \\\\text{sign}(\\\\nabla_\\\\delta L))$) and GN attack with single-step generation ($\\\\delta = \\\\varepsilon \\\\cdot \\\\text{sign}(N(0,I))$). The generated perturbation $\\\\delta$ is then applied to the original feature $F_k$ to obtain the corrupted feature $F_{k\\\\rightarrow i}^{\\\\delta}$\\n\\n4. **Defense and Final Perception Phase** (70-100ms). Finally, during the Defense and Final Perception Phase (70-100ms), the ego vehicle receives all feature information, including any corrupted features. CP-Guard+ performs its feature-level defense mechanisms before completing the final object detection task. This carefully orchestrated sequence ensures that the entire pipeline, including attack generation and defense, can be completed within the 100ms frame interval, meeting the real-time requirements of autonomous driving systems. The above setting is also aligned with recent works in collaborative perception security [1, 2] (ICCV'23, IROS'24). We will make the threat model more clear in the revised manuscript based on your suggestion. Thanks for your valuable comments!!\"}", "{\"comment\": \"Dear Reviewer fZua,\\n\\nThank you for your valuable feedback on our submission. We have thoroughly addressed all your comments and believe that our responses have reasonably resolved the concerns you raised. As the discussion period is coming to a close soon, we kindly ask if you could review our responses at your convenience. If you have any further questions or require additional clarification, please let us know. 
We are more than willing to provide any additional information you might need.\\n\\nRegards, \\n\\nAuthors of Submission362\"}", "{\"metareview\": \"(a) Summary: This paper proposes a new dataset CP-GuardBench as the first one for malicious agent detection in collaborative perception systems and then it proposes CP-Guard+ as a robust malicious agent detection method.\\n(b) Strengths: The paper is generally well-written and easy to follow. The experimental results seem to support the authors' claims to some extent.\\n(c) Weaknesses: The reviewers pointed out a few major concerns and issues. The contributions are incremental. The proposed method is straightforward but the analysis is not comprehensive. The threat model is not clearly demonstrated. Some comparisons with baseline methods are missing.\\n(d) Although the authors addressed some of the concerns and comments from reviewers, some issues still remain unresolved. The majority of reviewers gave a negative final rating.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers read the authors' rebuttal, but still have concerns that were not fully addressed.\"}" ] }
9M5georQ9T
Optimizing Attention with Mirror Descent: Generalized Max-Margin Token Selection
[ "Aaron Alvarado Kristanto Julistiono", "Davoud Ataee Tarzanagh", "Navid Azizan" ]
Attention mechanisms have revolutionized numerous domains of artificial intelligence, including natural language processing and computer vision, by enabling models to selectively focus on relevant parts of the input data. Building on recent results characterizing the optimization dynamics of gradient descent (GD) and the structural properties of its preferred solutions in attention-based models, this paper explores the convergence properties and implicit bias of a family of mirror descent (MD) algorithms designed for softmax attention mechanisms, with the potential function chosen as the $p$-th power of the $\ell_p$-norm. Specifically, we show the directional convergence of these algorithms to a generalized hard-margin SVM with an $\ell_p$-norm objective when applied to a classification problem using a one-layer softmax attention model. Our theoretical results demonstrate that these algorithms not only converge directionally to the generalized max-margin solutions but also do so at a rate comparable to that of traditional GD in simpler models, despite the highly nonlinear and nonconvex nature of the present problem. Additionally, we delve into the joint optimization dynamics of the key-query matrix and the decoder, establishing conditions under which this complex joint optimization converges to their respective hard-margin SVM solutions.
[ "Attention Mechanism", "Mirror Descent", "Implicit Regularization", "Transformers" ]
https://openreview.net/pdf?id=9M5georQ9T
https://openreview.net/forum?id=9M5georQ9T
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zo4BYvlepj", "zkWMnPP17a", "wur2iYgVeg", "vkmVjP7BqW", "vjXQsAwlml", "qXT881mX5r", "mZ28bOHvMX", "m9s4Vi7SoU", "jInTR0wpZd", "eNnMnibW6U", "djwcNa0D7C", "dASh0xBSVF", "bWQtwIbJmV", "XkDQxTlUJE", "WdTOmnvC9n", "VYmFWQKQvD", "S1Q745yynq", "HUuSqmODsb", "GMs6qUDxwi", "GAdMvgGpXB", "EPziN9UfhM", "B22hks0TI1", "88WsiIFO7t", "7RR6Aa7oLZ", "5LAZ0oYSW3", "2iWOd2f9BP", "1cqIrL4wpk", "1CGMKgxSdw" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732753850894, 1732756201412, 1733287278776, 1733287046531, 1732752212509, 1732749498424, 1732747761834, 1732749090410, 1733191895890, 1733120004077, 1730759711875, 1733146899298, 1733120073310, 1732753275703, 1732751258318, 1737317053977, 1733287201712, 1733120019504, 1733170017647, 1732748099164, 1730731695736, 1732750137768, 1732781018270, 1730692415287, 1732756094100, 1731028303725, 1732748337298, 1733292853302 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_7Bsg" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10068/Reviewer_kVFQ" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_vf65" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_kVFQ" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_vf65" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_HzdV" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_7Bsg" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Reviewer_HzdV" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ], [ "ICLR.cc/2025/Conference/Submission10068/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 7Bsg--Part 1\", \"comment\": \"> **Weakness 1:** The empirical evaluation is limited; the paper includes experiments, and they are primarily focused on synthetic data and a single real-world dataset (Stanford Large Movie Review Dataset). More diverse real-world applications would strengthen the practical implications.\\n\\n**Response:** To address the reviewer\\u2019s concern, in addition to the language model experiments, we **provide a new set of empirical evaluations on training a Vision Transformer model** to perform a classification task on CIFAR-10 in Appendix H5. This experiment involves training a Vision Transformer (ViT) model with $\\\\ell_{1.1}$-MD and Adam over the first 1000 epochs. 
The results in **the newly added Figure 11** show that $\\\\ell_{1.1}$-MD achieves similar test accuracies to Adam, demonstrating that it can match the performance of other state-of-the-art optimizers for transformer models. In addition, to compare the explainability between $\\\\ell_{1.1}$-MD and Adam, we provide the weight distributions of the resulting two models in **the newly added Figure 12** and we provide a discussion on the explainability of $\\\\ell_{1.1}$-MD:\\n\\nExplainability in attention mechanisms is often defined as the model\\u2019s ability to identify and prioritize the most influential tokens in the input sequence, thereby making its decision-making process more interpretable [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)] . This aligns with the concept of feature selection in classical machine learning, where sparse and focused representations improve both interpretability and model robustness.\\n\\nIn our work, $\\\\ell_{1.1}$-MD demonstrates superior explainability compared to other gradient-based methods, such as standard (S)GD and Adam. Specifically, $\\\\ell_{1.1}$-MD produces sparser weight distributions and attention maps that more sharply highlight the most critical tokens. Figures 8 and 13 provide clear evidence of this property in the Stanford Large Movie Review Dataset. For instance, attention maps generated by $\\\\ell_{1.1}$-MD-trained models focus more on sentiment-revealing words, such as \\\"amazing\\\" or \\\"terrible,\\\" while models trained with GD display more diffuse attention patterns, potentially diluting interpretability. This ability to emphasize pivotal tokens directly contributes to the model's transparency and aligns with established literature emphasizing the importance of sparsity for interpretability. 
\\n\\nFurthermore, the weight distributions in the key, query, and value matrices, shown in Figure 9, highlight that $\\\\ell_{1.1}$-MD encourages sparsity more effectively than GD, while the weight distribution in Figure 12 shows that $\\\\ell_{1.1}$-MD also induces more sparsity compared to Adam. This sparsity enhances interpretability by limiting the model's reliance on non-essential tokens. By aligning the optimization process with explainability objectives, $\\\\ell_{1.1}$-MD offers practical benefits for applications where transparency is crucial [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)]. Thus, $\\\\ell_{1.1}$-MD achieves comparable generalization performance to Adam (as shown in Figure 11), and its token selection precision and sparse representations establish it as an interpretable and explainable optimization method (as shown in Figures 8, 9, 12, and 13). These findings underscore the potential of using MD variants to improve both the performance and explainability of attention-based models.\"}", "{\"title\": \"Response to Reviewer 7Bsg--Part 3\", \"comment\": \"> **Weakness 3:** The paper does not thoroughly address the computational overhead of implementing mirror descent compared to standard gradient descent.\\n\\n**Response:** Thank you for the feedback. This is an important point. While the MD family of algorithms, in general, may have a substantial computational overhead compared to GD, the subfamily that we study in our paper have a very small overhead (linear in the number of parameters) and are easily implementable because the update rule is *coordinate-wise separable*. We discuss this below and in the revision in Appendix H1. \\n\\nThe computational overhead is linear in the number of parameters that we are training. 
To illustrate this, we focus on the $\\\\ell_p$-AttGD algorithm in our paper, and compare its computation requirement to how GD would optimize the same set of parameters. The same analysis can be applied on $\\\\ell_p$-JointGD and $\\\\ell_p$-MD and we would get the same result.\\n\\nFor the analysis, let\\u2019s say that the parameter $W$ has $D=d\\\\times d$ entries in total. In GD, we simply had to compute $\\\\eta\\\\nabla\\\\mathcal{L}(W(k))$, and subtract that from the parameter $W(k)$, leading to the update rule $W(k+1)\\\\leftarrow W(k)-\\\\eta\\\\nabla\\\\mathcal{L}(W(k))$.\\n\\nIn $\\\\ell_p$-AttGD, we had to first transform $W(k)$ using the mirror map, which can be done by applying the function $x\\\\mapsto|x|^{p-1}\\\\operatorname{sign}(x)$ element-wise, denote the result of this as $W\\u2019(k)$. This first operation would require an additional $\\\\mathcal{O}(D)$ time. Then, similarly as in GD, we had to compute $\\\\eta\\\\nabla\\\\mathcal{L}(W(k))$ and subtract that from $W\\u2019(k)$, denote this new result by $W(k)^+$. This has identical complexity to that of the GD algorithm. Finally, we need to apply the inverse mirror-map function on $W(k)^+$ and save the result to the parameters in the models. This operation can be done by applying $y\\\\mapsto|y|^{1/(p-1)}\\\\operatorname{sign}(y)$ element-wise, so it will take an additional $\\\\mathcal{O}(D)$ time. In total, our algorithm requires an additional $\\\\mathcal{O}(D)$ time. For space complexity, we can see that we only need to hold a constant amount of additional space for the computation of each entry, so it requires $\\\\mathcal{O}(D)$ additional space.\\n\\nAlthough this adds some overhead compared to GD, its overhead should be similar to that of common adaptive methods such as Adam, and RMSProp.\"}", "{\"title\": \"Response to Reviewer 7Bsg\", \"comment\": \"Thank you for the valuable feedback. 
We address them as follows.\\n\\n> Limited empirical variety\\n\\n**Response:** Thank you for your comment. We have already conducted experiments in two synthetic settings (as shown in Figures 4, 5, 6, 7, and 10) and two real-world settings for language and vision tasks (as shown in Figures 8, 9, 11, 12, 13, and Table 1). To further strengthen our experimental findings, we have included additional results on the more complex CIFAR-100 dataset, which will be presented in the appendix of the final manuscript.\\n\\nThe setting of this experiment is similar to that of the experiment on CIFAR-10 (training a ViT architecture using Adam and $\\\\ell_{1.1}$-MD for 1000 epochs), with the difference being that the output is a probability distribution on $100$ classes instead of $10$. We get a similar result in terms of the weight distribution of the attention weights in the model, namely that the **$\\\\ell_{1.1}$-MD trained model has sparser weight distribution** when compared to the model trained with Adam. However, we get a higher test accuracy for the model trained by $\\\\ell_{1.1}$-MD by approximately **2.19%** compared to Adam. Specifically, we get **55.22%** from $\\\\ell_{1.1}$-MD and **53.03%** from Adam. 
This is further detailed in the following table, which shows the test accuracy at every 50 epochs:\\n\\n| Epochs | Adam | $\\\\ell_{1.1}$-MD |\\n| -- | -- | -- |\\n| 50 | 0.51710 | 0.46620 |\\n| 100 | 0.52310 | 0.51470 |\\n| 150 | 0.51770 | 0.52590 |\\n| 200 | 0.52350 | 0.53150 |\\n| 250 | 0.52510 | 0.53480 |\\n| 300 | 0.52300 | 0.53750 |\\n| 350 | 0.52640 | 0.53860 |\\n| 400 | 0.52770 | 0.53670 |\\n| 450 | 0.52150 | 0.53930 |\\n| 500 | 0.52860 | 0.54150 |\\n| 550 | 0.52800 | 0.54200 |\\n| 600 | 0.52480 | 0.54300 |\\n| 650 | 0.52470 | 0.54180 |\\n| 700 | 0.52940 | 0.54870 |\\n| 750 | 0.53200 | 0.54510 |\\n| 800 | 0.52940 | 0.54560 |\\n| 850 | 0.52950 | 0.54930 |\\n| 900 | 0.53150 | 0.54780 |\\n| 950 | 0.52720 | 0.54680 |\\n| 1000 | 0.53030 | 0.55220 |\\n\\n> Dependence on Theoretical Assumptions\\n\\n**Response:** Thank you for the feedback, yes our assumption mainly aligns with the recent works on implicit bias analysis of optimization algorithms for attention [[Tarzanagh et al.](https://arxiv.org/abs/2308.16898), [Vasudeva et al.](https://arxiv.org/abs/2402.05738), [Sheen et al.](https://arxiv.org/abs/2403.08699)] and implicit bias for mirror descent for general machine learning problems [[Sun et al.](https://www.jmlr.org/papers/v24/23-0836.html), [Azizan et al.](https://arxiv.org/abs/1906.03830), [Azizan et al.](https://openreview.net/pdf?id=HJf9ZhC9FX)], and we hope that future works can relax these assumptions.\\n\\n> Computational Overhead\\n\\n**Response:** To follow your suggestion, we will include the runtime of our algorithms for the real-data experiments. As analyzed in our previous response and the revised manuscript, the overhead time and space complexity for $\\\\ell_p$-MD compared to GD is small (linear in the number of parameters trained) and is comparable to the overhead of Adam relative to GD.\"}", "{\"title\": \"Response to Reviewer vf65\", \"comment\": \"Thank you for the feedback. 
We address them below.\\n\\n> I thank the authors for their response. It is nice to see that you can get improved sparsity at same accuracy than Adam. I have the following suggestions : I believe this result warrants further exploration, as it now seems tangential to the main focus of the paper. It could strengthen the motivation on the benefits of the proposed algorithm compared to what is well-established in the field.\\n \\n**Response:** We thank the reviewer for their insightful comment. As Reviewer HzdV noted, \\u201cThe extension to $\\\\ell_p$-norm \\u2026 opens up new avenues for optimizing attention mechanisms.\\u201d We agree that further experimental investigations and optimization methods could be explored in future works, as such extensions fall outside the scope of this paper.\\n\\nThe primary goal of the current work is to provably demonstrate how different optimization algorithms exhibit **distinct implicit biases**, leading to variations in **convergence rates**, **generalization**, and **explainability** within the same attention model. This is supported by our experimental results on the synthetic dataset (Figures 4--7, 10), text semantic analysis dataset (Figures 8, 9, 13, and Table 1), and image classification dataset (Figures 11 and 12).\\n\\nTo further enhance the experimental findings, we present additional results on CIFAR-100, which will be included in the appendix of the final manuscript.\\n\\nThe setting of this experiment is similar to that of the experiment on CIFAR-10 (training a ViT architecture using Adam and $\\\\ell_{1.1}$-MD for 1000 epochs), with the difference being that the output is a probability distribution on $100$ classes instead of $10$. We get a similar result in terms of the weight distribution of the attention weights in the model, namely that the **$\\\\ell_{1.1}$-MD trained model has sparser weight distribution** when compared to the model trained with Adam. 
However, we get a higher test accuracy for the model trained by $\\\\ell_{1.1}$-MD by approximately **2.19%** compared to Adam. Specifically, we get **55.22%** from $\\\\ell_{1.1}$-MD and **53.03%** from Adam. This is further detailed in the following table, which shows the test accuracy at every 50 epochs:\\n\\n| Epochs | Adam | $\\\\ell_{1.1}$-MD |\\n| -- | -- | -- |\\n| 50 | 0.51710 | 0.46620 |\\n| 100 | 0.52310 | 0.51470 |\\n| 150 | 0.51770 | 0.52590 |\\n| 200 | 0.52350 | 0.53150 |\\n| 250 | 0.52510 | 0.53480 |\\n| 300 | 0.52300 | 0.53750 |\\n| 350 | 0.52640 | 0.53860 |\\n| 400 | 0.52770 | 0.53670 |\\n| 450 | 0.52150 | 0.53930 |\\n| 500 | 0.52860 | 0.54150 |\\n| 550 | 0.52800 | 0.54200 |\\n| 600 | 0.52480 | 0.54300 |\\n| 650 | 0.52470 | 0.54180 |\\n| 700 | 0.52940 | 0.54870 |\\n| 750 | 0.53200 | 0.54510 |\\n| 800 | 0.52940 | 0.54560 |\\n| 850 | 0.52950 | 0.54930 |\\n| 900 | 0.53150 | 0.54780 |\\n| 950 | 0.52720 | 0.54680 |\\n| 1000 | 0.53030 | 0.55220 |\\n \\n> Could the obtained sparsity be related to support vector sparsity in the SVM?\\n \\n**Response:** This is indeed a direct consequence of our theorems. Specifically as it was shown in our theorems, $\\\\ell_{p}$-AttGD converges to $\\\\ell_{p}$-AttSVM solutions. Further, since $\\\\ell_p$-AttSVM solutions would be sparse when $p$ is close to $1$, this shows that, for example, $\\\\ell_{1.1}$-MD converges to sparser solution compared to GD ($\\\\ell_2$-MD) and Adam as provided by our experiments in Figure 9 and 12.\\n \\n> At the same time, the current manuscript presents some exaggerated claims \\u2026 in token selection\\n \\n**Response:** To address your concern, we removed the word \\u201cextensive\\u201d from Line 88.\\n \\n> Regarding previous work on analyzing attention through the lens of SVMs, \\u2026 case\\n \\n**Response:** We thank the reviewer for their feedback and would like to clarify that **our work does not aim to design or solve different SVM formulations**. 
Instead, our focus is on analyzing the implicit bias of optimization algorithms, specifically Mirror Descent (MD), in the context of training attention mechanisms. \\n\\n*Implicit bias* refers to the inherent tendency of optimization algorithms to select specific solutions from among the infinitely many parameter configurations that can achieve zero training error, particularly in non-convex settings. Our study highlights how the implicit biases of MD influence the specific properties of attention weights, such as sparsity, robustness, and generalization, offering insights that go beyond standard optimization behaviors e.g., the behavior of standard (stochastic) gradient descent for attention. We hope this clarification resolves the reviewer\\u2019s concern and emphasizes that the unique focus of our work is on implicit bias analysis, not on SVM design or solving.\"}", "{\"title\": \"Response to Reviewer vf65--Part 2\", \"comment\": \"> **Weakness 1-2:** Other core ideas of analyzing the implicit bias of GD/MD algorithms for softmax attention are already present in the cited works (Tarzanagh et al., 2023, 2024; Vasudeva et al., 2024a; Sheen et al., 2024) without fundamentally changing the nature of the problem or leading to significantly different insights.\\n\\n**Response:** We acknowledge that there is existing literature on training attention models using GD. However, to the best of our knowledge, the papers you mentioned (Tarzanagh et al., 2023, 2024; Vasudeva et al., 2024a; Sheen et al., 2024) have not conducted this analysis for MD, nor have they performed a local convergence rate analysis for either MD or GD. The analysis involves significant technical challenges and heavy lifting to extend the results to MD, which is substantially different from the previously cited works. Furthermore the results do lead to significantly new insights. We elaborate on the difference between our work and those papers below.\\n\\n1. 
(Tarzanagh et al., 2023, 2024) was, to our knowledge, the first to explore the implicit bias of attention mechanism, and they have done their analysis for GD. **Our work extends their result for locally optimal tokens for MD and shows that different training algorithm can induce different implicit biases**. In addition to the new result, we also had to develop a different analysis technique because different tools were needed to analyze MD, such as Bregman divergence. Due to the generality of the Bregman divergence, we could not prove that the parameters will stay around the locally optimal direction in the same way that Tarzanagh did, which was by showing that the parameters will always approach that optimal direction. Instead, we had to show that if the parameters were initially close around the locally optimal direction, then it would not stray too far from the locally optimal direction. \\n\\n2. (Vasudeva et al., 2024a) explored the rate of global convergence of training the attention model using GD for joint optimization and $W$ optimization. Meanwhile, we focused on the $W$ optimization for MD, looking into both its implicit bias and convergence rate. Furthermore, Vasuveda et al. had to rely on an additional assumption on the near-orthogonality of the token features, while we did not have to rely on that. **Most importantly, while Vasudeva et al. theoretically proved the result for normalized GD and GD with Polyak step-sizes, and empirically investigated those as well as Adam, they have not shown that these different optimization algorithms have different implicit biases and token selection properties.** However, we do so both theoretically and empirically.\\n\\n\\n3. (Sheen et al., 2024) examined the implicit bias of gradient flow for self-attention, providing the first theoretical analysis of first-order algorithms for the $(Q,K)$-parameterization in self-attention. 
However, while gradient flow is a valuable tool in analyzing training algorithms, the practical applicability of these results may be limited, as gradient flow is not typically considered a practical algorithm for training ML models in general. \\n\\n4. Finally, we would like to emphasize that our introduction of $\\\\ell_p$-AttGD and the extension of attention optimization to generalized $\\\\ell_p$-norm objectives provide entirely new theoretical results, including convergence rates under Bregman divergence and insights into token selection properties. These results are absent in the aforementioned papers.\"}", "{\"comment\": \"> **Weakness 2:** There are many unclear things about global and optimal tokens (def.3). First, the fact that there is a feasible solution is very important, precise conditions for it to happen must be put in the paper. The authors simply state that \\u201cunder mild overparameterization and $d \\\\geq \\\\max(T-1, n)$, the problem is almost always feasible (Tarzanagh et al., 2023, Theorem 1).\\u201d These conditions must be put in the paper. Also, it is unclear whether the statement is \\u201cfor all $\\\\alpha_i$, the problem is feasible\\u201d or \\u201cthe problem is feasible for at least one set of $\\\\alpha_i$.\\u201d\\n\\n> **Weakness 3:** I cannot understand why Theorem 1 does not require those conditions: clearly, $W_{\\\\text{opt}}^{\\\\text{mm}}$ must exist for it to hold, and there is nothing in the assumptions guaranteeing it.\\n\\n> **Question 1:** Can the authors clarify the questions above regarding feasibility?\\n\\n**Response:** Thank you for pointing out this particular issue. We agree that it is important to explicitly state the assumptions under which the problem is feasible. To address this, we have revised the paper to include these conditions directly below Definition 4. 
Specifically, **we have included the following assumptions in the revised manuscript:**\\n\\n\\\"Throughout this paper, we assume feasibility, which means there exists a matrix $W$ that linearly separates the logits $X_{i \\\\alpha_i}^T W z_i$ from the logits $X_{it}^T W z_i$ for all $t \\\\in [T] \\\\setminus {\\\\alpha_i}$ and $i \\\\in [n]$.\\\"\\nAdditionally, we specify that:\\n\\n\\\"Under mild overparameterization, where $d \\\\geq \\\\max(T-1, n)$, the problem is almost always feasible (Tarzanagh et al., 2023, Theorem 1).\\\"\\n\\nPlease refer to the blue part below Definition 4. \\n\\n> **Weakness 4:** It is never properly stated in the paper if \\u201cglobally optimal tokens\\u201d are also \\u201clocally optimal.\\u201d It seems not to be the case, since globally optimal tokens depend on $v$ while locally optimal tokens do not. This also causes issues in Theorem 1: globally optimal tokens do not seem to be always feasible, so $W_{\\\\text{mm}}^{\\\\text{opt}}$ is not always defined, while the $\\\\ell_p$-AttRp always has a solution. For instance, it could very well have a global solution of fixed norm.\\n\\n**Response:** Thank you for your thoughtful feedback. As we clarified in our **response to Question 1**, the feasibility of the SVM problem in Definition 4 is assumed throughout the paper. Regarding the relationship between globally optimal tokens $\\\\text{opt}_i$ and locally optimal tokens $\\\\alpha_i$, globally optimal tokens inherently satisfy the conditions for locally optimal tokens as defined in Definition 3.\\n\\nBy substituting $\\\\alpha_i$ with $\\\\text{opt}_i$ in Definition 3 (Item 2), the resulting optimization problem remains consistent, and the token scores $\\\\gamma\\\\_{i\\\\text{opt}_i}$ are at least as high as those of any support token for sample $i$. 
This follows directly from the definition, where the globally optimal token score is greater than or equal to the scores of all other tokens, including the support tokens.\\n\\nFrom this discussion, the key difference between globally and locally optimal tokens is that globally optimal tokens require their token score to be higher than that of all tokens, while locally optimal tokens only require this condition for a specific subset of tokens. This implies that both definitions depend on $v$, as the token score is defined in terms of it. However, globally optimal tokens exhibit a stronger dependency on $v$. Consequently, while globally optimal tokens have a greater dependency on $v$, they inherently satisfy the conditions for locally optimal tokens.\", \"title\": \"Response to Reviewer kVFQ--Part 2\"}", "{\"title\": \"General Response\", \"comment\": \"We thank the reviewers for their valuable feedback, which greatly improved our manuscript. Below, we summarize the main contributions **(C1\\u2013C3)**, the reviewers' key points **(P1\\u2013P4)**, and our actions **(A1\\u2013A4)**. Revisions and **new experiments** are highlighted in blue in the paper.\\n\\n**C1. Implicit Bias Analysis with Mirror Descent (MD):** We provide a comprehensive theoretical analysis of MD for softmax attention mechanisms, introducing the $\\\\ell_p$-AttGD framework. This approach demonstrates directional convergence to generalized hard-margin SVM solutions with $\\\\ell_p$-norm objectives and extends prior work limited to gradient descent.\", \"reviewer_kvfq_noted\": \"\\\"The fact that MD achieves sparser weights and leads to better generalization is interesting.\\\" This highlights the significance of analyzing MD's implicit bias and token selection properties.\\n\\n**C2. Different Optimizers and Their Token Selection:** Our motivation is to demonstrate that **different optimizers have different token selection properties**, which in turn leads to different generalization performances. 
This observation, missing in the attention optimization literature (Tarzanagh et al., 2023, 2024; Vasudeva et al., 2024a; Sheen et al., 2024), is validated by analyzing $\\\\ell_p$-AttGD convergence to $\\\\ell_p$-SVM under different $\\\\ell_p$-norm objectives within the nonconvex softmax framework. Reviewer HzdV stated: \\\" The paper provides a solid theoretical foundation for understanding the convergence properties and implicit bias of MD in attention models. The extension to $\\\\ell_p$-norm objectives adds flexibility in modeling and opens up new avenues for optimizing attention mechanisms.\\u201d \\n\\n**C3. Empirical Insights on Generalization:** Through experiments on synthetic and real-world datasets (e.g., the Stanford Large Movie Review Dataset), we demonstrate that MD algorithms improve token selection and generalization compared to GD. Furthermore, $\\\\ell_{1.1}$-AttGD exhibits superior sparsity and focus, offering improved token selection capabilities. Reviewer 7Bsg mentioned: \\\"The fact that the real-data experiments show some improvements in generalization is noteworthy,\\\" validating the practical value of our empirical contributions.\\n\\n**P1. Practical Relevance and Role of MD:** Several reviewers questioned the practicality of MD, emphasizing its limited usage in deep learning models (Reviewers HzdV and kVFQ). Reviewer HzdV requested examples of MD-based optimizers, while Reviewer kVFQ emphasized the need for clearer motivation for exploring MD in attention training.\\n\\n**P2. Comparison to Adam and Architecture Details:** Reviewers requested comparisons between MD and popular optimizers like AdamW (Reviewers HzdV and vf65). Reviewer kVFQ also suggested including model architecture details in the appendix for reproducibility.\\n\\n**P3. 
Feasibility and Assumptions of $\\\\ell_p$-AttSVM:** Reviewers kVFQ and vf65 raised concerns about feasibility conditions in the $\\\\ell_p$-AttSVM problem and requested explicit clarification of assumptions related to initialization and step size.\\n\\n**P4. Broader Experimental Scope and Evaluation:** Reviewers vf65 and 7Bsg noted that the synthetic experiments are overly simplistic and called for more diverse datasets to support the claims. Additionally, quantitative evaluations of token selection and training loss values were suggested for real-data experiments.\\n\\n**A1. Clarified Motivation and Practical Relevance of MD:** We revised the introduction to emphasize the novel insights provided by MD, including its token selection properties and implicit bias. Further, examples of MD-based optimizers, such as those used in convex optimization and reinforcement learning, have been added.\\n\\n Further details are provided in the response to Reviewers HzdV and kVFQ.\\n\\n**A2. Comparisons with Adam and Architecture Details:** We added new experiments to compare the test accuracy and weight distribution of Adam and $\\\\ell_{1.1}$-MD on training a Vision Transformer model to learn the CIFAR-10 dataset, to show a competitive performance and a superior explainability of MD, as shown **in the newly added Figures 11 and 12 in Appendix H5**.\\n\\n Further details are provided in the response to Reviewers kVFQ, vf65, and HzdV.\\n\\n**A3. Feasibility and Assumptions in $\\\\ell_p$-AttSVM:** We explicitly stated the feasibility conditions and assumptions (e.g., overparameterization $d \\\\geq \\\\max(T-1, n)$) in the main text. These conditions were also discussed in the context of Theorem 1 and other results.\\n\\n Further details are provided in the response to Reviewers kVFQ and vf65.\\n\\n**A4. 
Broader Experiments and Quantitative Evaluation:** To strengthen the empirical results, we conducted additional experiments on tasks like sentiment classification with more diverse datasets. Quantitative metrics for token selection accuracy and training loss values have been incorporated.\\n\\n Further details are provided in the response to Reviewers vf65 and 7Bsg.\"}", "{\"title\": \"Response to Reviewer kVFQ--Part 1\", \"comment\": \"> **Weakness 1:** The motivation is quite unclear. ... Also, providing train loss values for the real data experiment would be enlightening.\\n\\n> **Question 2:** Can the authors compare the training curves with that of Adam?\\n\\nThank you for your comment and question. We have revised the quoted sentence to clarify the motivation, and we address your concerns below. \\n\\n**Motivation**: The central motivation of our work is to demonstrate that different optimizers exhibit distinct token selection properties, converging to different SVM solutions and thereby leading to varying generalization performances. While **MD is not yet widely adopted for training transformers, it has been extensively studied for its implicit bias properties in standard deep neural networks**; see Section 1.2 in [[Sun et al.](https://www.jmlr.org/papers/v24/23-0836.html)] for an overview as well as [[Azizan et al.](https://arxiv.org/abs/1906.03830), [Azizan et al.](https://openreview.net/pdf?id=HJf9ZhC9FX), [Gunasekar et al.](https://arxiv.org/abs/1802.08246)]. Notably, many adaptive algorithms, including AdaGrad (please refer to Section 1.1 in [[Duchi et. al](https://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)] regarding connection between AdaGrad and MD), can be viewed as variants of MD with different potential functions. \\n\\nAs Reviewer HzdV pointed out, this insight opens up a novel direction for exploring alternative optimization algorithms tailored for attention mechanisms. 
The implicit bias of MD can significantly influence token selection properties, thereby impacting both generalization and interpretability.\\n\\n**Additional Experiment on Vision Transformers**: To address the reviewer\u2019s concern regarding comparisons to widely used algorithms like Adam, we have added an experiment in Appendix H5 to further validate the performance of $\\ell_{1.1}$-MD. This experiment involves training a Vision Transformer (ViT) model with $\\ell_{1.1}$-MD and Adam over the first 1000 epochs. The results in Figure 11 show that $\\ell_{1.1}$-MD achieves similar test accuracies to Adam, demonstrating that it can match the performance of state-of-the-art optimizers for transformer models. This finding underscores that $\\ell_{1.1}$-MD is not only competitive but also offers additional advantages, such as improved explainability, as discussed below.\\n\\n**Explainability Through Mirror Descent**: Explainability in attention mechanisms is often defined as the model\u2019s ability to identify and prioritize the most influential tokens in the input sequence, thereby making its decision-making process more interpretable [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)]. This aligns with the concept of feature selection in classical machine learning, where sparse and focused representations improve both interpretability and model robustness. In our work, $\\ell_{1.1}$-MD demonstrates superior explainability compared to other traditional optimization methods. Specifically, $\\ell_{1.1}$-MD produces sparser weight distributions and attention maps that more sharply highlight the most critical tokens. Figures 8 and 13 provide clear evidence of this property in the Stanford Large Movie Review Dataset. 
For instance, attention maps generated by $\\\\ell_{1.1}$-MD-trained models focus more on sentiment-revealing words, such as \\\"amazing\\\" or \\\"terrible,\\\" while models trained with GD display more diffuse attention patterns, potentially diluting interpretability. This ability to emphasize pivotal tokens directly contributes to the model's transparency and aligns with established literature emphasizing the importance of sparsity for interpretability [[Sun et al.](https://www.jmlr.org/papers/v24/23-0836.html), [Azizan et al.](https://arxiv.org/abs/1906.03830), [Azizan et al.](https://openreview.net/pdf?id=HJf9ZhC9FX)].\\n\\nFurthermore, the weight distributions in the key, query, and value matrices, shown in Figure 9, highlight that $\\\\ell_{1.1}$-MD encourages sparsity more effectively than GD, while the weight distribution in Figure 12 shows that $\\\\ell_{1.1}$-MD also induces more sparsity compared to Adam. This sparsity enhances interpretability by limiting the model's reliance on non-essential tokens. By aligning the optimization process with explainability objectives, $\\\\ell_{1.1}$-MD offers practical benefits for applications where transparency is crucial [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)]. Thus, **while $\\\\ell_{1.1}$-MD achieves comparable generalization performance to Adam (as demonstrated in our experiments), its token selection precision and sparse representations establish it as an interpretable and explainable optimization method**. These findings underscore the potential of using MD variants to improve the performance and explainability of attention-based models.\"}", "{\"comment\": \"Thanks for the detailed answers and the additional experiments. I propose the following:\\n1. 
Limited empirical variety: This is a welcome experiment, but the empirical evaluation is still rather constrained, involving only synthetic data and two real-world datasets, CIFAR-10 and CelebA.\\n2. Dependence on Theoretical Assumptions: This aligns with previous research in that the analysis hinges on particular premises including restricted initialization and moderate step sizes.\\n3. Computational Overhead: Based on the data provided by the authors, it is possible to claim that the overhead is comparable to adaptive techniques like Adam, but more comparisons in the real training conditions would be beneficial.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you again for your time in reviewing our paper.\\n\\nIf our response has addressed your concerns, we would be grateful if you could re-evaluate our work. \\n\\nIf you have any additional questions or comments, we would be happy to have further discussions.\"}", "{\"summary\": \"This paper proposes to study the dynamics of a simplified one-layer attention network. The implicit bias of gradient descent for such a problem is already known, this paper proposes to study what happens when the training algorithm is mirror descent with p-norms. The paper extends previous results known for gradient descent to this setting, and then demonstrate that using mirror descent to train a transformer on a sentiment analysis dataset yields better generalization.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written\", \"It is quite easy to understand\", \"The fact that MD achieves sparser weights and leads to better generalization is interesting.\"], \"weaknesses\": [\"The motivation is quite unclear. \\u201cA broader understanding of general descent algorithms, including the mirror descent (MD) family and their token selection properties, is essential.\\u201d why? In practice, nobody trains attention layers with mirror descent. 
The observation that it works better than gradient descent is not very strong in my opinion, because in practice transformers cannot be trained with gradient descent either. In the experiments, it would be worthwhile to compare the proposed methods to the widely used adam algorithm. Also, providing train losses values for the real data experiment would be enlightening.\", \"There are many unclear things about global and optimal tokens (def.3). First, the fact that there is a feasible solution is very important, precise conditions for it to happen must be put in the paper. The authors simply state that \\u201cunder mild overparameterization d \\u2265 max{T \\u2212 1, n}, the problem is almost always feasible (Tarzanagh et al., 2023, Theorem 1).\\u201d These conditions must be put in the paper. Also, it is unclear whether the statement is \\u201cfor all $\\\\alpha_i$, the problem is feasible\\u201d or \\u201cthe problem is feasible for at least one set of $\\\\alpha_i$.\", \"I cannot understand why Theorem 1 does not require those conditions: clearly, $W^{opt}_{mm}$ must exist for it to hold, and there is nothing in the assumptions guaranteeing it.\", \"It is never properly stated in the paper if \\u201cglobally optimal tokens\\u201d are also \\u201clocally optimal\\u201d. It seems not to be the case, since globally optimal tokens depend on $v$ while locally optimal tokens do not. This also causes issues in thm 1: globally optimal tokens do not seem to be always feasible, so W^{opt}_{mm} is not always defined, while the lp-AttRp always has a solution. For instance, it could very well have a global solution of fixed norm.\"], \"questions\": [\"Can the authors clarify the questions above regarding feasibility?\", \"Can the authors compare the training curves with that of adam?\", \"The model sizes do not seem to be specified. 
The architectural details should be put in the appendix\", \"**Minor remarks and typos**\", \"L68: ERM refers to an equation much later in the manuscript\", \"L71: W(k) has not been introduced thus far\", \"L113: the softmax acts here on a matrix, but later in (2) it acts on vectors.\", \"L139: (Blair, 1985)\", \"L165: (Tarzanagh et al. (2024; 2023)).\", \"L165: the sense of nearby is unclear here.\", \"L169: it would be good to specify to which set the indices belong. Generally, it would be clearer to indicate in which space lives each newly introduced variable.\", \"L176: \\u201cthe \\u03b1i component has a significantly higher probability\\u201d : this statement should also involve the norm of W, right? Because if all values in the pre-softmax vector are high, then the probability vector will be almost constant.\", \"Eq 4b: typo W -> W\\u2019\", \"L280: $exp(2)$ could be $\\exp(2)$\", \"In eq.5, it would be good to recall that $D_\\psi$ depends on $p$.\", \"All figures should be in pdf format\", \"\\u201cIn the Appendix G, we show that W and v generated by \\u2113p-JointRP converge to their respective\\u2026\\u201d it would strengthen that section to put it in the main text.\", \"Fig4: a log scale would make things clearer, especially highlighting the practical convergence rates.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"I thank the authors for their response. It is nice to see that you can get improved sparsity at the same accuracy as Adam. I have the following suggestions:\", \"I believe this result warrants further exploration, as it now seems tangential to the main focus of the paper. 
It could strengthen the motivation on the benefits of the proposed algorithm compared to what is well-established in the field.\", \"Could the obtained sparsity be related to support vector sparsity in the SVM?\", \"At the same time, the current manuscript presents some exaggerated claims, such as \\\"we provide extensive numerical experiments on real and synthetic data [...] excelling in optimal token selection and suppressing non-optimal tokens\\\" in line 88, while the provided numerical experiments are not extensive and further evidence is required to claim that the proposed method is useful in token selection.\", \"Regarding previous work on analyzing attention through the lens of SVMs, I still encourage the authors to better explain the differences, as the cited works also deal with optimization problems. Of course, once the SVM attention formulation is fixed, one could devise multiple solving algorithms with different properties, but then it is unclear whether the contribution is specific to attention or rather a general algorithm applied to this specific case.\"]}
Comparisons with GD do not include other commonly used optimization algorithms like Adam, making it impossible to judge the relative merits of MD for attention training.\\n\\nWe have already provided real-data experiments on the Stanford Large Movie Review Dataset, showcasing the performance of $\\ell_{1.1}$-MD on a language-based task. These include **qualitative evaluations** (Figures 8 and 13) and **quantitative results** (Table 1 and Figure 9), demonstrating that $\\ell_{1.1}$-MD achieves competitive generalization and superior explainability compared to GD. \\n\\n**To address concerns about diversity, we have added experiments on Vision Transformers (ViTs) for a vision-based task (details in Appendix H5)**. Our new ViT experiment demonstrates that $\\ell_{1.1}$-MD achieves comparable test accuracy to Adam over the first 1000 epochs. Our result in Figure 11 confirms that $\\ell_{1.1}$-MD matches Adam's test accuracy, validating its effectiveness across both language and vision domains. In addition, Figure 12 provides a histogram of the weights in the models trained by Adam and $\\ell_{1.1}$-MD, showing a sparser weight distribution for the model trained with $\\ell_{1.1}$-MD. **This finding highlights that $\\ell_{1.1}$-MD is not only competitive but also offers additional advantages, particularly improved explainability, as discussed below.**
This aligns with the concept of feature selection in classical machine learning, where sparse and focused representations improve both interpretability and model robustness.\\n\\nIn our work, $\\\\ell_{1.1}$-MD demonstrates superior explainability compared to other gradient-based methods, such as standard (S)GD and Adam. Specifically, $\\\\ell_{1.1}$-MD produces sparser weight distributions and attention maps that more sharply highlight the most critical tokens. Figures 8 and 13 provide clear evidence of this property in the Stanford Large Movie Review Dataset. For instance, attention maps generated by $\\\\ell_{1.1}$-MD-trained models focus more on sentiment-revealing words, such as \\\"amazing\\\" or \\\"terrible,\\\" while models trained with GD display more diffuse attention patterns, potentially diluting interpretability. This ability to emphasize pivotal tokens directly contributes to the model's transparency and aligns with established literature emphasizing the importance of sparsity for interpretability. \\n\\nFurthermore, the weight distributions in the key, query, and value matrices, shown in Figure 9, highlight that $\\\\ell_{1.1}$-MD encourages sparsity more effectively than GD, while the weight distribution in Figure 12 shows that $\\\\ell_{1.1}$-MD also induces more sparsity compared to Adam. This sparsity enhances interpretability by limiting the model's reliance on non-essential tokens. By aligning the optimization process with explainability objectives, $\\\\ell_{1.1}$-MD offers practical benefits for applications where transparency is crucial [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)]. 
**Thus, $\\\\ell_{1.1}$-MD achieves comparable generalization performance to Adam (as shown in Figure 11), and its token selection precision and sparse representations establish it as an interpretable and explainable optimization method (as shown in Figures 8, 9, 12, and 13)**. These findings underscore the potential of using MD variants to improve both the performance and explainability of attention-based models.\"}", "{\"title\": \"Response to Reviewer vf65--Part 1\", \"comment\": \"> **Weakness 1-1:** The core idea of connecting attention optimization to SVM-like objectives is not new and has been explored in prior work, notably in \\\"A Primal-Dual Framework for Transformers and Neural Networks\\\" by Nguyen et al. and related papers. These prior works establish the fundamental link between attention and SVMs, including the optimization perspective. While this paper extends the analysis to mirror descent, the incremental contribution feels minimal and lacks motivation.\\n\\n**Response:** Thank you for the feedback. However, we respectfully disagree with the assertion that our contribution is minimal or lacks novelty. While Nguyen et al. (2023) explore the connection between attention mechanisms and support vector regression (SVR)-like objectives, their focus is fundamentally different from ours. Below, we outline why our contributions are not only distinct but also advance the state-of-the-art in attention optimization significantly:\\n\\n1. **We provide the optimization dynamics, implicit bias, and convergence rate analysis**: Nguyen et al. focus primarily on **static** primal-dual formulations for attention mechanisms, connecting self-attention to support vector regression (SVR) through a primal-dual formulation. **They do not examine optimization dynamics, the role of descent algorithms, or the implications of implicit bias in training**. Our work explicitly targets the optimization process of attention mechanisms under mirror descent (MD). 
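As a side note, the "sparser weight distributions" discussed above can be summarized with a single scalar; a minimal sketch using the Hoyer sparsity measure (the choice of metric here is illustrative on our part, not the one reported in the paper):

```python
import numpy as np

def hoyer_sparsity(w):
    """Hoyer sparsity in [0, 1]: 0 for uniform-magnitude weights, 1 for one-hot."""
    w = np.abs(np.ravel(w)).astype(float)
    n = w.size
    return (np.sqrt(n) - w.sum() / np.linalg.norm(w)) / (np.sqrt(n) - 1)

dense = np.ones(100)                    # perfectly uniform magnitudes
sparse = np.zeros(100); sparse[0] = 1.  # a single active weight
print(hoyer_sparsity(dense))   # 0.0
print(hoyer_sparsity(sparse))  # 1.0
```

Applied to the flattened key/query/value matrices, such a measure would turn the histogram comparisons in Figures 9 and 12 into a single number per optimizer.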
We provide a comprehensive analysis of convergence dynamics, implicit bias, and token selection properties, which are absent in prior work.\\n\\n2. **We show that different optimizers have different token selection properties**: Unlike Nguyen et al., we provide actionable insights into how different optimizers have different token selection properties, which in turn leads to different generalization performances, **even when training the same architecture on the same dataset**. \\n\\n3. Finally, we have added the following related works to the revised manuscript: \\n\\n \\\"*Nguyen et al. (2024) provided static primal-dual formulations for attention mechanisms, connecting self-attention to support vector regression (SVR) through a primal-dual framework. Nguyen et al. (2022) connects self-attention to kernel methods to enhance Transformers. Chen et al. (2024b) provided a novel attention mechanism that optimizes self-attention in Transformers using asymmetric Kernel Singular Value Decomposition (KSVD) in the primal representation, achieving improved efficiency and performance through low-rank approximations and regularization techniques. However, these works do not examine optimization dynamics, the role of descent algorithms, or the implications of implicit bias in training, which are the main focus of this work.*\\\"\", \"the_following_items_have_been_added_to_the_reference_list_in_the_revised_manuscript\": [\"*Nguyen et al. (2024)*: Nguyen, Tan Minh, Tam Nguyen, Nhat Ho, Andrea L. Bertozzi, Richard Baraniuk, and Stanley Osher. \\\"A Primal-Dual Framework for Transformers and Neural Networks.\\\" In The Eleventh International Conference on Learning Representations (ICLR), 2023. 2023.\", \"*Nguyen et al. (2022)*: Nguyen, Tam Minh, Tan Minh Nguyen, Dung DD Le, Duy Khuong Nguyen, Viet-Anh Tran, Richard Baraniuk, Nhat Ho, and Stanley Osher. \\\"Improving transformers with probabilistic attention keys.\\\" In International Conference on Machine Learning, pp. 
16595-16621. PMLR, 2022.\", \"*Chen et al. (2024b)*: Chen, Yingyi, Qinghua Tao, Francesco Tonin, and Johan Suykens. \\\"Primal-attention: Self-attention through asymmetric kernel svd in primal representation.\\\" Advances in Neural Information Processing Systems 36 (2024).\"]}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to Reviewer kVFQ\", \"comment\": \"> train loss values\\n\\n**Response:** We are sorry that we missed your request for the train loss values, we will add them for the final submission of the manuscript. We provide below the training loss for the Adam and $\\\\ell_{1.1}$-MD algorithm when they trained the ViT model on CIFAR-10, as we have described in our [previous rebuttal](https://openreview.net/forum?id=9M5georQ9T&noteId=m9s4Vi7SoU). It is in the form of a table in this response, showing the loss every 50 epochs, but we will graph it in the final version.\\n\\n| Epochs | Adam | $\\\\ell_{1.1}$-MD |\\n| -- | -- | -- |\\n| 50 | 0.00306 | 0.00847 |\\n| 100 | 0.00091 | 0.00484 |\\n| 150 | 0.00051 | 0.00230 |\\n| 200 | 0.00038 | 0.00092 |\\n| 250 | 0.00028 | 0.00048 |\\n| 300 | 0.00022 | 0.00034 |\\n| 350 | 0.00017 | 0.00022 |\\n| 400 | 0.00018 | 0.00019 |\\n| 450 | 0.00014 | 0.00015 |\\n| 500 | 0.00013 | 0.00014 |\\n| 550 | 0.00011 | 0.00011 |\\n| 600 | 0.00009 | 0.00007 |\\n| 650 | 0.00010 | 0.00006 |\\n| 700 | 0.00007 | 0.00007 |\\n| 750 | 0.00008 | 0.00006 |\\n| 800 | 0.00008 | 0.00006 |\\n| 850 | 0.00008 | 0.00005 |\\n| 900 | 0.00008 | 0.00005 |\\n| 950 | 0.00005 | 0.00006 |\\n| 1000 | 0.00008 | 0.00003 |\\n\\n> Hypotheses\\n\\n**Response:** We thank the reviewer for pointing out the need for greater clarity regarding the added assumption. 
To address this concern and ensure the assumption is explicitly stated, we have now included it in the manuscript using a formal mathematical assumption environment, as follows:\\n\\n\\\"\\n\\n**Assumption B** [Assumption on Token Separability]\\nFor each input sequence $X_i$ and its locally optimal token index $\\\\alpha_i$, there exists a matrix $W$ such that, for all non-optimal tokens $t \\\\in [T] \\\\setminus \\\\{\\\\alpha_i\\\\}$,\\n$$\\n(X_{i\\\\alpha_i} - X_{it})^\\\\top W z_i \\\\geq 1, \\\\quad \\\\forall i \\\\in [n].\\n$$\\n\\n\\\"\\n\\nWe will add this assumption into the statements of Theorems 1\\u20135 to make the assumption more explicit. \\n\\nRegarding the overparameterization assumption, we respectfully disagree, as various forms of this assumption--where the number of trainable parameters exceeds the size of the dataset--are commonly employed across the optimization literature [[Tarzanagh et al.](https://arxiv.org/abs/2308.16898), [Azizan et al.](https://arxiv.org/abs/1906.03830), [Allen-Zhu et al.](https://proceedings.mlr.press/v97/allen-zhu19a.html)]. Specifically, our assumption is identical to that in [Tarzanagh et al.](https://arxiv.org/abs/2308.16898), which discusses overparameterization for attention models further in detail (Refer to Sections 4.1 and 4.2 of that paper).\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you again for your time in reviewing our paper.\\n\\nIf our response has addressed your concerns, we would be grateful if you could re-evaluate our work. \\n\\nIf you have any additional questions or comments, we would be happy to have further discussions.\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for your rebuttal, and I'm sorry for my lack of reactivity. \\n\\n> train loss values\\n\\nUnless I have missed it, there is no train loss curve in the paper, while this would really broaden the picture of the paper. 
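To make Assumption B concrete, here is a small numerical check of the separability condition on a toy instance (the data, the direction $v$, and the particular $W$ below are our own illustrative constructions, not examples from the paper):

```python
import numpy as np

def satisfies_token_separability(X, z, alpha, W, margin=1.0):
    """Check (X_alpha - X_t)^T W z >= margin for every non-optimal token t."""
    scores = X @ W @ z                            # one pre-softmax score per token
    gaps = scores[alpha] - np.delete(scores, alpha)
    return bool(np.all(gaps >= margin))

X = np.array([[2., 0., 0., 0.],                   # token 0: the optimal token
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
z = np.ones(4)
v = np.array([1., -1., -1., 0.])                  # desired score direction
W = np.outer(v, z) / (z @ z)                      # constructed so that W @ z == v
print(satisfies_token_separability(X, z, 0, W))                  # True
print(satisfies_token_separability(X, z, 0, np.zeros((4, 4))))   # False
```

The second call shows that the condition is a genuine restriction on $W$, which is why feasibility (and the overparameterization $d \geq \max\{T-1, n\}$) matters.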
This paper studies an optimization algorithm that minimizes a function; it would be good to see how it does in practice. Would it be possible to add one next to the new Figure 11?\\n\\n> Hypotheses \\n\\nI am still confused by the phrasing around the added assumption: is it \\\"for all $\\alpha_i$, there exists W such that ...\\\"? Then, in my opinion, this should be written in the text.\\n\\nThe mild overparameterization hypothesis is quite strong in my opinion; the condition that $d\\geq n$ is quite unrealistic in modern scenarios, and I would emphasize this more in the text. However, it seems like a standard assumption.\"}
By showcasing these distinctions, our work paves the way for developing and understanding optimization algorithms that better align with the unique demands of attention layers.\\n\\n\\n- While **MD is not yet a widely adopted method for training transformers, it has been extensively studied for its implicit bias properties in standard deep neural networks**; see Section 1.2 in [[Sun et al.](https://www.jmlr.org/papers/v24/23-0836.html), [Azizan et al.](https://arxiv.org/abs/1906.03830), [Azizan et al.](https://openreview.net/pdf?id=HJf9ZhC9FX), [Gunasekar et al.](https://arxiv.org/abs/1802.08246)]. Notably, many adaptive algorithms, including AdaGrad (please refer to Section 1.1 in [[Duchi et. al](https://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)] regarding connection between AdaGrad and MD), can be viewed as variants of MD with different potential functions. This connection underscores the broader applicability of MD-inspired methods in attention optimization. Our work provides a framework for understanding how MD with $\\\\ell_p$-regularization influences attention-based models, which could inform future designs of adaptive optimizers leveraging MD principles for better interpretability and generalization.\\n\\n\\n- To provide additional practical context, we have included new experiments (Appendix H5) **comparing $\\\\ell_{1.1}$-MD with Adam on Vision Transformers**. These experiments (Figure 11) demonstrate that $\\\\ell_{1.1}$-MD achieves comparable test accuracy to Adam while offering improved explainability. Specifically, as discussed in **response to Weakness 2**, we provide detailed qualitative (Figures 8 and 13) and quantitative (Figure 9 and 12) evaluations of the explainability benefits of $\\\\ell_{1.1}$-MD. 
Specifically, $\\\\ell_{1.1}$-MD produces sparser weight distributions and sharper attention maps, focusing on more critical tokens, which aligns with its potential to enhance both interpretability and performance in real-world scenarios; Please refer to our detailed response to Weakness 2 and discussion on explainability of MD.\\n\\n> **Question 1:** What does the comparison token $z_i$ mean in practice (Line 55)? For example, for a real-world application problem, how to set the comparison token $z_i$ given the input $X$ and label $y$?\\n\\n**Response:** Thank you for the question. In practice, there are two common ways to use attention: self-attention and cross-attention. \\n\\nIn cross-attention, $z_i$ is typically a token in the decoder module of the model, which is usually responsible for generating the output sequence in a sequence-to-sequence model, while $X_i$ is the sequence of input tokens to the model. We did not use cross-attention in our experiment on real datasets, but we did do so for the synthetic dataset experiment, where $z_i$ is randomly generated, independently from $X_i$.\\n\\nSimilarly, in self-attention, the token $z_i$ can represent any individual token or all tokens within the sequence $X_i$. In our experiment on the Stanford Movie Review dataset classification task and the CIFAR-10 classification task (vision transformer), $z_i$ is replaced with a matrix and it equals $X_i$.\\n\\nWe added this detail in Appendix H4.\"}", "{\"summary\": \"This paper investigates the optimization dynamics of mirror descent (MD) for training attention mechanisms, specifically focusing on $l_p$-AttGD. The authors claim that $l_p$-AttGD converges directionally to a generalized hard-margin SVM with an $l_p$ norm objective when applied to binary classification with a single-layer attention model. 
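To make the $\ell_p$-MD update discussed above concrete, here is a minimal sketch of a single mirror descent step, assuming the potential $\psi(w) = \frac{1}{p}\|w\|_p^p$ (the $1/p$ scaling is our convention for readability; this illustrates the mechanism rather than reproducing our exact implementation):

```python
import numpy as np

def lp_md_step(w, grad, eta, p):
    """One mirror descent step with potential psi(w) = (1/p) * ||w||_p^p.
    The mirror map grad psi(w) = sign(w)|w|^(p-1) is inverted componentwise."""
    theta = np.sign(w) * np.abs(w) ** (p - 1)                  # primal -> dual
    theta = theta - eta * grad                                 # gradient step in the dual
    return np.sign(theta) * np.abs(theta) ** (1.0 / (p - 1))   # dual -> primal

w = np.array([0.5, -0.3, 1.0])
g = np.array([0.1, 0.2, -0.4])
# Sanity check: p = 2 makes the mirror map the identity, recovering plain GD.
print(np.allclose(lp_md_step(w, g, 0.01, p=2.0), w - 0.01 * g))  # True
```

For $p$ close to 1 (e.g., $p = 1.1$), the inverse map raises dual coordinates to the power $1/(p-1) = 10$, which shrinks small coordinates aggressively; this is the mechanism behind the sparser solutions reported above.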
Some experiments on synthetic and real data are presented.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper attempts to provide a theoretical analysis of mirror descent for attention training, extending prior work focused on gradient descent. It derives convergence results to a generalized hard-margin SVM and establishes convergence rates. The use of $l_p$ norms offers a degree of generality in the theoretical analysis.\", \"weaknesses\": \"1. The core idea of connecting attention optimization to SVM-like objectives is not new and has been explored in prior work, notably in \\\"A Primal-Dual Framework for Transformers and Neural Networks\\\" by Nguyen et al. and related papers. These prior works establish the fundamental link between attention and SVMs, including the optimization perspective. While this paper extends the analysis to mirror descent, the incremental contribution feels minimal and lacks motivation. Other core ideas of analyzing the implicit bias of GD/MD algorithms for softmax attention is already present in the cited works (Tarzanagh et al., 2023, 2024; Vasudeva et al., 2024a; Sheen et al., 2024) without fundamentally changing the nature of the problem or leading to significantly different insights.\\n\\n2. The synthetic experiments are too simplistic and lack the complexity needed to represent realistic attention training scenarios. The real-data experiments, while showing some improvements in generalization, are insufficient to support the claims. The demonstration of improved token selection is based on a handful of examples (Figure 8) without any rigorous quantitative evaluation. 
Comparisons with GD do not include other commonly used optimization algorithms like Adam, making it impossible to judge the relative merits of MD for attention training.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer kVFQ--Part 3\", \"comment\": \"> **Question 3:** The model sizes do not seem to be specified. The architectural details should be put in the appendix.\\n\\n**Response:** Thank you for your feedback. \\n\\n**We have added the model architecture details for both the semantic analysis model in Appendix H4.** The architecture for the model used to perform the semantic analysis task on the Stanford IMDb Movie Review dataset follows the transformer encoder architecture [[Vaswani et al.](https://arxiv.org/abs/1706.03762)], with a linear classifier as the last layer.\\n\\nThe embedding layer has trainable token embedding $E$ and position encoding $P$. The model's vocabulary size is $30522$ with maximum token length of $512$ and embedding dimension $384$, so $E\\\\in\\\\mathbb{R}^{30522\\\\times384}$ and $P\\\\in\\\\mathbb{R}^{512\\\\times384}$. If a token $t$ is in position $i$, its embedding will be $X_i=E_t+P_i$, where $E_t$ and $P_i$ denote the $t^{th}$ and $i^{th}$ rows of $E$ and $P$, respectively.\\n\\nThen, the token features are passed through the encoding blocks, each of which consists of a multi-head self-attention layer $\\\\text{MultiHead}$, two layer normalization layers $\\\\text{LayerNorm}_1$ and $\\\\text{LayerNorm}_2$, and a Multilayer Perceptron layer (MLP) layer $\\\\text{MLP}$. 
If the sequence of input token features for the encoding block are $X_1,...,X_T$, and if we denote $\\\\text{MultiHead}(X_1,...,X_T)_i$ as the $i^{th}$ token feature from the multi-head self-attention, the output of the encoding block for the $i^{th}$ token is $\\\\text{LayerNorm}_2(X_i\\u2019+\\\\text{MLP}(X_i\\u2019))$, where $X_i\\u2019=\\\\text{LayerNorm}_1(X_i+\\\\text{MultiHead}(X_1,...,X_T)_i)$.\\n\\nWe experimented with having 3 encoding blocks with 3 attention heads each, 4 encoding blocks with 4 attention heads each, and 6 encoding blocks with 6 attention heads each. We then pass the feature vector of the first [CLS] token from the last encoding layer into a final linear classifier layer. For training this model, we applied a dropout of $0.2$.\\n\\n**For the newly added experiments using the Vision Transformer (ViT) architecture for the CIFAR-10 classification task, the model details are provided in Appendix H5**. Specifically, we ran an additional experiment to compare the $\\\\ell_{1.1}$-MD training algorithm with the Adam algorithm. The ViT architecture consisted of a patch size of $4$, $512$-dimensional token features, $6$ attention blocks with $8$ attention heads per block, and a two-layer GeLU network for the final classification layer on the [CLS] patch token, with a hidden layer size of $512$. The embedding layer followed the implementation of [[Dosovitskiy et al.](https://arxiv.org/abs/2010.11929)], learning [CLS] token embeddings, a linear map for image patches, and positional embeddings for patch positions. Key attention layer details mirrored the Stanford IMDb model, with layer normalization added before multihead attention and MLP layers, and a $0.1$ dropout during training.\\n\\nThe experiment results show that $\\\\ell_{1.1}$-MD matches Adam's test accuracy (Fig. 11), achieving SOTA performance with added benefits of sparsity and interpretability (Fig. 12).\\n\\n\\n**Minor remarks and typos**\\n\\n**Response:** Thank you for your remarks. 
Most concerns have been addressed in the revised version, with a few exceptions:\\n\\n- For L71, $W(k)$ has been introduced in the second paragraph of the introduction, though we have revised that part to be clearer.\\n\\n- In L176, we respectfully disagree because even when the pre-softmax values are large, $(\\sigma(X_iWz_i))_{\\alpha_i}$ can still be significantly larger than the other components $(\\sigma(X_iWz_i))_t$, for all $t\\neq\\alpha_i$.\\n\\nSpecifically, even when the norm of $W$ is large, when we fulfill the condition $(X_{i\\alpha_i}-X_{it})^\\top Wz_i\\geq1$ for all $t\\neq\\alpha_i$ as in the paper, we would have $(X_iWz_i)_{\\alpha_i}-(X_iWz_i)_t\\geq1$. Per the definition of the softmax function, if we denote $s=\\sigma(X_iWz_i)$, then for any $t'\\in[T]$, we have\\n\\n$$s_{t'}=\\frac{\\exp((X_iWz_i)_{t'})}{\\sum_{t=1}^{T}\\exp((X_iWz_i)_{t})}.$$\\n\\nIn the above equation, by substituting $t'$ with $\\alpha_i$ and $t$, and dividing $s_{\\alpha_i}$ by $s_t$, we get\\n\\n$$s_{\\alpha_i}/s_t=\\exp((X_iWz_i)_{\\alpha_i}-(X_iWz_i)_t)\\geq\\exp(1),$$\\n\\nwhich implies that\\n\\n$$s_{\\alpha_i}\\geq s_t\\cdot\\exp(1),$$\\n\\nfor all $t\\neq\\alpha_i$.\\n\\n- For Appendix G, as discussed in the motivation of this paper, our main goal is to demonstrate that different optimizers have distinct token selection properties, which is the primary focus of Theorems 1\\u20134. We believe that the joint analysis presented in Appendix G is a direct extension of these theorems and builds on the results from Tarzanagh et al. (2023, 2024). To ensure our main contributions (Theorems 1\\u20134) are presented explicitly with sufficient discussion and detail in the main text, we decided to keep the joint analysis in Appendix G. 
This allows us to maintain clarity and focus on the core contributions of the paper.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank the authors for the rebuttal. I do not have any further questions at this point.\"}", "{\"summary\": \"This paper examines how mirror descent (MD) algorithms approach convergence and exhibit bias when optimizing attention mechanisms in softmax attention, with potential functions using $\\\\ell_p$-norms as a basis for analysis. Key theoretical findings include demonstrating convergence towards hard margin Support Vector Machine (SVM) solutions and achieving convergence rates similar to gradient descent, in more straightforward models. The research builds upon findings related to descent by expanding the scope to encompass a wider range of optimization algorithms and shedding light on properties related to token selection.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper has a strong theoretical foundation, it provides rigorous mathematical analysis and proofs for the convergence properties of mirror descent in attention optimization, extending previous work on gradient descent to a more general framework.\\n2. This paper provides a novel algorithmic insight, the introduction of $\\\\ell_p$-AttGD generalizes both $\\\\ell_p$-GD and attention GD, offering new perspectives on attention optimization and token selection.\\n3. This work provides a complete theoretical treatment of the optimization dynamics by examining both fixed-head and joint optimization scenarios.\", \"weaknesses\": \"1. The empirical evaluation is limited, the paper includes experiments, and they are primarily focused on synthetic data and a single real-world dataset (Stanford Large Movie Review Dataset). More diverse real-world applications would strengthen the practical implications.\\n2. 
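The softmax ratio bound in the response above is easy to verify numerically; a quick sketch with illustrative scores that satisfy the unit-margin condition (the score vector is ours, chosen only to have a gap of at least 1 between entry 0 and all others):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())  # shift for numerical stability
    return e / e.sum()

# Scores where entry 0 exceeds every other entry by at least 1,
# i.e. (X_i W z_i)_alpha - (X_i W z_i)_t >= 1 holds with alpha = 0.
scores = np.array([10.0, 9.0, 8.5, 7.0])
s = softmax(scores)
for t in range(1, len(s)):
    assert s[0] >= np.exp(1.0) * s[t] - 1e-12  # s_alpha >= e * s_t
print("margin bound holds")
```

The bound is tight for the second entry (gap exactly 1), which matches the equality case $s_{\alpha_i}/s_t = \exp(1)$ in the derivation.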
Theoretical results are highly dependent on assumptions: Theorem 2, Theorem 3, and Theorem 4 rely on specific assumptions about initialization and step sizes, which may limit their practical applicability.\\n3. The paper does not thoroughly address the computational overhead of implementing mirror descent compared to standard gradient descent.\", \"questions\": \"Please refer to the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7Bsg--Part 2\", \"comment\": \"> **Weakness 2:** Theoretical results are highly dependent on assumptions. Theorem 2, Theorem 3, and Theorem 4 rely on specific assumptions about initialization and step sizes, which may limit their practical applicability.\\n\\n**Response:** Thank you for your comment. Below, we provide detailed clarification for these assumptions:\\n\\n**The step size condition** aligns with well-established assumptions in the implicit bias literature, particularly for first-order optimization algorithms like gradient descent and mirror descent. Works such as Soudry et al. (2018), Gunasekar et al. (2018), Ji \\& Telgarsky (2018), Azizan \\& Hassibi (2018), and Sun et al. (2022; 2023) have similar assumptions and highlight the need for a sufficiently small step size to guarantee convergence. Of note, our step size condition for MD is identical to that in the aforementioned papers. Thus, the step size assumption is not an additional restriction but rather a foundational aspect of theoretical analyses, especially in MD analysis.\\n\\n**The initialization assumption**, specifically $W(0)$ being constrained within the cone defined in Definition 5, is a distinct and critical feature of this work compared to broader implicit bias studies (Soudry et al. (2018), Gunasekar et al. 
(2018), Ji \\& Telgarsky (2018), Azizan \\& Hassibi (2018), and Sun et al. (2022; 2023)). Unlike traditional implicit bias studies that often assume broader initializations, the localized initialization considered here is essential for studying convergence in the structured optimization problems encountered in attention mechanisms. Inspired by prior work such as Tarzanagh et al. (2023, 2024), this assumption allows the analysis to focus on the local convergence properties of the parameters towards the desired local max-margin solutions. The necessity of this assumption is further corroborated by Tarzanagh et al. (2024), which establishes that, without initialization within this constrained region, even more robust optimization methods utilizing full gradient information would fail to converge to the max-margin solution.\\n\\n**Assumption A**, which stipulates that the loss function must be strictly decreasing, differentiable, and have a bounded, Lipschitz continuous derivative, is a standard assumption in optimization theory and implicit bias studies. Widely-used loss functions, such as $l(x) = e^{-x}$, $l(x) = -x$, and $l(x) = \\log(1 + e^{-x})$, satisfy this assumption. These functions are not only theoretically convenient but also form the backbone of practical applications in machine learning, particularly in classification and attention-based tasks. \\n\\nThis assumption aligns with the framework of gradient descent dynamics analyzed in works such as Soudry et al. (2018) and Ji \\& Telgarsky (2018). It also supports the analysis of mirror descent (MD) algorithms, as demonstrated in Gunasekar et al. (2018), Azizan \\& Hassibi (2018), and Sun et al. (2022; 2023). 
Thus, Assumption A is neither restrictive nor limiting but rather a well-justified component of the theoretical framework.\"}", "{\"summary\": \"The paper introduces a novel approach to optimizing attention mechanisms using Mirror Descent (MD), specifically focusing on a generalized max-margin token selection strategy for softmax attention models. The authors propose a family of MD algorithms, termed $\\\\ell_{p}$-AttGD, which generalize traditional gradient descent by using the $p$-th power of the $\\\\ell_p$-norm as the potential function. The main contributions include proving the convergence of these algorithms to generalized max-margin Support Vector Machine (SVM) solutions for optimizing the attention mechanism, both for fixed and jointly optimized parameters (key-query matrix and decoder).\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Theoretical Contributions: The paper provides a solid theoretical foundation for understanding the convergence properties and implicit bias of MD in attention models. The extension to $\\\\ell_p$-norm objectives adds flexibility in modeling and opens up new avenues for optimizing attention mechanisms.\\n\\n2. Generalization of Attention Optimization: The approach generalizes previous work on attention models by using MD with a broad class of potential functions, allowing a deeper exploration of the optimization landscape beyond vanilla gradient descent.\", \"weaknesses\": \"1. As far as I know, mirror descent is not a popular optimization algorithm for training deep learning models. I agree that a simplified model (e.g., the one-layer model considered in this paper and previous work) could provide valuable insights into understanding transformers, but it is not clear what the role/implication of the $\\\\ell_p$-norm is for deep learning. If possible, it would be helpful if the authors could highlight a few practical mirror descent-based optimizers in the revision.\\n\\n2. 
For Line 431, for the $\\ell_{1,1}$/$\\ell_3$ optimizer, the authors provide code for implementing the $\\ell_{1,1}$/$\\ell_3$ optimizer for optimizing the transformer. In terms of efficiency, would the $\\ell_{1,1}$/$\\ell_3$ optimizer be as efficient as popular optimizers like AdamW [LH2019]? Could the authors provide a comparison with AdamW in Table 1?\\n\\n[LH2019] Decoupled Weight Decay Regularization. Ilya Loshchilov, Frank Hutter.\", \"questions\": \"1. What does the comparison token $z_i$ mean in practice (Line 55)? For example, for a real-world application problem, how should the comparison token $z_i$ be set given the input $X$ and label $y$?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer HzdV--Part 2\", \"comment\": \"> **Weakness 2:** For Line 431, for the $\\\\ell_{1,1}/\\\\ell_3$ optimizer, the authors provide code for implementing the $\\\\ell_{1,1}/\\\\ell_3$ optimizer for optimizing the transformer. In terms of efficiency, would the $\\\\ell_{1,1}/\\\\ell_3$ optimizer be as efficient as popular optimizers like AdamW [LH2019]? Could the authors provide a comparison with AdamW in Table 1?\\n\\n**Response:** Thank you for your comment. As this was a common question among reviewers, we considered providing a comparison with standard Adam from the adaptive momentum family of algorithms. Specifically, we provide a comparison between the best-performing MD algorithm in our paper, $\\\\ell_{1.1}$-MD, and the conventional Adam algorithm. We have added an experiment in Appendix H5 to further validate the performance of $\\\\ell_{1.1}$-MD. This experiment involves training a Vision Transformer (ViT) model with $\\\\ell_{1.1}$-MD and Adam over the first 1000 epochs. 
The results in Figure 11 show that $\\ell_{1.1}$-MD achieves similar test accuracies to Adam, demonstrating that it can match the performance of state-of-the-art optimizers for transformer models. This finding highlights that $\\ell_{1.1}$-MD is not only competitive but also offers additional advantages, particularly improved explainability, as elaborated below.\\n\\nExplainability in attention mechanisms is often defined as the model\\u2019s ability to identify and prioritize the most influential tokens in the input sequence, thereby making its decision-making process more interpretable [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)]. This aligns with the concept of feature selection in classical machine learning, where sparse and focused representations improve both interpretability and model robustness. In our work, $\\ell_{1.1}$-MD demonstrates superior explainability compared to other traditional optimization methods. Specifically, $\\ell_{1.1}$-MD produces sparser weight distributions and attention maps that more sharply highlight the most critical tokens. Figures 8 and 13 provide clear evidence of this property in the Stanford Large Movie Review Dataset. For instance, attention maps generated by $\\ell_{1.1}$-MD-trained models focus more on sentiment-revealing words, such as \\\"amazing\\\" or \\\"terrible,\\\" while models trained with GD display more diffuse attention patterns, potentially diluting interpretability. 
This ability to emphasize pivotal tokens directly contributes to the model's transparency and aligns with established literature emphasizing the importance of sparsity for interpretability [[Sun et al.](https://www.jmlr.org/papers/v24/23-0836.html), [Azizan et al.](https://arxiv.org/abs/1906.03830), [Azizan et al.](https://arxiv.org/pdf/1806.00952)].\\n\\nFurthermore, the weight distributions in the key, query, and value matrices, shown in Figure 9, highlight that $\\\\ell_{1.1}$-MD encourages sparsity more effectively than GD, while the weight distributions in Figure 12 shows that $\\\\ell_{1.1}$-MD also induces more sparsity compared to Adam. This sparsity enhances interpretability by limiting the model's reliance on non-essential tokens. By aligning the optimization process with explainability objectives, $\\\\ell_{1.1}$-MD offers practical benefits for applications where transparency is crucial [[Klein et al.](https://arxiv.org/abs/2409.16756), [Ali et al.](https://arxiv.org/abs/2403.01590), [Abnar et al.](https://arxiv.org/abs/2005.00928)]. Thus, **while $\\\\ell_{1.1}$-MD achieves comparable generalization performance to Adam (as demonstrated in Figure 11), its token selection precision and sparser representations establish it as a more interpretable and explainable optimization method (as shown in Figures 8, 9, 12, and 13)**. These findings underscore the potential of using MD variants to improve both the performance and explainability of attention-based models.\"}", "{\"comment\": \"We appreciate the reviewers' feedback, which has greatly improved our paper.\\n\\nWe understand that the response period has concluded and sincerely hope our responses have addressed all concerns. If so, we kindly request a re-evaluation of our work.\\n\\nBest,\\n\\nAuthors\"}" ] }
9LdJDU7E91
IRIS: LLM-Assisted Static Analysis for Detecting Security Vulnerabilities
[ "Ziyang Li", "Saikat Dutta", "Mayur Naik" ]
Software is prone to security vulnerabilities. Program analysis tools to detect them have limited effectiveness in practice due to their reliance on human labeled specifications. Large language models (or LLMs) have shown impressive code generation capabilities but they cannot do complex reasoning over code to detect such vulnerabilities especially since this task requires whole-repository analysis. We propose IRIS, a neuro-symbolic approach that systematically combines LLMs with static analysis to perform whole-repository reasoning for security vulnerability detection. Specifically, IRIS leverages LLMs to infer taint specifications and perform contextual analysis, alleviating needs for human specifications and inspection. For evaluation, we curate a new dataset, CWE-Bench-Java, comprising 120 manually validated security vulnerabilities in real-world Java projects. A state-of-the-art static analysis tool CodeQL detects only 27 of these vulnerabilities whereas IRIS with GPT-4 detects 55 (+28) and improves upon CodeQL's average false discovery rate by 5% points. Furthermore, IRIS identifies 4 previously unknown vulnerabilities which cannot be found by existing tools. IRIS is available publicly at https://github.com/iris-sast/iris.
[ "Neuro-Symbolic", "Program Analysis", "Security Vulnerability", "LLM" ]
Accept (Poster)
https://openreview.net/pdf?id=9LdJDU7E91
https://openreview.net/forum?id=9LdJDU7E91
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yuuTXNKKSs", "yhD7trZcyI", "r0Ff5aA9z7", "lwr7PR9V3m", "if8nqpSnwp", "hOjnCcBI4g", "h57hTfzcpj", "ednWa2RZNS", "UjCwCiDAqk", "R7JUTaLyD1", "Qn2nvFmvLE", "IhJZmfd5YT", "Fbte1sfX5E", "DZBHiJqfKD", "9lPyLU9Rzg", "6kLqaoDCVY" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review" ], "note_created": [ 1732639991496, 1733164011734, 1732337734210, 1730695140644, 1734756733556, 1729835224025, 1732337357193, 1732639834485, 1730681919886, 1732337202228, 1732639974994, 1732661670828, 1732337087166, 1732337375419, 1737523751470, 1730673620161 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Reviewer_EvMB" ], [ "ICLR.cc/2025/Conference/Submission6217/Area_Chair_CrDA" ], [ "ICLR.cc/2025/Conference/Submission6217/Reviewer_nTbW" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Reviewer_j2y3" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Reviewer_3b2y" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Submission6217/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6217/Reviewer_3b2y" ] ], "structured_content_str": [ "{\"comment\": \"We appreciate your valuable comments on our paper. We have prepared a rebuttal and tried our best to address your concerns. 
We are willing to answer any unresolved or further questions that you may have regarding our rebuttal if time allows. If our rebuttal has addressed your concerns, we would appreciate it if you would let us know your final thoughts. Additionally, we will be happy to answer any further questions regarding the paper. Thank you for your time and consideration.\"}", "{\"comment\": \"As the rebuttal period concludes, we would like to summarize the key improvements and discussions regarding our submission.\\n\\nReviewers j2y3, 3b2y, and nTbW unanimously acknowledge the importance and challenge of whole-repository security vulnerability detection. They agree that IRIS, leveraging larger LLMs, significantly outperforms prior methods in both the number of vulnerabilities detected and overall F1 scores. Additionally, all reviewers commend the clarity and detail of our presentation. In response to specific requests for clarification on design decisions (e.g., context window size, types of dataflow nodes, and few-shot/zero-shot strategies), we provided detailed responses during the rebuttal and have incorporated these clarifications into the manuscript.\\n\\nTo address reviewer EvMB's concerns, we clarified that our approach does estimate false positives, as is essential for whole-repository vulnerability detection. Our contextual analysis significantly reduces the false discovery rate. We further clarify that a pure LLM-based approach will face challenges with extremely large repositories (up to 7M lines of code), leading to substantial computational and financial costs.\\n\\nRegarding reviewer EvMB's comment on the number of LLMs evaluated, we conducted additional experiments during the rebuttal period with two mid-sized LLMs\\u2014Gemma-2-27B and Qwen-2.5-Coder-32B-Instruct. Both models outperformed CodeQL in detecting security vulnerabilities, as evidenced by our metrics.\\n\\nWe hope our rebuttal has addressed most of your questions. 
Please let us know if there are any final comments that we can address.\"}", "{\"comment\": \"> **Q1: \\u201cCan any new package or Java module not processed by the LLMs cause detection or specification generation issues?\\u201d**\\n\\nWe acknowledge that LLMs may not have prior exposure to external APIs from private projects. However, they can still provide best-effort guesses based on the method information available, which may include JavaDoc documentation. This is a significant improvement over traditional methods, which rely solely on human labels and would be completely ineffective in such scenarios. That said, we acknowledge that there may be better methodologies for generalization to unseen cases, and we aim to explore these in future work.\\n\\n> **Q2: \\u201cwhat are all the possible values for the type of node to match in G and what is the upper bound for N from the N-tuple of F\\u201d**\\n\\nThe type of nodes could be\\n\\n- `argument[i]`, denoting the i-th argument of an external function call\\n- `argument[this]`, denoting the \\u201cthis\\u201d argument in an external function call (e.g. `parser` in the call `parser.parse(arg1, arg2)`)\\n- `parameter[i]`, denoting the i-th parameter of an internal function definition\\n- `parameter[this]`, denoting the \\u201cthis\\u201d argument of an internal function definition\\n- `return_value`, denoting the return value of an external function call\\n\\nThe upper bound for N corresponds to the maximum number of arguments of a given API, which, in our CWE-Bench-Java, is 32.\\n\\n> **Q3: \\u201cin the event that CodeQL does not detect a vulnerability even with correctly labeled specifications, will it be marked as negative? Can you give me any statistics on how many such instances there are? If I understand correctly, CodeQL is still a critical part of the vulnerability detection pipeline. 
LLM just ensures that the data fed into the static analyzer is of quality, and once detection is made, LLM helps weed out false positives?\\u201d**\\n\\nIf taint specifications are all correct and the data-flow is capable of modeling the vulnerability, then CodeQL **will** be able to detect a vulnerability. If the data-flow cannot model the vulnerability, CodeQL will not be able to detect the vulnerability, and the result **will** be marked as **negative**. Therefore, CodeQL is still a critical part of the vulnerability detection pipeline.\\n\\nIn our dataset, we observe at least 12 such cases where normal taint data-flow cannot model the vulnerability. However, the general statistics relevant to our evaluation are very hard to retrieve. This is due to the missing ground-truth source and sink labels as well as the sheer size of our projects.\\n\\n> **Q4: \\u201cHow did they come up with the \\\"\\u00b15\\\" value? Can it be arbitrarily large, bounded by the LLM's token limitation?\\u201d**\\n\\nIn our experiments, we chose \\u00b15 as a balanced approach to provide sufficient context while maintaining performance and cost. While it is technically possible to use a larger window that encompasses the entire function or even the class definition, we found that too much context can overwhelm the LLM, leading to reduced accuracy. Furthermore, increasing the context size substantially raises computational costs\\u2014especially given the large number of candidate APIs and paths that must be queried.\\n\\nLooking ahead, we plan to explore more targeted methods for context selection. Instead of using a fixed number of surrounding lines, we could incorporate only the most relevant variable definitions, class and function signatures, and other critical elements. 
However, achieving this balance between including relevant information and maintaining manageable context sizes will require further investigation, which we leave for future work.\\n\\n> **Q5: \\u201cHas it ever happened that LLM made a mistake and marked a positive case as negative even after the positive detection of a vulnerable path by CodeQL?\\u201d**\\n\\nYes, this occurs occasionally. For example, there was a path traversal vulnerability (CWE-22) caused by an insufficient Regex pattern used to sanitize user input. While CodeQL successfully reported this path, the LLM failed to recognize the vulnerability during contextual analysis. This was because the Regex pattern was defined as a static global variable outside the provided context, making it inaccessible to the LLM. As a result, the LLM incorrectly assumed the input was properly sanitized and flagged the path as non-vulnerable.\\n\\nAs mentioned in our response to Q4, we plan to explore a more targeted method for context retrieval, to improve the precision of contextual analysis.\"}", "{\"summary\": \"This paper presents a hybrid approach that combines static analysis (CodeQL) with LLMs to detect vulnerabilities. Specifically, IRIS uses LLMs to find taint specification of external APIs and use CodeQL to compute paths as context and feed into the prompts to LLMs. The authors evaluated their work on 120 manually validated Java vulnerabilities. The results show that their approach can detect 56 vulnerabilities, 28 more than the ones reported by CodeQL\", \"soundness\": \"2\", \"presentation\": [\"How subset of paths are selected when paths are too long for the prompt? It's not clearly presented\", \"We typically don't call such an approach neural-symbolic\"], \"contribution\": \"- This paper makes incremental contributions \\n(1) Using LLMs for getting taint information. 
Typically in static analysis, we provide a list of taint APIs; this can be done precisely by domain experts.\\n(2) Using paths reported by CodeQL as context when prompting LLM. There has been work that uses the output of static analysis tools like Infer as prompts\", \"strengths\": [\"IRIS demonstrated improved results over CodeQL\", \"IRIS found unknown vulnerabilities\", \"This paper is well written and contains details.\", \"It's interesting to learn that the LLMs can infer taint specifications with 70% accuracy\"], \"weaknesses\": \"Soundness:\", \"this_paper_contains_a_problematic_evaluation\": [\"The authors curated 120 examples. All of them are labeled as vulnerable. The results focus on vulnerabilities detected. The false positive rates are not evaluated?\", \"There are many vulnerability datasets available, such as PrimeVul, Sven datasets. Why do you not run your tool on other datasets?\", \"The baselines of CodeQL (QL), Infer (Infer), Spotbugs (SB), and Snyk are all static analysis tools? Should you also compare any AI-based approaches? e.g., what about feeding the entire function into a prompt and seeing how LLMs perform?\", \"only 5 models are evaluated\"], \"questions\": [\"How are the subset of paths selected when paths are too long?\", \"There are many vulnerability datasets available, such as PrimeVul, Sven datasets. Why do you not run your tool on these datasets?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes IRIS, a neuro-symbolic approach for security vulnerability detection that integrates large language models with static analysis. Overall, it is a borderline submission with mixed reviews. 
While most reviewers provide positive feedback, reviewer EvMB holds a differing opinion.\", \"the_main_concerns_raised_include\": \"(1) Problematic evaluation (e.g., all data labeled as vulnerable, absence of a false positive rate);\\n(2) Lack of experiments on existing datasets;\\n(3) Comparison with AI-based approaches;\\n(4) Presentation issues, and \\n(5) Incremental contribution. \\n\\nBecause reviewer EvMB was not heavily involved in the rebuttal discussion, the AC carefully reviewed all feedback and the rebuttal. The AC agrees with reviewer EvMB's concerns and recognizes the value of all comments. However, after evaluating the rebuttal, the AC believes that the authors have addressed these concerns sufficiently. Therefore, the AC recommends acceptance of this paper, with the expectation that the authors will revise it according to the reviewers\\u2019 comments.\", \"additional_comments_on_reviewer_discussion\": \"The AC carefully read the reviews and rebuttals. The main concerns from reviewer EvMB are (1) problematic evaluation (e.g., all data are labeled as vulnerable, no false positive rate), (2) no experiments on existing datasets, (3) comparison with AI-based approaches, (4) presentation problems, and (5) incremental contribution. After reading the rebuttal, the AC believes that most of these concerns relate to presentation issues and have been addressed sufficiently. Because this reviewer did not participate in the discussion, the AC places a lower weight on this score.\"}", "{\"summary\": \"The paper claims to be the first work to combine LLMs with static analysis to detect application-level security vulnerabilities via whole-project analysis. They propose a contextual analysis technique with LLMs that reduces false positive alarms and minimizes the triaging effort for developers. Their insight is that encoding the code context and path-sensitive information in the prompt elicits more reliable reasoning from LLMs. 
They curated a benchmarking dataset of real-world JAVA programs that contain 120 security vulnerabilities. They evaluate IRIS on the Java dataset using eight diverse open- and closed-source LLMs. Doing an extensive evaluation, they show that their method IRIS obtains the best results by detecting 55 vulnerabilities, which is 28 (103.7%) more than CodeQL, the existing best-performing static analyzer.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses the long-standing security problem of decreasing the false positive rate of vulnerability detection while maintaining high enough accuracy.\\n2. They have done a thorough evaluation using both open\\u2014and close-source LLMs and different static analysis baselines, such as Facebook Infer, SpotBugs, and Snyk.\\n3. They introduced a new dataset with important characteristics such as containing vulnerability metadata, being compilable, demonstrating real work, and being validated.\", \"weaknesses\": \"1. Tested only on Java codebase.\\n2. They have excluded code written in other languages that may flow to the Java components in the project during runtime or via compilation.\\n3. An evaluation of how many detection misses were due to CodeQL's fault and how many were due to LLM's incorrect specification or filtering fault.\", \"questions\": \"1. They hypothesize that since LLMs are pre-trained on internet-scale data, they know about the behavior of widely used libraries and their APIs. So, can any new package or Java module not processed by the LLMs cause detection or specification generation issues?\\n2. I want more details on the specification tuples, such as what are all the possible values for the type of node to match in G and what is the upper bound for N from the N-tuple of F?\\n3. The authors mentioned including \\u00b15 lines surrounding the exact source and sink location and the enclosing function and class. 
How did they come up with the \\\"5\\\" value? Can it be arbitrarily large, bounded by the LLM's token limitation?\\n4. I would like to know if, in the event that CodeQL does not detect a vulnerability even with correctly labeled specifications, will it be marked as negative? Can you give me any statistics on how many such instances there are? If I understand correctly, CodeQL is still a critical part of the vulnerability detection pipeline. LLM just ensures that the data fed into the static analyzer is of quality, and once detection is made, LLM helps weed out false positives?\\n5. Has it ever happened that LLM made a mistake and marked a positive case as negative even after the positive detection of a vulnerable path by CodeQL? I am trying to understand how accurate the LLM-based filtering method is and how drastic the change in explanation for detection will be based on the different specifications provided.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"> **Q1: \\u201cWhat are the causes of the undetected vulnerabilities (among the 120)?\\u201d**\", \"We thank the reviewer for the question; we will include the following error analysis for false negatives in our paper.\", \"_Vulnerability cannot be modeled by simple taint dataflows_: for instance, the vulnerability CVE-2020-11977 (CWE-94) is manifested by an unexpected `exit(1)` call, with no direct taint dataflow going into it. In fact, the taint dataflow goes into the condition of an \\u201cif\\u201d statement surrounding the exit. In this case, the model of sink API cannot capture the vulnerability.\", \"_Missing data-flow edge due to side-effects_: for instance, taint information is written to a temporary file, and is later read from the same file, subsequently causing a vulnerability. 
However, the dataflow edge is carried through side-effects, which is not captured by CodeQL.\", \"_Missing data-flow edge due to unspecified library usage_: The vulnerability can only be manifested through a concrete usage of the library; but within the library itself there is no possible dataflow to connect the source and the sink.\", \"In general, we view the above as general limitations for the static analysis. We can hopefully resolve them with query synthesis, and we leave these for future work. In terms of LLM induced false negatives, here are the two main failure modes:\", \"_Missing taint-propagator labels from LLM_: this would cause missing data-flow edges stopping source to flow to sink.\", \"_Missing source/sink specifications from LLM_: if there is no relevant source/sink specification then the static analysis tool would not have the anchor for analysis.\", \"> **Q2: Discussion on \\u201cvulnerability is considered detected when the vulnerable path passes through some crucial program points\\u201d**\", \"According to [1], the function-level metric is not enough, and the calling context surrounding the vulnerability matters. We adopt the strategy based on human understanding of vulnerability, where we look at the path between the places where taint is initialized and manifested. This path-based structural coverage criteria is widely adopted in software testing [2]. In the empirical study [3], the authors look at the evaluation metric which uses the fix of the vulnerability as the crucial program point.\", \"[1]: Top Score on the Wrong Exam: On Benchmarking in Machine Learning for Vulnerability Detection, Risse et. al., 2024\", \"[2]: Introduction to software testing, P Ammann, 2008\", \"[3]: How Many of All Bugs Do We Find? A Study of Static Bug Detectors, Habib et. 
al., ASE 2018\", \"> **Q3: \\u201cA more rigorous discussion or evaluation should be conducted beyond random sampling of 50 alarms to argue that the actual false discovery rate should be much lower (line 397-399).\\u201d**\", \"We thank the reviewer for raising this important concern. Our manual analysis primarily focuses on evaluating the potential attack surface and the manifestation of vulnerabilities, which aligns with the key factors used to determine severity according to the Common Vulnerability Scoring System (CVSS) [4]. For example, in our analysis, we label an alarm as a vulnerability if it presents a \\\"local\\\" attack vector, even if the resulting CVSS score may be relatively low.\", \"We agree that a more thorough, CVSS-based evaluation of the alarms reported by IRIS would provide additional rigor. However, it is outside the scope of our current study. We acknowledge this as a valuable direction for future work. In our revised version, we will elaborate on the detailed criteria for our manual analysis.\", \"[4]: Vulnerability Metrics, National Vulnerability Database, https://nvd.nist.gov/vuln-metrics/cvss\"]}", "{\"comment\": \"We appreciate your valuable comments on our paper. We have prepared a rebuttal with an updated manuscript and tried our best to address your concerns. We are willing to answer any unresolved or further questions that you may have regarding our rebuttal if time allows. If our rebuttal has addressed your concerns, we would appreciate it if you would let us know your final thoughts. Additionally, we will be happy to answer any further questions regarding the paper. Thank you for your time and consideration.\"}", "{\"summary\": \"The paper proposed IRIS, a neuro-symbolic approach that leverages LLMs to infer taint specifications and perform contextual analysis for security vulnerability detection in combination with static analysis tools like CodeQL. 
The evaluation comes with a newly curated dataset CWE-Bench-Java, comprising 120 manually validated security vulnerabilities in real-world Java projects. The experiment result shows that IRIS with GPT-4 detects 28 more vulnerabilities than CodeQL alone.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Originality: The paper proposed a new method that combined LLM with static analysis tool CodeQL to infer taint specifications and perform contextual analysis.\\n2. Quality: The experiment shows huge improvement on bug detection effectiveness\\n3. Clarity: The paper is well-written and implementation details like prompts and QL scripts are provided in appendix.\\n4. Significance: The paper addressed challenges of static taint analysis like false negatives due to missing taint specifications, contributing to the field in the long term.\", \"weaknesses\": \"1. The paper doesn't address potential data leakage and memorization by LLMs, since they are asked to infer sources/taints purely based on method name and signature instead of implementation. The method may work well for projects where the APIs (sources and taints) are already publicly known, but not for private projects or less well-known projects.\\n2. The comparison with baseline CodeQL may not be fair enough as the LLM-based method will always have advantages on bug detection numbers due to more source/sinks. It might be more convincing if LLM-based approach can be compared with heuristics shown in prompts like \\\"Taint source APIs usually return strings or custom object types.\\\" to see that LLM's advantage against heuristics like name and signature type matching.\", \"questions\": \"1. 
As mentioned in the limitation section, \\\"IRIS makes numerous calls to LLMs for specification inference and filtering false positives, increasing the potential cost of analysis\\\" I would like to know the actual cost of LLM call for IRIS on CWE-Bench-Java in total and on average.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> **Q1: \\u201cpotential data leakage and memorization by LLMs, since they are asked to infer sources/taints purely based on method name and signature instead of implementation\\u201d**\\n\\nWhile we acknowledge that our method relies on the LLM\\u2019s existing knowledge, we argue that the risk of data leakage is minimal. The labels for source/sink APIs and their specific types are typically not publicly available on the internet. Moreover, the paths summarized by CodeQL are even less familiar to LLMs, making it highly unlikely that these were seen during training. This approach thus represents a genuine challenge for the LLM\\u2019s capability in knowledge transfer and logical reasoning.\\n\\nWe would also like to clarify that while LLMs may not have prior exposure to external APIs from private projects, they can still provide best-effort guesses based on the information available. This is a significant improvement over traditional methods, which rely solely on human labels and would be completely ineffective in such scenarios. That said, we acknowledge that there may be better methodologies for generalization to unseen cases, and we aim to explore these in future work.\\n\\nRegarding the sole reliance on method names and signatures, we emphasize that this was a deliberate design decision made after carefully balancing accuracy and cost. 
While including full function implementations might improve accuracy, the vast number of candidates requiring labeling (Table 11) makes it impractical to query LLMs for specification inference with complete implementation details. Further, analyzing the entire function implementation is itself a challenging task for LLMs.\\n\\n> **Q2: \\u201cThe comparison with baseline CodeQL may not be fair enough as the LLM-based method will always have advantages due to more source/sinks. \\u2026 LLM's advantage against heuristics like name and signature type matching.\\u201d**\\n\\nWe thank the reviewer for raising this insightful question. While it is true that LLMs can propose more source/sink candidates, two critical factors must be considered: (1) the increased number of paths generated by CodeQL, which significantly impacts the cost of subsequent contextual analysis, and (2) the higher likelihood of false positives that may result.\\n\\nTo address this concern, we conducted a small-scale experiment on a relatively small project, zt-zip, where both CodeQL and IRIS+GPT4 successfully detected CVE-2018-1002201. Using the **signature-based heuristics** suggested by the reviewer, the pipeline generated 4,441 alarms (compared to 8 for CodeQL and 15 for IRIS+GPT-4), resulting in a False Discovery Rate (FDR) of 98.2%. In contrast, IRIS+GPT4 achieved an FDR of 63.8%, and CodeQL achieved an FDR of 60% on this project. These results demonstrate that the heuristics-based approach yields extremely low precision.\\n\\nGiven that IRIS+GPT-4 outperforms CodeQL on both the number of vulnerabilities detected and the FDR, we maintain that the comparison between IRIS+GPT-4 and CodeQL is fair and reflective of their respective strengths.\\n\\n> **Q3: \\u201cI would like to know the actual cost of LLM call for IRIS on CWE-Bench-Java in total and on average\\u201d**\\n\\nWe thank the reviewer for the question. 
On average, we observe 388.4 LLM calls per project, though this number varies significantly based on the project size. For example, Perwendel Spark, a project with 10K lines of code, required only 43 LLM calls to detect the CVE present within it.\\n\\nFor the final evaluation of IRIS on CWE-Bench-Java, across the 7 evaluated LLMs (including 2 added during the rebuttal) and the 120 projects in our dataset, we made approximately 320,000 LLM calls. While it is challenging to provide an exact cost breakdown, we hope this information offers valuable perspective on the scale and effort involved in our work.\"}", "{\"comment\": \"We appreciate your valuable comments on our paper. We have prepared a rebuttal and tried our best to address your concerns. We are willing to answer any unresolved or further questions that you may have regarding our rebuttal if time allows. If our rebuttal has addressed your concerns, we would appreciate it if you would let us know your final thoughts. Additionally, we will be happy to answer any further questions regarding the paper. Thank you for your time and consideration.\"}", "{\"comment\": \"Thank you for the authors' clarification. Overall, it is great work, and I particularly like the neuro-symbolic approach of combining an LLM and a static taint analysis tool like CodeQL. It is also great to see that IRIS can uncover previously unknown vulnerabilities.\\n\\nMy main concern is still the high false discovery rate and low F1 score of IRIS, which don't show much improvement over the CodeQL baseline. There is no strong evidence showing that the actual false discovery rate should be much lower. 
This may indicate that manual efforts are needed to triage through many potentially false security alerts.\\n\\nTherefore, I would like to maintain my current score.\"}", "{\"comment\": \"> **Q1: Comparison of CWE-Bench-Java with existing datasets like PrimeVul and SVEN**\\n\\nIn prior datasets such as PrimeVul and SVEN, the input consists of a single function. In contrast, our task involves analyzing an **entire repository**, which averages 300K lines of code per project. Identifying inter-procedural vulnerability paths within such extensive repositories is akin to finding a \\\"needle in a haystack.\\\" Table 5 highlights other key differences between these datasets, underscoring the challenges addressed by CWE-Bench-Java.\\n\\nFrom our experience, function-level vulnerability detection is insufficient for tackling real-world vulnerabilities, which typically span multiple functions rather than being isolated to a single one. This perspective is widely shared within the research community [1, 2]. As Reviewer 3b2y also noted, the ability to perform project-level vulnerability detection is a strength of our work.\\n\\n- [1]: Top Score on the Wrong Exam: On Benchmarking in Machine Learning for Vulnerability Detection, Risse et al. 2024\\n- [2]: Data Quality for Software Vulnerability Datasets, Croft et al. ICSE 2023\\n\\n> **Q2: \\u201cFalse positive rates are not evaluated?\\u201d**\\n\\nWe do estimate the False Discovery Rate (FDR), which is similar to the false positive rate that the reviewer mentioned but provides a more appropriate measure in our context (Section 3.6). Since we are performing whole-repository analysis, there is no need for an explicit mention of \\u201cnegative examples\\u201d in our dataset\\u2014any non-vulnerable paths in the repository naturally serve as negative examples, far outnumbering the vulnerable ones. \\n\\n> **Q3: \\u201cShould you also compare any AI based approaches? 
e.g., what about feeding the entire function into a prompt and see how LLMs perform?\\u201d**\\n\\nOur dataset expects whole-repository analysis, which is fundamentally different from function-level vulnerability detection, as seen in datasets like PrimeVul and SVEN (Table 5). For this reason, we do not use pure neural approaches as baselines, since incorporating the entire repository (which could have up to 7M lines of code as shown in Table 11) as context for a large language model is computationally intractable.\\n\\n> **Q4: \\u201cHow subset of paths are selected when paths are too long for the prompt?\\u201d**\\n\\nWe set a hyperparameter $S$ to control the number of intermediate steps in the prompt. For paths with more than $S$ intermediate steps, we divide the path into $S$ equal segments and select one step from each. This selection prioritizes function calls, as they may indicate sanitizations. If no function call is present, we randomly pick one node in the segment. In our experiments, we observe that setting $S$ to 10 provides a good balance between the cost and accuracy so that the prompt contains enough context and would not be too long. We will add this description to the revised version of our paper.\\n\\n> **Q5: \\u201cTypically in static analysis, we provide a list of taint APIs, it can be done precisely by the domain experts\\u201d**\\n\\nAs discussed in lines 48-53, it is well-known that existing taint API lists are often insufficient for identifying new, real-world vulnerabilities. These labels are typically created retroactively\\u2013after a CVE is discovered, a human expert assigns the corresponding label. This approach introduces significant limitations, as tools relying solely on human-generated labels struggle to keep up with evolving vulnerabilities. Moreover, these labels tend to be rigid and require continuous maintenance to adapt to the dynamic nature of modern software development. 
In Table 8 in our paper, we report the number of unique APIs present in our dataset. Manually labeling thousands of libraries, whose APIs often appear in different contexts, is prohibitive [3].\\n\\n- [3]: Scalable Taint Specification Inference with Big Code, Chibotaru et al. PLDI 2019\\n\\n> **Q6: \\u201conly 5 models are experimented\\u201d**\\n\\nFollowing the reviewer\\u2019s suggestion, we have extended the evaluation with two more popular open-source LLMs, Qwen-2.5-Coder-32B-Instruct and Gemma-2-27B, and we hereby show the high-level performance comparison in the same format as our Table 1. In general, we see that IRIS+LLMs still consistently outperform all traditional baselines. We will incorporate this in the revised version of our paper, and we plan to explore more models in the future.\\n\\n**Table 1 Extended:**\\n\\n| Method | #Detected | Detection Rate (%) | Avg FDR (%) | Avg F1 Score |\\n|------------|-------------|------------|-------|---------------|\\n| CodeQL | 27 | 22.50 | 90.03 | 0.076 |\\n| IRIS + GPT-4 | 55 | 45.83 | 84.82 | 0.177 |\\n| (**NEW**) IRIS + Qwen-2.5-Coder-32B-Instruct | 47 | 39.17 | 92.38 | 0.097 |\\n| (**NEW**) IRIS + Gemma-2-27B | 45 | 37.50 | 91.23 | 0.100 |\"}", "{\"comment\": \"> **Q4: \\u201cCan you explain the design choices in Section 3?\\u201d**\\n\\nWe thank the reviewer for pointing this out. We will add the following descriptions to our paper:\\n\\n**few-shot for external API**: Since the LLM is tasked with labeling source, sink, and taint-propagator APIs, we provide one example for each category, along with a negative example of an API that does not fall into any of these categories. We typically select examples from the Java standard libraries because they are widely used and their labels are readily available.\\n\\n**zero-shot for internal API**: As labels for internal APIs are not available, we rely on the zero-shot capabilities of the language model. 
To mitigate potential performance loss, we include additional information, such as documentation associated with important internal APIs.\\n\\n**5 lines surrounding the source and sink location**: We chose \\u00b15 lines as a balanced approach to provide sufficient context while managing performance and cost. While technically possible to use a larger window, we observed that excessive context can overwhelm the language model, leading to reduced accuracy. Additionally, a larger context increases computational costs significantly, particularly given the large number of candidate APIs and paths that must be queried.\\n\\n**a subset of nodes are selected**: We use a hyperparameter $S$ to control the number of intermediate steps included in the prompt. For paths with more than $S$ intermediate steps, we divide the path into $S$ equal segments and select one step from each. This selection prioritizes function calls, as they may indicate sanitizations. If no function call is present, a node is randomly selected from the segment. In our experiments, we observe that setting $S$ to 10 provides a good balance between the cost and accuracy so that the prompt contains enough context and would not be too long.\\n\\n> **Q5: \\u201cLine 377: \\u2018IRIS's superior performance compared to CodeQL:\\u2019 Should be more careful with the language here, as Table 1 doesn't show its superiority on Avg FDR and F1 metrics.\\u201d**\\n\\nWe thank the reviewer for pointing out this oversight. We will revise the sentence as follows:\\n\\n\\\"The results in Table 1 compare IRIS and CodeQL, highlighting IRIS's superior performance specifically when paired with GPT-4.\\\"\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes using a neuro-symbolic approach, combining LLM and static taint analysis tool CodeQL, for whole-repository level, Java vulnerability detection. 
The authors curate CWE-Bench-Java, a vulnerability dataset of Java projects. Evaluation results show that IRIS is able to detect significantly more vulnerabilities compared with the baselines and is able to uncover previously unknown vulnerabilities.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"IRIS includes an LLM filtering step via contextual analysis, as the use of an LLM for source and sink predictions may incur many spurious alerts. The ablation study shows that this step can greatly improve the Avg F1 score for larger models.\", \"IRIS significantly outperforms prior methods in terms of the number of vulnerabilities detected and is able to uncover previously unknown vulnerabilities.\", \"IRIS focuses on project-level vulnerability detection on CWE-Bench-Java, which is inherently a challenging task as the projects are of large sizes.\"], \"weaknesses\": [\"The false discovery rate of IRIS is very high (~90%) and the F1 score is low (around 0.1). The Avg FDR and Avg F1 scores for IRIS + GPT-3.5 / Llama-3 / DeepSeekCoder are either worse or comparable to the CodeQL baseline.\", \"What are the causes of the undetected vulnerabilities (among the 120)?\", \"A more rigorous discussion or evaluation should be conducted beyond random sampling of 50 alarms to argue that the actual false discovery rate should be much lower (line 397-399).\", \"Models other than GPT-4 are over-approximating the specifications (Figure 7). The paper would benefit more from method designs to improve LLM-inferred specifications to lower the false discovery rate to save manual efforts of triaging through the alerts.\", \"Presentation\", \"Line 377: \\\"IRIS's superior performance compared to CodeQL:\\\" Should be more careful with the language here, as Table 1 doesn't show its superiority on Avg FDR and F1 metrics.\"], \"questions\": \"1. 
Can you add related works where the vulnerability is considered detected when the vulnerable path passes through some crucial program points (line 297-298)?\\n2. Can you explain the design choices in Section 3? It seems to me that these may affect LLM's taints specification inference and predictions of false positives.\\n 1. Few-shot (3-shot) for external APIs and zero-shot for internal APIs: does the number of shots affect the precision of LLM-inferred specifications?\\n 2. $\\\\pm$ 5 lines surrounding the source and sink location (line 261), \\n 3. A subset of nodes (line 263 - 264): how are they selected?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9LZna4ryFH
A Tailored Framework for Aligning Diffusion Models with Human Preference
[ "Jie Ren", "Yuhang Zhang", "Dongrui Liu", "XIAOPENG ZHANG", "Qi Tian" ]
The direct preference optimization (DPO) method has shown success in aligning text-to-image diffusion models with human preference. Previous approaches typically assume a consistent preference label between final generated images and their corresponding noisy samples at intermediate steps, and directly apply DPO to these noisy samples for fine-tuning. However, we identify a significant issue with this consistency assumption, as directly applying DPO to noisy samples from different generation trajectories based on final preference order may disrupt the optimization process. We first demonstrate the issues inherent in previous methods from two perspectives: *gradient direction* and *preference order*, and then propose a **Tailor**ed **P**reference **O**ptimization (TailorPO) framework for aligning diffusion models with human preference, underpinned by some theoretical insights. Our approach directly ranks the preference order of intermediate noisy samples based on their step-wise reward, and effectively resolves the optimization direction issues through a simple yet efficient design. Additionally, to the best of our knowledge, we are the first to consider the distinct structure of diffusion models and leverage the gradient guidance in preference aligning to enhance the optimization effectiveness. Experimental results demonstrate that our method significantly improves the model's ability to generate aesthetically pleasing and human-preferred images.
[ "RLHF", "Diffusion models", "Direct preference optimization" ]
Reject
https://openreview.net/pdf?id=9LZna4ryFH
https://openreview.net/forum?id=9LZna4ryFH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y5tByhKPEH", "vrJdpFoQgr", "s4asxtMU4J", "kJN61FEZl7", "ixXvGSMUkx", "h78PfBAe9G", "gfOfQceIzG", "edK43rNlot", "e7aAASTozn", "d7Zqrx6Eml", "cfXmPCbNFI", "cR60xRDRZb", "bsrls8fKnR", "ZbqzxCPjXf", "ZUwi1iQjEA", "YsJLJT7sHx", "VGhHZqblbE", "TAH6YUz1Fg", "SFDfse4hso", "PxldxKdI4T", "O6vj7TKZfw", "Nc1WM1bgEH", "KGeKXXqBlc", "Jxww4KuRcW", "J2tWOpaPUg", "AxYZTELVCD", "8EfUxLO8Co", "8Bk4FFeuLY", "85mEPZShUH", "4dBA77XgcF", "3UIfn3Bz6d", "3JAO9Gxdan", "1mzeaLwZ3t", "1l2OKmNDT1", "1AZY6PA0kS", "0vx2PH77XO" ], "note_type": [ "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732126824859, 1732267436927, 1732670191688, 1735222979148, 1730420221833, 1732128011332, 1732128814458, 1732559566646, 1733198700169, 1732128426260, 1732128082883, 1732468532309, 1730644239187, 1732783293984, 1732128543872, 1732148050736, 1732128332699, 1730700209117, 1732206153754, 1732128272859, 1732542129999, 1732128741722, 1733189167648, 1732782460883, 1730629598412, 1732128236683, 1732126878269, 1732128666316, 1732525624884, 1732782312813, 1732783052982, 1730436417056, 1732782100318, 1732549279722, 1732509123518, 1737523466034 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_nzop" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_ZmZ1" ], [ 
"ICLR.cc/2025/Conference/Submission1717/Area_Chair_pWm6" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_ZmZ1" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_nzop" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_nzop" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_zqb2" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_zd3f" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_F2pk" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_zd3f" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_zqb2" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_F2pk" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Authors" ], [ "ICLR.cc/2025/Conference/Submission1717/Reviewer_F2pk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer zd3f (part 1)\", \"comment\": \"Thank you for your valuable comments. 
We will try our best to answer all your concerns. Please let us know if you still have further concerns, so that we can further update the response ASAP.\\n\\n**Q1: About user study. \\\"A larger and more diverse user base would provide more robust evidence of the framework's effectiveness.\\\"**\\n\\n**A1:** Thank you for the valuable comments. We have followed your suggestion to *conduct a broader user study* by including more users. Beyond the five users included in our original manuscript, we ask another five users to compare the preference of images generated by different methods. Following the settings in Lines 469-475 and Appendix D, we collect feedback from five more users and report the results obtained from a total of 10 users as follows.\\n\\n| | TailorPO vs. DDPO | TailorPO vs. SPO | TailorPO-G vs. DDPO | TailorPO-G vs. SPO |\\n| ------------------------ | ----------------- | ---------------- | ------------------- | ---------------- |\\n| TailorPO/TailorPO-G win | 59.33 | 54.22 | 59.24 | 53.90 |\\n| Draw | 22.22 | 27.56 | 25.17 | 28.95 |\\n| TailorPO/TailorPO-G lose | 18.45 | 18.22 | 15.59 | 17.15 |\\n\\nThe results show that TailorPO and TailorPO-G better align the model generations with human preference. We have accordingly revised the description and the result of the user study in Figure 7 in the revised manuscript.\\n\\n---\\n\\n**Q2: About comparison with SOTA methods. \\\"While the paper compares TailorPO with other DPO-style methods, it would be strengthened by including comparisons with the current state-of-the-art methods.\\\" \\\"How does TailorPO perform compared to the current state-of-the-art methods ...\\\"**\\n\\n**A2:** Thank you. **First**, this study aims to align a base generative model with human preference via fine-tuning, instead of training a new base model, so we compared our methods with existing fine-tuning methods. 
**Second**, we have actually included both PPO-style method (DDPO) and DPO-style methods (D3PO and SPO) for comparison. These methods are all proposed in 2023 and 2024, and they represent the current state-of-the-art methods in this direction. Table 2 and Figure 3 have shown that our methods achieved higher reward values than these methods. \\n\\n**Third**, we have followed your suggestion to *conduct a new experiment* to compare TailorPO with the state-of-the-art offline method, Diffusion-DPO (Wallace et al., 2024). Diffusion-DPO finetuned SD-v1.5 on the Pick-a-Pic dataset in an offline manner. Therefore, we also finetune SD-v1.5 using TailorPO on prompts in the Pick-a-Pic training set, taking the aesthetic scorer and ImageReward as the reward model, respectively. Then, we evaluate the performance using 500 prompts in the Pick-a-Pic validation set, as done by (Liang et al., 2024). The results of Diffusion-DPO in the following table are from (Liang et al., 2024).\\n\\n| | Aesthetic score | ImageReward |\\n| ------------- | --------------- | ----------- |\\n| Diffusion-DPO | 5.505 | 0.1115 |\\n| TailorPO | 6.050 | 0.3820 |\\n| TailorPO-G | 6.242 | 0.3791 |\\n\\nThe above table shows that our methods achieve higher reward values than Diffusion-DPO in both aesthetic score and ImageReward score. We have added this experiment in Appendix E.1 of the revised manuscript.\"}", "{\"title\": \"Reply to rebuttal\", \"comment\": [\"**Context:**\", \"Regardless of whether it is implemented in DDIM, DDPM, or other schedulers, given a dataset $\\\\mathcal{D}$, a trained diffusion model $\\\\epsilon_\\\\theta$ is trained to fit $\\\\nabla_{x_t}\\\\log p_t(x_t)$ of the dataset under some rescheduling (SDE by Song et al., EDM by Karras et al.).\", \"Theoretical assumption: I assume the base model $\\\\epsilon_\\\\theta$ perfectly implements $\\\\nabla_{x_t}\\\\log p_t(x_t)$.\", \"The evolution of $p_t(x_t)$ can be derived from the Fokker-Planck equation with the forward SDE. 
The form is easier if formulated in EDM, i.e., as a mixture of Gaussians.\", \"I acknowledge the author's effort in the additional formulation of EDM and TD\\\\($\\\\lambda$). However, I was expecting the author to fundamentally formulate within this framework (starting from Equation 1). The current presentation in Appendix E appears to be a redundant add-on. Since I understand the required work is formidable, I will not decrease my rating without including them.\"], \"my_main_concern_remains_an_unsound_technical_flaw\": \"1. I believe that $r_t(c, x_t) \\\\approx r(c, \\\\hat{x}(x_t))$ if you implement $\\\\hat{x}(x_t)$ using $\\\\epsilon_\\\\theta$.\\n2. I'm not convinced that you use $\\\\epsilon_{\\\\theta'}$ to implement $r_t(c, x_t) \\\\approx r(c, \\\\hat{x}(x_t))$. Specifically, \\\"To this end, both $\\\\epsilon_{\\\\theta}$ and $\\\\epsilon_{\\\\theta'}$ in the diffusion model represent an estimation for the term $\\\\nabla_{x_t}\\\\log p_t(x_t)$\\\" (I assume $\\\\nabla_{x_t}p_t(x_t) \\\\to \\\\nabla_{x_t}\\\\log p_t(x_t)$ is a typo). Once $\\\\theta$ is modified to $\\\\theta'$, it does **not** perfectly implement $\\\\nabla_{x_t}\\\\log p_t(x_t)$.\\n3. As explained by my theoretical argument, I've already predicted that this method will be effective **only at the beginning of the training**. Figure 9 further fortifies my confidence in this technical flaw.\"}", "{\"comment\": \"The authors have successfully addressed my concerns. I especially appreciate the new results with Diffusion-DPO. The proposed method clearly outperforms offline methods such as Diffusion-DPO.\"}", "{\"metareview\": \"The paper introduces the Tailored Preference Optimization (TailorPO) framework designed to align diffusion models with human preferences by optimizing for preference at each denoising step. This method tackles a key limitation (i.e., assuming the final generated images and intermediate noisy samples sharing consistent preference labels.) 
in existing direct preference optimization (DPO) techniques by ranking the intermediate samples based on their step-wise reward. The work provides the theoretical justification, and the proposed methods TailorPO and TailorPO-G achieve better results than the previous state-of-the-art. Most reviewers acknowledge the theoretical contributions. They also recognize the novelty and the superior performance of the proposed TailorPO-G. However, reviewer zqb2 raises an academic integrity issue concerning the resemblance of some of the paper's material to SPO. In addition, other reviewers request more experiments on baseline comparisons and further analyses (zd3f, nzop, zqb2, F2pk, ZmZ1), more discussion of the pros and cons of the proposed TailorPO (zd3f), and elaboration on the difference between TailorPO and TailorPO-G (F2pk). During the discussions, the authors provide detailed responses and address most of the concerns of zd3f, nzop, F2pk, and ZmZ1, but not those of zqb2. The paper receives 4 borderline accepts and 1 strong reject, leading to 5.0 on average.\\n\\nHowever, other reviewers also acknowledge that there exists a strong correlation between SPO and the proposed method. Although the authors did add some paragraphs to describe the difference from SPO, I agree with reviewer zqb2 that the claims of the revised paper about its contributions in the abstract, introduction, and other parts of the paper remain a cause of confusion and need to be further revised to acknowledge SPO more clearly and properly at the right places. For example, in the abstract Ln 23 to 24, \\\"we are the first to consider the distinct structure of diffusion models and leverage the gradient guidance in preference aligning to enhance the optimization effectiveness.\\\" The first half may cause some confusion and should be revised by adding more context to better reflect the contributions of the paper while properly acknowledging SPO. 
Similarly, the authors should also properly polish the other parts of the paper to more specifically reflect this, such as Ln 98 in the introduction, etc. Additionally, the claims of contributions should focus on the theoretical part and TailorPO-G. \\n\\nConsidering the current status of the manuscript, we do not think it can be accepted to ICLR'2025. However, we do encourage the authors to revise the manuscript thoroughly and address the aforementioned concerns, such as describing the correlation between Fig 1 and Fig 3 of SPO, and resubmit the paper to a future venue.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion period, the authors provided detailed responses to address most of the concerns raised by reviewers zd3f, nzop, F2pk, and ZmZ1. Their responses included further explanations, additional experimental results, and in-depth analyses of the proposed TailorPO and TailorPO-G methods. As a result, these reviewers increased their ratings from 5 to 6. Although the authors added new paragraphs and sentences elaborating on the differences between their method and SPO, some concerns raised by reviewer zqb2 remain. I agree with zqb2 that the paper\\u2019s claims regarding its contributions\\u2014particularly in the abstract, introduction, and other sections\\u2014could lead to confusion. These parts should be revised to acknowledge SPO clearly and properly before the paper gets published.\"}", "{\"summary\": \"This paper identifies a key weakness in the formulation of existing frameworks that align diffusion models with human preference. The authors noted that if the winning and losing samples are in a linear subspace, it is possible for the gradient updates to take the wrong direction. To address this issue, the authors propose a novel tailored framework which ensures the correct update. 
Experiments show that the proposed modification improves the performance of alignment algorithms on a variety of reward functions.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The presentation is clear and easy to follow. The problem is well motivated, and concrete derivations are provided.\\n2. The authors conducted extensive experiments on a wide range of prompts and reward targets to highlight the effectiveness of the method.\\n3. The authors provided user study results, which are helpful to contextualize the implications of qualitative results.\", \"weaknesses\": \"1. The details of the user study are not disclosed and the results are not carefully analyzed. While the authors provide some overview in Appendix B, they did not disclose how many user responses were collected from the 225 images generated. This is important because it determines the standard error and confidence interval of the user study, which is crucial to judge the significance of the results. The authors also did not disclose the instructions provided to the users, or whether proper mitigations were applied to reduce user biases (e.g. randomizing the image order of different models).\\n\\n2. While the authors showed that the proposed method works well in comparison to other online methods, they fail to compare against state-of-the-art offline methods such as Diffusion-DPO. Online methods are costly in that they require ad-hoc sampling from the diffusion model. Such complexity should be justified. In particular, Diffusion-DPO also uses the Pick-a-Pic dataset to align for human preference.\", \"questions\": \"See weaknesses\", \"addition_question_that_did_not_affect_the_decision\": \"1. What is the performance on in-distribution prompts? In particular, Table 3 shows that TailorPO-G loses to TailorPO on HPSv2, which is surprising because one would expect that with direct gradient guidance from the reward model, TailorPO-G should perform better. 
Is this because TailorPO-G overfits to the training prompts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer nzop (part 1)\", \"comment\": \"Thank you for your insightful comments. We will try our best to answer all your concerns. Please let us know if you still have further concerns, so that we can further update the response ASAP.\\n\\n**Q1: About the estimation of $r_t$ during training. \\\"whether $r_t$ can still be accurately estimated using either $\\\\epsilon_\\\\theta$ or $\\\\epsilon_{\\\\theta'}$ after the model parameter is updated from $\\\\theta$ to $\\\\theta'$... whether the optimization method remains effective after the parameter update, where $\\\\theta'$ is significantly different from $\\\\theta$ ...\\\"**\\n\\n**A1:** This is a good question. During training, $r_t$ can still be estimated by Eq. (12). This is because the approximation of $r_t(c, x_t) \\\\triangleq \\\\mathbb{E}[r(c,x_0)|c, x_t] \\\\approx r(c, \\\\hat{x}_0(x_t))$ is derived based on (1) the proof of $\\\\mathbb{E}[x_0|c,x_t]=\\\\hat{x}_0(x_t)$ and (2) the estimation of $\\\\mathbb{E}[r(c,x_0)|c, x_t]\\\\approx r(c, \\\\mathbb{E}[x_0|c,x_t])$ in Proposition 1. Both proofs are not limited to a specific parameter $\\\\theta$ and can be extended to $\\\\theta'$ after training. \\n\\nFirst, the proof of $\\\\mathbb{E}[x\\\\_0|c,x\\\\_t]=\\\\hat{x}\\\\_0(x\\\\_t)$ is based on Tweedie\\u2019s formula, and the formula itself has no dependence on the model parameters. Specifically, Chung et al., (2023) provided the following proof in Appendix A of their paper. 
Given a noisy latent representation $x_t$, the conditional probability of $x_0$ can be written as $p(x\\\\_0|x\\\\_t)=p_0(x_t)\\\\exp(x_0^TT(x_t)-\\\\varphi(x_0))$, where $p\\\\_0(x\\\\_t)=\\\\frac{1}{(2\\\\pi (1-\\\\bar{\\\\alpha}\\\\_t))^{d/2}} \\\\exp(-\\\\frac{\\\\Vert x\\\\_t \\\\Vert^2}{2(1-\\\\bar{\\\\alpha}\\\\_t)}), T(x\\\\_t)=\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}\\\\_t}}{1-\\\\bar{\\\\alpha}\\\\_t}x\\\\_t$, and $\\\\varphi(x\\\\_0)=\\\\frac{\\\\bar{\\\\alpha}\\\\_t \\\\Vert x\\\\_0 \\\\Vert ^2}{2(1-\\\\bar{\\\\alpha}\\\\_t)}$.\\nAccording to this equation and Tweedie\\u2019s formula, we have $\\\\mathbb{E}[x\\\\_0|c,x\\\\_t]=\\\\frac{1}{\\\\sqrt{\\\\bar{\\\\alpha}\\\\_t}}(x\\\\_t+(1-\\\\bar{\\\\alpha}\\\\_t) \\\\nabla\\\\_{x\\\\_t} \\\\log p\\\\_t (x\\\\_t))$.\\nTo this end, both $\\\\epsilon_{\\\\theta}$ and $\\\\epsilon_{\\\\theta'}$ in the diffusion model represent an estimation for the term $\\\\nabla_{x_t}p_t(x_t)$. Therefore, we can still estimate $\\\\mathbb{E}[x_0|c,x_t]$ in the current model.\\n\\nSecond, the estimation of $\\\\mathbb{E}[r(c,x_0)|c, x_t]\\\\approx r(c, \\\\mathbb{E}[x_0|c,x_t])$ in Proposition 1 is proven based on the Jensen gap upper bound [1] given the reward function $r(\\\\cdot)$, and it is agnostic to the model parameter of the diffusion model.\\n\\n[1] Gao et al., Bounds on the Jensen Gap, and Implications for Mean-Concentrated Distributions. arXiv:1712.05267.\\n\\n---\\n\\n**Q2: About best quality. \\\"... whether it can practically achieve this score. The authors should demonstrate how their method can achieve the best quality of DDPO/D3PO on ANY of the rewards.\\\"**\\n\\n**A2:** Thank you for the helpful suggestion. 
First, Figure 4 has shown that given the JPEG compressibility as the reward, our method can achieve the best quality of DDPO (>-10) within 4k paired samples, while DDPO needs more than 20k paired samples (according to Figure 4 of DDPO (Black et al., 2024)).\\n\\nSecond, we have followed your suggestion to *conduct a new experiment*, where we take the aesthetic scorer as the reward model to finetune SD-v1.5 using DDPO, D3PO, and TailorPO on 40k paired samples. We report the change of the reward score during the training process in Figure 9 of the revised manuscript (the fine-tuning of D3PO is still in progress and we will update the result as soon as possible). We observe three phenomena from this figure. \\n\\n(1) TailorPO increases the aesthetic score most effectively with less than 20k paired samples. This means that we can use fewer samples than other methods to achieve good performance.\\n\\n(2) Although DDPO reaches the highest aesthetic score at 40k samples, we observe the severe reward hacking problem with the generated images. We provide some examples in Figure 9. All these images are unnatural with the same color, same style, and similar background (yellow leaves). Therefore, instead of fine-tuning the diffusion model with too many samples to achieve an extremely high reward score, we would suggest controlling the number of samples to strike a balance between good image quality and a high reward score.\\n\\n(3) D3PO is less effective than both DDPO and TailorPO, and this conclusion is consistent with Figure 3 of D3PO's original paper. This phenomenon also supports our discovery of its inherent issues about preference order and gradient direction.\"}", "{\"title\": \"Response to Reviewer ZmZ1\", \"comment\": \"Thank you for your positive feedback and valuable comments. We will try our best to answer all your concerns. 
Please let us know if you still have further concerns, so that we can further update the response ASAP.\\n\\n**Q1: About details of user study. \\\"The details of user study are not disclosed and results are not carefully analyzed.\\\"**\\n\\n**A1:** Thank you. We would like to provide more details about the user study, and we have followed your suggestion to add these details in Appendix D of the revised manuscript.\\n\\n- Participants: We collect feedback from five annotators in the original manuscript, and we include five more annotators during the rebuttal phase. All annotators acknowledge that their efforts will be used to evaluate the performance of different methods in this paper.\\n- Task instruction: The human annotators are given several triplets of ($c, x^{(a)}_1, x^{(b)}_0$), where $c$ is the text prompt and $x^{(a)}_1$ and $x^{(b)}_0$ represent the images generated by the models finetuned by method $a$ and method $b$, respectively. Then, the annotator is asked to compare the two images from the perspective of alignment, aesthetics, and visual pleasantness. If both images in a pair look very similar or are both unappealing, then the annotator should label \\u201cdraw\\u201d for them. Otherwise, they label the \\\"win\\\" and \\\"lose\\\" tags for each image. In this way, for each pair of compared methods, we have 225 triplets of ($c, x^{(a)}_1, x^{(b)}_0$) and each annotator labels 225 \\\"win/lose\\\" or \\\"draw\\\" tags.\\n- Mitigation: In order to avoid user bias, we hide the sources of $x^{(a)}_1$ and $x^{(b)}_0$ and randomize their order for annotators.\\n\\n---\\n\\n**Q2: About comparison with Diffusion-DPO. \\\"fails to compare against state-of-the-art offline methods such as Diffusion-DPO.\\\"**\\n\\n**A2:** Thank you. We have followed your suggestion to *conduct a new experiment* to compare TailorPO with Diffusion-DPO (Wallace et al., 2024). Diffusion-DPO finetuned SD-v1.5 on the Pick-a-Pic dataset in an offline manner. 
Therefore, we also finetune SD-v1.5 using TailorPO on prompts in the Pick-a-Pic training set and evaluate the performance using prompts in the Pick-a-Pic validation set. We use the aesthetic scorer and ImageReward as the reward models, respectively.\\n\\n| | Aesthetic score | ImageReward |\\n| ------------- | --------------- | ----------- |\\n| Diffusion-DPO | 5.505 | 0.1115 |\\n| TailorPO | 6.050 | 0.3820 |\\n| TailorPO-G | 6.242 | 0.3791 |\\n\\nThe results of Diffusion-DPO in the above table are from (Liang et al., 2024). The table shows that our methods achieve higher reward values than Diffusion-DPO in both the aesthetic score and the ImageReward score.\\n\\n---\\n\\n**Q3: About performance on in-distribution prompts. \\\"What are the performance on in-distribution prompts? Table 3 shows that TailorPO-G loses to TailorPO on HPSv2 ... Is this because TailorPO-G overfits to the training prompts?\\\"**\\n\\n**A3:** I would like to confirm whether the \\\"in-distribution prompts\\\" refer to prompts used in training. If so, then Table 2 reports the performance on in-distribution prompts. Furthermore, we have *conducted new experiments* to finetune and evaluate SD-v1.5 on complex prompts in the Pick-a-Pic dataset. In this case, prompts used for evaluation are also in-distribution prompts.\\n\\n| | Aesthetic score | ImageReward |\\n| ---------- | --------------- | ----------- |\\n| SD-v1.5 | 5.69 | -0.04 |\\n| TailorPO | 6.05 | 0.38 |\\n| TailorPO-G | 6.24 | 0.38 |\\n\\nOn the other hand, in Table 3, the model is finetuned on 45 simple prompts of animals and tested on 500 complex prompts in the Pick-a-Pic dataset. In this case, the testing prompts contain many descriptions of objects, scenes, light, and style, and these descriptions are unseen in finetuning. Therefore, TailorPO-G may only strengthen the ability of the model to generate high-quality animal-related objects but cannot cover all these complex scenes. 
Nevertheless, both TailorPO and TailorPO-G have outperformed previous methods in most cases.\"}", "{\"title\": \"Reply\", \"comment\": \"## Reply to the Soundness\\n\\n### On Empirical Results\\nGiven the importance of empirical results in this domain, I can overlook theoretical unsoundness if supported by **substantial** experimental evidence. \\nThe goal of the arguments is to refute mine by providing empirical results rather than characterizing $\\\\nabla_x \\\\log p_t(x_t)$ after training $\\\\theta$. Writing a consistent story with theoretical and empirical results would be better.\", \"i_still_question_the_empirical_results\": \"1. Error bars are missing from both tables, as commented on November 25, 2024, at 01:15.\\n2. What is the maximum value of $t$? The author should include this value in the table.\\n\\n### On Theoretical Results\\nHaving fixed condition $c$, let $\\\\mathcal{D}'$ denote the image distribution generated by $x_0 | c, x_t, \\\\theta'$, obtained from your trained model $\\\\theta'$. Let $\\\\epsilon_\\\\phi$ be the unique diffusion model that perfectly fits $\\\\mathcal{D}'$. Your argument introduces the hypothesis (H) that $\\\\epsilon_\\\\phi \\\\equiv \\\\epsilon_{\\\\theta'}$, with the rest following from Tweedie's formula. We should question the validity of H.\\n\\nMy intuition, supported by Figure 9, suggests that H is **false**, as $\\\\epsilon_\\\\phi$ is constrained by the Fokker-Planck equation, while $\\\\epsilon_{\\\\theta'}$ is not. Therefore, I cannot accept this method on a theoretical basis.\\n\\nFor better understanding, I'd like to ask if you can briefly explain Tweedie's formula **in the context of the paper**, including the specific conditions under which the formula applies.\\n\\n## Other Concerns\\n\\nI raised my rating to 6 but lowered my confidence to 2; here's why:\\n\\nHaving read Reviewer zqb2's comments and the paper SPO, I cannot ignore the similarities between the components. 
Given SPO's contributions, I perceive the remaining contributions on the table to be:\\n1. Theoretical contributions\\n2. TailorPO-G\\n\\nI have already expressed my concerns about the theoretical contributions. If I were to assign a score of 6 (marginally above the acceptance threshold), it would be solely due to TailorPO-G. However, considering the contributions of TailorPO-G, **it still falls short of my expectations for ICLR papers**. This score is **conditional** on the resolution of the integrity issue raised by zqb2.\"}", "{\"comment\": \"Thank you for taking the time to review our response and raising your score. We sincerely appreciate your insightful suggestions, and we are glad to hear your previous concerns are addressed.\\n\\nRegarding the concern for the similarity between our work and SPO, we would like to clarify our similarities and differences. The similarity between our study and SPO only lies in that both of us evaluate the reward of each denoising step given the same input. To this end, we have cited, discussed, and empirically compared with SPO in our paper. Beyond this similarity, we would like to further emphasize the distinct contribution of our paper, which lies in the following perspectives.\\n\\nFirst, we discover distinct **theoretical findings** to support the design of our training framework, while SPO is mainly based on empirical intuitions.\\n\\n- We formulate the reward of each denoising step as the action value function in MDP, and **derive its theoretical formulation** $\\\\mathbb{E}[r(c, x_0)|c, x_t]$ (Section 3.2).\\n- We **discover and theoretically prove the gradient issue** caused by previous methods for the first time (Section 3.2). 
Such analysis inspires us to sample from the same $x_t$, and we theoretically prove that this simple operation could address the gradient issue and align the optimization direction with preferences (Section 3.3).\\n\\nSecond, we have the following **technical contributions**, which significantly differ from SPO.\\n\\n- We **directly evaluate** the preference quality of noisy samples based on the estimation for the action value function (Section 3.3), instead of training a new reward model based on uncertified assumptions.\\n- We **incorporate the gradient guidance** of reward models to enlarge the gap between paired samples to boost the aligning effectiveness for the first time. We also theoretically prove that this guidance pushes the model optimization towards high reward values from the perspective of gradient (Section 3.4 and Appendix B).\\n\\nFinally, experimental results demonstrate that TailorPO achieved **better performance and generalization ability than previous methods** including SPO (Section 4.1 and Section 4.2). This is because the estimation in Eq. (12) for the step-wise reward provides us with a more accurate and reliable preference label. Furthermore, TailorPO-G further improved the aligning effectiveness.\\n\\nThank you again for your reply and we hope this could answer your further concerns.\"}", "{\"title\": \"Response to Reviewer F2pk (part 1)\", \"comment\": \"Thank you for your valuable comments. We will try our best to answer all your concerns. Please let us know if you still have further concerns, so that we can further update the response ASAP.\\n\\n**Q1: About ablation study. \\\"ablation study on the contribution of each component in TailorPO to its effectiveness.\\\"**\\n\\n**A1:** Thank you. We have followed your suggestion to *conduct a new experiment* on the contribution of each component in TailorPO and TailorPO-G. 
There are three key components: (1) step-level preference ranking, (2) the same input condition at each step, and (3) gradient guidance of reward models. Therefore, we fine-tune SD-v1.5 based on the aesthetic scorer using (1), (1)+(2), (1)+(2)+(3). Here we set the same random seed for a fair comparison, so the results of (1)+(2) and (1)+(2)+(3) are slightly different from Table 2 (where we averaged the results of three runs under different random seeds). The following table shows that all these components improve the aligning effectiveness. We have added this experiment to Appendix F of the revised manuscript.\\n\\n| | Aesthetic scorer | ImageReward |\\n| ------------------------------------------------------------ | ---------------- | ----------- |\\n| SD-v1.5 | 5.79 | 0.65 |\\n| (1) step-level preference ranking | 6.40 | 0.98 |\\n| (1) step-level preference ranking + (2) same input condition at each step | 6.69 | 1.16 |\\n| (1) step-level preference ranking + (2) same input condition at each step + (3) gradient guidance | 6.78 | 1.25 |\\n\\n---\\n\\n**Q2: About generalization on different fine-tuning methods and base models. \\\"verification of generalization on fine-tuning methods, such as LoRA and full fine-tuning ... expanding evaluation to include a broader range of base models would strengthen the results.\\\"**\\n\\n**A2:** (1) Generalization on different fine-tuning methods. In this study, we use LoRA because almost all fine-tuning methods for aligning diffusion models used LoRA, including DPOK, DDPO, D3PO, SPO, and DenseReward. For a fair comparison, we also fine-tuned the model using LoRA. On the other hand, due to the limited resources, we are not able to fully fine-tune a diffusion model in an acceptable period of time. Nevertheless, we are positive about the generalization ability of our method on different fine-tuning methods, given that it has demonstrated effectiveness on LoRA.\\n\\n(2) Generalization on different base models. 
We have followed your suggestion to *conduct a new experiment* on Stable Diffusion-v2.1-base (SD-v2.1-base, https://huggingface.co/stabilityai/stable-diffusion-2-1-base). We fine-tune SD-v2.1 on the set of animal-related prompts by taking the aesthetic scorer as the reward model, and then evaluate the model using the same prompts. After fine-tuning with TailorPO, we improve the aesthetic score of generated images from 5.95 to 6.21. In comparison, DDPO only reaches 6.02.\\n\\n---\\n\\n**Q3: About user study. \\\"Figure 5 indicates an evident over-saturation issue ... Conducting a user study could better substantiate this claim.\\\"**\\n\\n**A3:** Thank you. We have conducted a user study on these generation results as you requested, as stated in Lines 469-473 of the original manuscript (Lines 469-475 of the revised manuscript). Results in Figure 7 show that TailorPO and TailorPO-G receive higher preference than previous methods. Moreover, in the revised manuscript, we extend the user study and collect feedback from a total of ten human annotators. The result in Figure\\u00a07 shows that our method indeed generated human-preferred images.\\n\\nOn the other hand, the over-saturation issue in Figure 5 is caused by the overoptimization of the model towards the preference bias of the reward model. We provide a detailed discussion about this problem in the answer to your Q7. When we used other reward models, Figure 10 in Appendix E.4 of the revised manuscript demonstrates that the over-saturation issue does not appear.\"}", "{\"title\": \"Response to Reviewer nzop (part 2)\", \"comment\": \"**Q3: About novelty and soundness.**\\n\\n**A3:** Thank you for the comment. We would like to discuss the novelty issue here and answer your concerns about soundness in the corresponding questions (Q1-Q2 and Q4-Q7).\\n\\nFirst, this study is not a simple A+B-styled work. 
Although we use a method similar to GAE [1] to introduce the value function for reward evaluation in aligning diffusion models, we are not simply combining these works. Instead, **we have a complete analysis framework tailored for aligning diffusion models**. Specifically, we start by rethinking the existing aligning framework of diffusion models and identifying the mismatch between the trajectory-level preference ranking and step-level optimization. This issue is substantiated by our theoretical analysis of (1) the inaccurate preference order and (2) the disturbed gradient direction. Then, we propose methods to address these issues based on the theoretical analysis. Beyond introducing the value function for reward evaluation to address the inaccurate preference order, we revise the training framework of D3PO to align the gradient direction in optimization, and this is also a major contribution of this study. Moreover, we notice the potential impact of TailorPO on the similarity of paired samples, and design the novel TailorPO-G, tailored for diffusion models, to further improve the effectiveness. Finally, we conduct various experiments to demonstrate the effectiveness and generalization ability of our methods.\\n\\nSecond, this study has a distinctive contribution to the community of generative models, especially the alignment of generative models. We identify potential issues in the existing DPO-styled aligning framework and provide a new framework tailored for the diffusion pipeline. Our framework can also be extended to various scenarios including aligning the generation of videos and 3D objects based on diffusion models.\\n\\n---\\n\\n**Q4: Typos and the descriptions. \\\"The DPO method is NOT first proposed to fine-tune large language models to align with human preferences unless the \\\"preferences\\\" refer to preference datasets.\\\"**\\n\\n**A4:** Thank you for your careful review. 
We have followed your suggestions to correct the typo and clarify the description of DPO in Line 155: \\\"The DPO method is originally proposed to fine-tune large language models to align with human preferences based on paired datasets.\\\"\\n\\n---\\n\\n**Q5: \\\"Can we use a trained policy to generate deterministically like probability flow ODE?\\\"**\\n\\n**A5:** Thank you, but I do not fully understand this question, so I would like to first confirm it with you. You mentioned that \\\"DDPO/D3PO-related methods cannot retain the Fokker-Planck equation of the original model,\\\" and I am not sure what component of the FK equation is broken by DDPO/D3PO-related methods. Furthermore, in my understanding, although DDPO and D3PO change the model parameter, the model is still learned to approximate the score function $\\\\nabla_x \\\\log p_t(x)$ in the probability flow ODE, constrained by a KL regularization term. Therefore, I am confused about which operation prevents the model from deterministically generating. Could you kindly explain more about this concern? I hope we can discuss more on this question together during the rebuttal period, so as to help us understand and address your concern.\"}", "{\"comment\": \"Thank you for your feedback. We sincerely appreciate your suggestions on rigorous formulations. We have modified the formulation of the value function with $TD(\\\\lambda)$, and added the formulation of EDM in the Appendix. We agree that formulation within the EDM will significantly improve the applicability of our method, and we plan to use this formulation in our future explorations of diffusion models. 
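As a generic reference point for the revised value-function formulation (standard RL notation after Sutton & Barto, not necessarily the exact instantiation in our appendix), the TD(λ) target interpolates n-step returns:

```latex
% Generic TD(lambda) target; shown only as a reference point, the exact
% instantiation in the revised appendix may differ in notation.
G_t^{(n)} = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V(s_{t+n}),
\qquad
G_t^{\lambda} = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} G_t^{(n)}.
% In the denoising MDP with gamma = 1 and reward r(c, x_0) granted only at
% the terminal step, every n-step return (under the true V) has expectation
% E[r(c, x_0) | c, x_t], so the target recovers the step-wise reward r_t
% of Eq. (12) for any lambda.
```

Under the terminal-only reward of the denoising MDP, this interpolation collapses to the step-wise reward, which is why the simpler DDIM-based formulation in the main text remains consistent with the TD(λ) view.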
In this paper, considering that this study focuses on identifying potential flaws in existing works and then proposing a new training framework, we follow previous studies to use a simpler formulation (DDIM) in the main text for better comparison and readability.\\n\\nRegarding the concern about the estimation of the step-wise reward, we have conducted several new experiments to address it. First, we compare the estimated value $r(c,\\hat{x}\\_0(x\\_t))$ with $r\\_t(c,x\\_t)\\triangleq \\mathbb{E}[r(c,x\\_0)|c,x\\_t]$ at different checkpoints to verify the reliability of using $\\theta'$ after training for estimation. For the fine-tuned model $\\epsilon_{\\theta'}$, we sample 100 pairs of $(c,x\\_t)$ at each timestep $t\\in\\\\{12,8,4,1\\\\}$. Given each pair of $(c,x\\_t)$, we sample 100 images $x\\_0$ based on $x\\_t$, query reward values of all $x\\_0$, and then compute $r\\_t(c,x\\_t)=\\mathbb{E}[r(c,x\\_0)|c,x\\_t]$ as the ground truth of the step-wise reward. Then, we compute the estimated value $r(c,\\hat{x}\\_0(x\\_t))$ based on the fine-tuned parameters $\\theta'$. 
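This estimation-error check can be sketched as follows. This is a self-contained toy illustration: `sample_x0`, `reward`, and `x0_hat` are illustrative stand-ins for the remaining denoising steps of $\epsilon_{\theta'}$, the reward model, and the one-step estimate $\hat{x}_0(x_t)$, not our actual pipeline.

```python
import numpy as np

def mean_relative_error(r_true, r_est):
    """Average relative error E[|(r_t - r_hat) / r_t|] between Monte-Carlo
    ground-truth step-wise rewards and their one-step estimates."""
    r_true = np.asarray(r_true, dtype=float)
    r_est = np.asarray(r_est, dtype=float)
    return float(np.mean(np.abs((r_true - r_est) / r_true)))

def step_reward_ground_truth(sample_x0, reward, x_t, n_samples=100, seed=0):
    """Estimate r_t(c, x_t) = E[r(c, x_0) | c, x_t] by sampling n_samples
    final images x_0 conditioned on x_t and averaging their rewards."""
    rng = np.random.default_rng(seed)
    return float(np.mean([reward(sample_x0(x_t, rng)) for _ in range(n_samples)]))

# Toy stand-ins (illustrative only): a "sampler" that slightly perturbs x_t,
# a linear "reward", and an identity map playing the role of x0_hat(x_t).
sample_x0 = lambda x_t, rng: x_t + 0.01 * rng.standard_normal(x_t.shape)
reward = lambda x0: float(np.sum(x0))
x0_hat = lambda x_t: x_t

x_ts = [np.full(4, v) for v in (1.0, 2.0, 3.0)]
r_true = [step_reward_ground_truth(sample_x0, reward, x) for x in x_ts]
r_est = [reward(x0_hat(x)) for x in x_ts]
err = mean_relative_error(r_true, r_est)
```

In the real experiment, `sample_x0` runs the remaining denoising steps of $\epsilon_{\theta'}$ from $x_t$, `reward` queries the aesthetic scorer or JPEG compressibility, and `x0_hat` computes the one-step estimate from the fine-tuned parameters.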
The following tables report the average relative error $\\mathbb{E}[\\vert\\frac{r\\_t(c,x\\_t) - r(c, \\hat{x}\\_0(x\\_t))}{r\\_t(c,x\\_t)}\\vert]$ at different timesteps $t$ in different models (we use the aesthetic scorer and JPEG compressibility as the reward models, respectively).\\n\\n\\ud83d\\udc47Average relative error of aesthetic score.\\n\\n| timestep $t$ | 12 | 8 | 4 | 1 |\\n| -------------------------------------------------- | ------ | ------ | ------ | ------ |\\n| Pre-trained model $\\epsilon_\\theta$ | 0.0545 | 0.0378 | 0.0132 | 0.0047 |\\n| $\\epsilon_{\\theta'}$ after training on 10k samples | 0.0353 | 0.0176 | 0.0106 | 0.0033 |\\n| $\\epsilon_{\\theta'}$ after training on 40k samples | 0.1330 | 0.0283 | 0.0132 | 0.0070 |\\n\\n\\ud83d\\udc47Average relative error of JPEG compressibility.\\n\\n| timestep $t$ | 12 | 8 | 4 | 1 |\\n| -------------------------------------------------- | ------ | ------ | ------ | ------ |\\n| Pre-trained model $\\epsilon_\\theta$ | 0.2263 | 0.1259 | 0.0390 | 0.0070 |\\n| $\\epsilon_{\\theta'}$ after training on 10k samples | 0.2492 | 0.1440 | 0.0425 | 0.0074 |\\n| $\\epsilon_{\\theta'}$ after training on 40k samples | 0.1566 | 0.0341 | 0.0113 | 0.0066 |\\n\\nThese results demonstrate that after fine-tuning, the model $\\epsilon_{\\theta'}$ achieves an error as small as that of the pre-trained model $\\epsilon_\\theta$. Moreover, our DPO-based loss function **does not require an accurate reward value, but only needs the preference order of samples**. Even if there is a small estimation error for the step-wise reward, it does not affect the preference order between paired samples, thus having little effect on training. Therefore, the modified parameter $\\theta'$ can still be utilized to reliably estimate the step-wise reward.\\n\\nSecond, we would like to discuss the reason for using the fine-tuned parameter $\\theta'$ instead of the pre-trained parameter $\\theta$. 
In our scenario, we aim to estimate the value function of $x_t$ in the current model during training, and this requires the expectation of images *generated by the current model* $\\epsilon_{\\theta'}$ given $x_t$, *i.e.,* $\\mathbb{E}[x_0|c,x_t,\\theta']$. In contrast, the pre-trained parameter $\\theta$ yields the expectation of images generated by the pre-trained model, $\\mathbb{E}[x_0|c,x_t,\\theta]$, rather than the current model. In other words, the pre-trained parameter $\\theta$ can only estimate the value function of $x_t$ in the pre-trained model, but cannot be used in models after training. Therefore, we choose to use the fine-tuned parameter $\\theta'$. Considering the proof based on Tweedie's formula in our previous response, this can be seen as using a shifted approximation for $\\nabla_{x_t}\\log p_t(x_t)$ to estimate images $x_0$ in a shifted distribution with high reward values.\\n\\nThird, we are conducting new experiments to investigate the performance of our methods on more reward models and base models, after training on more than 10k samples. This experiment takes more time and we will update the result as soon as possible.\"}", "{\"summary\": \"A. The paradigm of formulating the backward diffusion process in reinforcement learning, as contributed by DDPO, Diffusion-DPO, and D3PO, is well explored.\\n\\nB. In reinforcement learning, reducing fluctuations in advantage estimation by introducing a value function is unanimously critical and well-explored in GAE and its related works. A more accurate estimation of advantages benefits reinforcement learning algorithms by applying policy optimization methods like PPO.\\n\\nThis work, TailorPO and TailorPO-G, is an instance of A + B among all possible formulations of similar ideas. The loss function used to train the diffusion model (policy) is modified by fixing the intermediate noisy sample $x_t$ to reduce fluctuations in estimating the effect of actions. 
Similar to GAE, $r_t$ serves as a value function that helps judge the quality of an action (referred to as the preference order in this paper). Backpropagating through the reward function $r$ helps identify the optimal $x_t$ (best/worst) as a data point for preference training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper claims to be the first to combine A and B, which I regard as a factual novelty under the formulations of D3PO and DDIM.\", \"The paper introduces the backpropagation of the reward function to generate data points for RL training, which may be useful for generating data points for other preference-learning algorithms.\", \"The efficacy of TailorPO(-G) is demonstrated with fewer than 10k samples.\", \"I believe the derivation of $\\\\nabla_{\\\\theta} \\\\mathcal{L}$ is correct in Equation (11). The neat form of Equation (11) is inspiring to me.\", \"Equation (12) provides insight into estimating reward expectations. I believe the property is correct and valuable for other applications if the parameters remain intact.\", \"I acknowledge that the paper is worth publishing, but it falls below the standard expected for ICLR (see below).\"], \"weaknesses\": \"1. In Equation (12), the fact that $r_t$ can be approximated by $\\\\epsilon_\\\\theta$ **during training** is questionable. It is unclear whether $r_t$ can still be accurately estimated using either $\\\\epsilon_\\\\theta$ or $\\\\epsilon_{\\\\theta'}$ after the model parameter is updated from $\\\\theta$ to $\\\\theta'$, as prior work is proven under the initial $\\\\theta$. This raises concerns about whether the optimization method remains effective after the parameter update, where $\\\\theta'$ is significantly different from $\\\\theta$. The absence of a theoretical argument hinders the soundness.\\n2. The aesthetic score using DDPO has reached its best quality (>8) with 40k reward queries. 
Based on point 1., I doubt whether it can practically achieve this score. The authors should demonstrate how their method can achieve the best quality of DDPO/D3PO on ANY of the rewards.\\n3. I assessed this work as an interpolation anchored from A and B; the novelty is, therefore, upper-bounded. Even if they reach the state of the art, predictable if the technique in B is well-implemented, they bring limited new knowledge to the community. Given the paper's novelty, I have set its upper bound for evaluation marginally above the acceptance threshold. Due to unresolved soundness issues, the paper stands slightly below the acceptance threshold.\", \"typos_i_found\": [\"Line 144.\", \"Line 312.\"], \"factual_error\": \"1. Line 154: The DPO method is NOT first proposed to fine-tune large language models to align with human preferences unless the \\\"preferences\\\" refer to preference datasets. In this case, the authors should clarify.\", \"questions\": \"1. DDPO/D3PO-related methods cannot retain the Fokker-Planck equation of the original model, where probability flow ODE relies on this property. Can we use a trained policy to generate deterministically like probability flow ODE? Although challenging, this feature could add value if developed.\\n2. I would consider raising the rating (even above the upper bound) if the paper includes a more thorough formulation. For example, in reinforcement learning, the contribution to the value function is a relatively small extension of GAE (2015); I recommend incorporating at least a TD(\\u03bb) formulation. Additionally, as DDIM, DDPM, and other schedulers are subclasses of EDM, reformulating your framework under EDM could enable greater adaptability to alternative schedulers. Although EDM is not yet widely adopted in this community, such a formulation may enhance the paper\\u2019s applicability.\\n3. 
See Weaknesses\", \"flag_for_ethics_review\": \"['Yes, Responsible research practice (e.g., human subjects, data release)']\", \"details_of_ethics_concerns\": \"While the user study by issuing questionnaires appears to be low-risk and may be exempt from formal IRB review, the inclusion of a brief statement on ethical considerations or participant consent would enhance transparency.\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your reply! We sincerely appreciate your valuable comments that have helped us improve our paper.\"}", "{\"title\": \"Response to Reviewer F2pk (part 2)\", \"comment\": \"**Q4: \\\"The framework may not be data-efficient, as it seems unable to leverage existing dataset including preference pairs of candidates across different conditions.\\\"**\\n\\n**A4:** This is a good question. Although our method cannot directly leverage existing datasets in an offline manner, the online learning of our method has additional advantages, including good performance and good generalization ability.\\n\\nFirst, many studies [1-5] have shown that online strategies of learning algorithms significantly outperform their offline counterparts, while offline strategies face the challenges of OOD samples and gradient issues [4]. The following table also shows that our methods outperform the state-of-the-art offline method, Diffusion-DPO\\u00a0 (Wallace et al., 2024).\\u00a0The results of Diffusion-DPO in the following table are from (Liang et al., 2024).\\n\\n| | Aesthetic score | ImageReward |\\n| ------------- | --------------- | ----------- |\\n| Diffusion-DPO | 5.505 | 0.1115 |\\n| TailorPO | 6.050 | 0.3820 |\\n| TailorPO-G | 6.242 | 0.3791 |\\n\\nSecond, our method has good generalization ability. The online TailorPO is applicable to open-vocabulary scenarios, not limited by prompts in the dataset. 
In addition, Table 3 and Figure 8 have shown that our methods exhibit good generalization ability over different prompts. Besides, TailorPO-G incorporates the gradient guidance of reward models in the framework, which supports the injection of different conditions in training.\\n\\nFinally, if we want to leverage an existing dataset for TailorPO, the most straightforward approach is to first train a reward model on the given dataset and then align the diffusion model towards the reward model. In fact, the training of the reward model is also effective. For example, [6] has found that a simple linear model on the top of CLIP ViT/14 is sufficient to produce satisfying results.\\n\\n[1] Liu et al., Statistical Rejection Sampling Improves Preference Optimization. ICLR 2024.\\n\\n[2] Xu et al., Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study. ICML 2024.\\n\\n[3] Xiong et al., Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-constraint. arXiv: 2312.11456.\\n\\n[4] Feng et al., Towards Analyzing and Understanding the Limitations of DPO: A Theoretical Perspective. arXiv: 2404.04626.\\n\\n[5] Dong et al., RLHF Workflow: From Reward Modeling to Online RLHF. TMLR 2024.\\n\\n[6] Schuhmann et al., LAION-5B: an open large-scale dataset for training next generation image-text models. NeurIPS 2022.\\n\\n---\\n\\n**Q5: \\\"reflect the preference selection procedure in Eq.10\\\"**\\n\\n**A5:** Thank you for the kind suggestion. We did not explicitly include the preference selection procedure in Eq. (10) because Eq. (10) followed the classic formulation of DPO (Eq. (3)) and we thought it was easy to be understood by readers. On the other hand, the suggestion of \\\"reflecting the preference selection procedure in Eq.10\\\" is also insightful, and we rewrite the loss function in Eq. (10) as follows. 
\n\n$\\mathcal{L}(\\theta) = -\\mathbb{E}\\_{(c,x\\_{t},x^{(0)}\\_{t-1}, x^{(1)}\\_{t-1})} \\left[\\log \\sigma \\left((-1)^{\\mathbb{1}(r\\_t(c,x^{(0)}\\_{t-1})<r\\_t(c,x^{(1)}\\_{t-1}))} \\cdot \\left[\\beta\\log \\frac{\\pi\\_\\theta(x^{(0)}\\_{t-1}|x\\_{t}, c)}{\\pi\\_\\text{ref}(x^{(0)}\\_{t-1}|x\\_{t}, c)} - \\beta\\log \\frac{\\pi\\_\\theta(x^{(1)}\\_{t-1}|x\\_{t}, c)}{\\pi\\_\\text{ref}(x^{(1)}\\_{t-1}|x\\_{t}, c)} \\right]\\right)\\right]$\n\nwhere $\\mathbb{1}(\\cdot)$ is the indicator function. The term $(-1)^{\\mathbb{1}(r\\_t(c,x^{(0)}\\_{t-1})<r\\_t(c,x^{(1)}\\_{t-1}))}$ represents the step-level preference ranking procedure. We have added this form of the loss function in Appendix B of the revised manuscript for clarification.\"}", "{\"title\": \"Figure 1(b) is almost identical to Fig. 3 (c) in SPO (Liang et al. 2024)\", \"comment\": \"Hi Authors,\n\nThanks for your response. I would like to point out that Figure 1(b), which is a main contribution of the submission, is almost identical to Fig. 3 (c) in SPO (Liang et al. 2024). \n\nIt is indicated in the submission that \"In contrast, we generate noisy samples from the same input xt and directly rank their preference order for optimization.\" This is exactly the same as SPO. \n\nThis paradigm has been proposed in SPO. Indeed you cited SPO, and you know SPO very well. But you didn't discuss how the pipeline drawn in Fig. 3(c) differs from SPO (in fact there is no difference). To me, this difference has been intentionally ignored.\n\nI still hold strong reject and am skeptical about this paper's academic integrity. \n\nRegards, \nReviewer\"}", "{\"title\": \"Response to Reviewer zqb2\", \"comment\": \"Thank you for your review. Your main concerns lie in the similarity between our method and SPO, as well as the doubts about the experimental results regarding SPO.
We will now provide detailed responses to these questions.\n\nFirst, the similarity between our study and SPO (Liang et al., 2024) lies only in the fact that both of us observe the inconsistency in the preference order between the intermediate-step outputs and final generations, and both of us compare the preference of intermediate-step outputs to address this inconsistency, although we **have different motivations and propose different solutions**. To this end, we have clearly and respectfully cited SPO (Liang et al., 2024) and discussed our differences in Lines 126-130 and Lines 204-208, and we have compared our method with SPO in experiments, as shown in Table 2, Table 3, and Figure 7. For example, we have mentioned that \"SPO (Liang et al., 2024) also pointed out the problematic assumption ...\" \"Liang et al. (2024) demonstrated that ...\" Therefore, we have never deliberately ignored the contribution of SPO. \n\nSecond, beyond the above similarity, there are many differences between our study and SPO.\n\n- **The motivation is different.** Our study is motivated by the **theoretical discovery** of the mismatch between the trajectory-level ranking and step-level optimization. In comparison, SPO observed the inconsistency between intermediate-step outputs and final outputs from **visual demonstrations**. Specifically, we conduct a detailed theoretical analysis and identify the following two issues of the existing training framework: (1) *inaccurate preference order* and (2) *disturbed gradient direction*. This theoretical analysis motivates us to propose a new training framework of DPO tailored for diffusion models.\n- We use **a totally different method to tackle the inaccurate preference order issue**. In SPO, the authors trained a step-wise reward model based on another assumption of the consistency between $x_t$ and images. In comparison, **we do not train a new model for reward evaluation**.
Instead, we formulate the denoising process as an MDP and utilize the value function as the measurement of step-level reward.\n- We propose **a different method and implementation to address the gradient issue**. First, we look back to the original formulation of DPO and ensure the same conditional input accordingly, which is very straightforward based on our theoretical analysis. Second, we notice that this operation potentially causes pairwise samples to be similar, so we introduce the gradient guidance of the reward model in the training framework. This is one of the major contributions of this study, and it significantly improves the performance. In comparison, SPO chose to sample multiple outputs at each step for comparison.\n\nTherefore, we are the first to explicitly derive the theoretical flaws of previous DPO implementations in diffusion models based on distinct characteristics of diffusion models. Moreover, we are the first to leverage the gradient guidance technique of diffusion models in preference alignment to enhance the performance. \n\nThird, we obtain the results of SPO in Table 2 by running the officially released training code in [https://github.com/RockeyCoss/SPO](https://github.com/RockeyCoss/SPO/) using 45 animal-related prompts and evaluating the model on these prompts, following DDPO and D3PO for a fair comparison. The difference in prompts causes the difference in the reported results. We have introduced the detailed experimental settings in Lines 404-414 of the original manuscript and we will emphasize this difference for clarification. \n\nFurthermore, we also conduct a *new experiment* to compare the performance of TailorPO and SPO using the same prompts as SPO (Liang et al., 2024), *i.e.,* 4k prompts in the Pick-a-Pic dataset. We finetune SD-v1.5 using 4k prompts in the Pick-a-Pic training set, and evaluate the performance on 500 prompts in the Pick-a-Pic validation set.
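As a side note, the fine-tune-then-evaluate protocol described above (generate images for a set of held-out prompts, then average each reward model's score) can be sketched generically. The `generate` function and the reward functions below are illustrative stand-ins, not the actual SD-v1.5 pipeline or the real aesthetic-scorer/ImageReward APIs:

```python
def evaluate(generate, reward_fns, prompts):
    """Average each reward model's score over a held-out prompt set."""
    scores = {name: [] for name in reward_fns}
    for prompt in prompts:
        image = generate(prompt)  # in practice: sample from the fine-tuned diffusion model
        for name, fn in reward_fns.items():
            scores[name].append(fn(prompt, image))
    # One mean score per reward model, as reported in the comparison tables.
    return {name: sum(vals) / len(vals) for name, vals in scores.items()}

# Stand-in "model" and "reward models", purely for illustration:
stub_generate = lambda prompt: "image-for:" + prompt
stub_rewards = {"aesthetic": lambda p, im: 6.0, "image_reward": lambda p, im: 0.4}
avg_scores = evaluate(stub_generate, stub_rewards, ["a cat", "a dog"])
```

With the real pipeline, each score would come from the corresponding pre-trained scorer applied to the generated image.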
Results in the following table show that our methods still outperform SPO. \n\n| | Aesthetic score | ImageReward |\n| ---------- | --------------- | ----------- |\n| SPO | 5.887 | 0.1712 |\n| TailorPO | 6.050 | 0.3820 |\n| TailorPO-G | 6.242 | 0.3791 |\n\nBesides the difference in prompts, the implementation of SPO in our study differs slightly from (Liang et al., 2024). Due to limited resources, we finetuned SD-v1.5 with a small batch size of 2 on one V100 GPU for 10k samples, while Liang et al. (2024) finetuned SD-v1.5 with a large batch size of 40 on 4$\\times$ A100 GPUs for 40k samples.\"}", "{\"summary\": \"The paper presents a novel framework, TailorPO, aimed at aligning diffusion models with human preferences by directly optimizing for preference at each denoising step. This approach addresses a significant issue in existing direct preference optimization (DPO) methods, which often assume consistency between preferences for intermediate noisy samples and final generated images. The authors argue convincingly that this assumption can disrupt the optimization process, and their proposed solution is both innovative and timely, given the growing interest in controllable generative models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Originality:\nThe paper introduces TailorPO, a novel framework that addresses a specific limitation in the application of direct preference optimization (DPO) to diffusion models. This represents a creative advancement in the field of generative modeling. The approach of generating noisy samples from the same input at each denoising step is innovative and directly addresses the issue of inconsistent preference ordering in intermediate steps of image generation.\n\n2. Quality:\nThe paper is technically rigorous, providing a detailed theoretical analysis of the issues with existing DPO methods and a comprehensive empirical evaluation of TailorPO.
The experiments are well-designed and demonstrate significant improvements over other methods in generating aesthetically pleasing images that align with human preferences.\\n\\n3. Clarity:\\nThe paper is well-organized, with a clear structure that logically progresses from problem formulation to solution proposal, followed by experimental validation.\\n\\n4. Significance:\\nThe work has high significance as it tackles a critical challenge in aligning generative models with human preferences, which is essential for the practical application of these models. The proposed TailorPO framework has the potential to influence future research in controllable generative modeling.\", \"weaknesses\": \"1. The user study, while valuable, is limited in scope with a small number of participants. A larger and more diverse user base would provide more robust evidence of the framework's effectiveness.\\n\\n2. While the paper compares TailorPO with other DPO-style methods, it would be strengthened by including comparisons with the current state-of-the-art methods.\\n\\n3. The paper could provide a more detailed discussion on the limitations of TailorPO, such as specific types of images or preferences that the model may struggle with, or scenarios where the framework might not be applicable.\", \"questions\": \"1. How does TailorPO perform compared to the current state-of-the-art methods in generative modeling, particularly in terms of image quality and alignment with human preferences? Benchmarking against state-of-the-art methods can provide a clearer picture of TailorPO's performance and its potential advantages.\\n\\n2. Are there specific scenarios or types of images where TailorPO underperforms or fails to align with human preferences? 
If so, could the authors discuss these limitations and potential avenues for improvement?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your feedback. Your concern focuses on the similarity between the pipeline in our Figure 1 and SPO\\u2019s Figure 3(c). In fact, the only similarity is that both of us sample noisy outputs from the same $x_t$ to compare their quality. Nevertheless, we are motivated by a distinct theoretical discovery about the gradient issue in previous methods. Moreover, beyond this, there are many differences between our pipeline and SPO.\\n\\nFirst, our motivation for using the same $x_t$ is different. While SPO aims to make the quality comparison of paired images \\u201creflects the denoising performance of this step alone,\\u201d we focus on the gradient issue in the optimization. Specifically, starting from the loss function of DPO and previous studies (D3PO), we notice their difference in the formulation of conditional inputs of the generative probability. Then, we consider how this setting affects the training. To this end, we derive the gradient of the previous loss function and identify that the gradient direction may be disturbed by different inputs $x_t$. Therefore, we consider the setting of different inputs to be problematic and it is straightforward to follow DPO to use the same $x_t$ for sampling and training. Furthermore, we also prove that using the same $x_t$ yields a correct and stable optimization direction.\\n\\nSecond, beyond Figure 1, Figure 3 of our paper provides a more detailed introduction to our method, which also demonstrates the difference between our method and SPO. There are several major differences. (1) We compare the preference of intermediate-step outputs by directly estimating the step-wise reward, instead of training an additional reward model. 
(2) Given a pair of outputs from the same $x_t$, we consider that the similarity between them may affect the training effectiveness. Therefore, we propose to use the gradient of reward models to guide the generation process. In this way, we enlarge the difference between the two samples and boost the training effectiveness. (3) The output which is generated using the gradient guidance to achieve a high reward is utilized for sampling in the following steps.\"}", "{\"title\": \"Response to Reviewer nzop (part 4)\", \"comment\": [\"**Q7: About user study. \\\"the inclusion of a brief statement on ethical considerations or participant consent would enhance transparency.\\\"**\", \"**A7:** Thank you. We would like to provide more details about the user study, and we have followed your suggestion to add these details and discuss the ethical considerations in Appendix D of the revised manuscript.\", \"Participant: We collect feedback from five annotators in the original manuscript, and we include five more annotators during the rebuttal phase. All annotators acknowledge that their efforts will be used to evaluate the performance of different methods in this paper.\", \"Task instruction: The human annotators are given several triplets of ($c, x^{(a)}_1, x^{(b)}_0$), where $c$ is the text prompt and $x^{(a)}_1$ and $x^{(b)}_0$ represent the image generated by the model finetuned by method $a$ and method $b$, respectively. Then, the annotator is asked to compare two images from the perspective of alignment, aesthetics, and visual pleasantness. If both images in a pair look very similar or are both unappealing, then they should label \\u201cdraw\\u201d for them. Otherwise, they label the image with the \\\"win\\\" and \\\"lose\\\" tags. 
In this way, for each pair of compared methods, we have 225 triplets of ($c, x^{(a)}_1, x^{(b)}_0$) and each annotator labels 225 \"win/lose\" or \"draw\" tags.", "- Mitigation: In order to avoid user bias, we hide the sources of $x^{(a)}_1$ and $x^{(b)}_0$ and present them to annotators in random order."]}", "{\"comment\": \"Thank you for your comprehensive revision. I have adjusted my score accordingly. However, as noted in my previous review and by other reviewers, a dedicated section discussing the differences would further enhance the clarity of your work. For this reason, I will maintain my confidence score at 3.\"}", "{\"title\": \"Response to Reviewer F2pk (part 4)\", \"comment\": \"**Q8: About the baseline for comparison. \"The baseline for comparison seems relatively weak ... share results using the typical choice of 50 timesteps for DDIM or 25 timesteps for other advanced schedulers\"**\n\n**A8:** Thank you for the suggestion. We have followed your suggestion to *conduct a new experiment* to compare TailorPO with more baselines. We measure the reward values of images generated by the base model using 50 and 100 DDIM timesteps and 50 timesteps for PNDM and DPM++ schedulers, respectively. We also fine-tune and evaluate the model using DDPO and D3PO with 50 timesteps for DDIM, respectively. The following table reports the results of the base model using different settings, as well as the results of our methods using 20 timesteps for DDIM.
Our methods still outperform the strengthened baselines.\\n\\n| | Aesthetic Scorer | ImageReward | HPSv2 | PickScore | Compressibility |\\n| ----------------------------------- | --------------- | ----------- | --------- | --------- | --------------- |\\n| SD-v1.5, 20 timesteps for DDIM | 5.79 | 0.65 | 27.51 | 20.20 | -105.51 |\\n| SD-v1.5, 50 timesteps for DDIM | 5.81 | 0.80 | 27.69 | 20.24 | -108.67 |\\n| SD-v1.5, 100 timesteps for DDIM | 5.79 | 0.83 | 27.73 | 20.18 | -111.46 |\\n| SD-v1.5, 50 timesteps for PNDM | 5.64 | 0.67 | 27.51 | 20.12 | -123.68 |\\n| SD-v1.5, 50 timesteps for DPM++ | 5.64 | 0.70 | 27.57 | 20.10 | -123.71 |\\n| DDPO, 50 timesteps for DDIM | 6.65 | -- | -- | -- | -- |\\n| D3PO, 50 timesteps for DDIM | 6.37 | -- | -- | -- | -- |\\n| TailorPO | 6.66 | 1.20 | **28.37** | 20.34 | **-6.71** |\\n| TailorPO-G | **6.96** | **1.26** | 28.03 | **20.68** | - |\"}", "{\"comment\": \"The authors have addressed my previous concerns. I would like adjust my scores. But based on reviewer zqb2's comments, I still have some concerns for the similarities of the main ideas between this work and SPO.\"}", "{\"title\": \"Response to follow-up comments (part 3)\", \"comment\": \"**3. About other concerns.**\\n\\nWe appreciate your efforts and acknowledge that our work shares certain similarities with SPO. However, we would like to clarify that our research was conducted entirely independently. 
Although we have cited, discussed, and empirically compared with SPO in our paper, we would like to further emphasize the distinct contribution of our paper, which lies in the following perspectives.\\n\\nFirst, we discover distinct **theoretical findings** to support the design of our training framework, while SPO is mainly based on empirical intuitions.\\n\\n- We formulate the reward of each denoising step as the action value function in MDP, and **derive its theoretical formulation** $\\\\mathbb{E}[r(c, x_0)|c, x_t]$ (Section 3.2).\\n- We **discover and theoretically prove the gradient issue** caused by previous methods for the first time (Section 3.2). Such analysis inspires us to sample from the same $x_t$, and we theoretically prove that this simple operation could address the gradient issue and align the optimization direction with preferences (Section 3.3).\\n\\nSecond, we have the following **technical contributions**, which significantly differ from SPO.\\n\\n- We **directly evaluate** the preference quality of noisy samples based on the estimation for the action value function (Section 3.3), instead of training a new reward model based on uncertified assumptions.\\n- We **incorporate the gradient guidance** of reward models to enlarge the gap between paired samples to boost the aligning effectiveness for the first time. We also theoretically prove that this guidance pushes the model optimization towards high reward values from the perspective of gradient (Section 3.4 and Appendix B).\\n\\nFinally, experimental results demonstrate that TailorPO achieved **better performance and generalization ability than previous methods** including SPO (Section 4.1 and Section 4.2). This is because the estimation in Eq. (12) for the step-wise reward provides us with a more accurate and reliable preference label. Furthermore, TailorPO-G further improved the aligning effectiveness.\"}", "{\"summary\": \"DPO is very useful for LLMs. 
Its use in diffusion models is still under exploration. This paper studies how DPO can be used in the context of T2I diffusion models. The important consideration is that existing methods assign the same win/lose pair to all the intermediate steps. This is problematic. This paper thus generated two images at each step and compares the two images to obtain win/lose labels. The authors propose a way to compute such preference at later steps.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper's writing looks fine. The problems are presented in a right way.\", \"weaknesses\": [\"The proposed framework in Fig. 1(b) is almost the same as SPO (Liang et al., 2024). It is different in that SPO has a hyperparameter defining the number of images generated at each step, but this is very minor.\", \"Authors claim that this is the first framework that explicitly considers the properties of diffusion models for DPO. This is very wrong, because the same has been done in SPO.\", \"I understand that ArXiv papers may not be required to be cited, but it is important to acknowledge people's contribution in a proper way. It's very inappropriate to propose an identical method while claiming you are the first.\", \"The only difference, if I'm correct, is enhancing the diversity of noisy samples by increasing their reward gap. However, this is not claimed as the major contribution of this work. In fact, if this is the main contribution, this paper is not as problematic as now. The only critique would be incremental novelty etc.\", \"The comparison results of SPO look problematic too. There is almost no improvement of SPO over D3PO etc. This is very different from the report from SPO. 
Authors must carefully check the SPO paper and their open-source implementation.\"], \"questions\": \"This paper should properly claim previous works.\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)', 'Yes, Other reasons (please specify below)']\", \"details_of_ethics_concerns\": \"Dear AC, SAC, and PCs,\\n\\nThis paper is problematic because the proposed method is almost identical to SPO. https://arxiv.org/abs/2406.04314\\n\\nAlthough SPO is not published yet, it is not appropriate to say this paper is the first to taylor DPO to diffusion models. The authors use almost the same motivation, problem identification, and pipeline without clarifying that the same has already been fully introduced in SPO. Moreover, the reported results of SPO are not correct. \\n\\nI'm strongly concerned about the academic integrity of this paper. \\n\\nBest regards, \\nReviewer\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer nzop (part 3)\", \"comment\": \"**Q6: \\\"would consider raising the rating (even above the upper bound) if the paper includes a more thorough formulation.\\\"**\\n\\n**A6:** Thank you for the insightful suggestion. We would like to provide a more thorough formulation of the value function and the diffusion framework. We have added the formulation of the value function in Section 3.2 of the revised manuscript. For the formulation of diffusion models based on EDM, due to the page limit and considering the readability of the paper, we add it to Appendix C of the revised manuscript.\\n\\n**Value function.** By formulating the denoising process of the diffusion model as MDP in Eq. (6), we aim to maximize the action value function at time $t$, *i.e.,* $Q(s,a)=\\\\mathbb{E}[G_t|S_t=s,A_t=a]$, where $G_t$ represents the cumulative return at step $t$. 
We define $G_t$ in the general form of $TD(\\lambda)$, $G_t^{\\lambda}=(1-\\lambda)\\sum_{n=1}^{T-t-1}\\lambda^{n-1}G_t^{(n)}+\\lambda^{T-t-1}G_t^{(T-t)}$, where $G_t^{(n)}=\\sum_{i=1}^n \\gamma^{i-1}R_{t+i}+\\gamma^nV(S_{t+n})$ denotes the estimated return at step $t$ based on $n$ subsequent steps. Here, we simplify the analysis to $TD(1)$, which reduces to the Monte Carlo method. In other words, we have $G_t^\\lambda=G_t^{(T-t)}=\\sum_{i=1}^{T-t} \\gamma^{i-1}R_{t+i}+\\gamma^{T-t}V(S_{T})$. In the scenario of diffusion models, there is no intermediate feedback $R_t$ for intermediate steps and we assume $R_t=0$ for $t<T$. Therefore, the cumulative return can be further simplified as $\\gamma^{T-t} V(S_{T})$. By setting $\\gamma=1$ and $V(S_T)=R_T=r(c, x_0)$, which is the reward value of generated images, we have $Q(s,a)=\\mathbb{E}[r(c,x_0)|S_t=(c,x_{T-t}), A_t=x_{T-t-1}]=\\mathbb{E}[r(c,x_0)|c, x_{T-t-1}]$.\n\n**Diffusion framework**. Diffusion models contain a forward process and a reverse denoising process. Given an input $x_0$ sampled from the real distribution $p_\\text{data}$, the forward process can be formulated as follows (EDM, [1]), which is a unified formulation for DDPM, DDIM, and other methods.\n$$\nx_t=s_tx_0+s_t\\sigma_t\\epsilon\n$$\nwhere $x_t$ is the noisy sample at timestep $t$. $s\\_t$ represents a scale schedule coefficient and $\\sigma_t$ represents a noise schedule coefficient. At timestep $t$, we have $p(x\\_t|x\\_0)\\sim\\mathcal{N}(s\\_t x\\_0, s\\_t^2\\sigma\\_t^2 I)$. $s\\_t$ and $\\sigma\\_t$ are usually selected to ensure the final output $x\\_T$ follows a certain Gaussian distribution.\n\nThe reverse process aims to recover the distribution of original inputs $x\\_0$ from a Gaussian noise $x\\_T$.
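Before turning to the reverse process: to make the forward formulation above concrete, one can pick the DDPM-style schedules $s_t=\sqrt{\bar\alpha_t}$ and $\sigma_t=\sqrt{(1-\bar\alpha_t)/\bar\alpha_t}$ (an illustrative instantiation chosen for this sketch, not necessarily the paper's setting); then $s_t\sigma_t=\sqrt{1-\bar\alpha_t}$ and the EDM equation recovers the familiar DDPM forward step:

```python
import math

def edm_forward(x0, s_t, sigma_t, eps):
    # x_t = s_t * x0 + s_t * sigma_t * eps (elementwise),
    # i.e. p(x_t | x0) = N(s_t * x0, s_t^2 * sigma_t^2 * I).
    return [s_t * x + s_t * sigma_t * e for x, e in zip(x0, eps)]

alpha_bar = 0.5                                     # illustrative noise level
s_t = math.sqrt(alpha_bar)                          # scale schedule s_t
sigma_t = math.sqrt((1.0 - alpha_bar) / alpha_bar)  # noise schedule sigma_t

x_t = edm_forward([1.0, -2.0], s_t, sigma_t, [0.1, 0.3])
# Identical to the DDPM step sqrt(alpha_bar)*x0 + sqrt(1 - alpha_bar)*eps.
```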
According to [1], the reverse ODE process is given as follows.\\n$$\\ndx=[\\\\frac{\\\\dot{s}\\\\_t}{s\\\\_t}x-s\\\\_t^2\\\\dot{\\\\sigma}\\\\_t\\\\sigma\\\\_t\\\\nabla\\\\_x\\\\log p(\\\\frac{x}{s\\\\_t};\\\\sigma\\\\_t)]dt\\n$$\\nwhere $\\\\dot{s\\\\_t}$ and $\\\\dot{\\\\sigma}\\\\_t$ denote the time derivative. $\\\\nabla_x\\\\log p(\\\\frac{x}{s_t};\\\\sigma_t)$ is the score function, which is usually approximated by a neural network, denoted by $s_\\\\theta(\\\\cdot)$. Replacing this term in the above equation, we can solve the reverse process. For a set of discrete timesteps, we can obtain a sequence $[x_T, x_{T-1}, \\\\ldots, x_t, \\\\ldots, x_0]$, and our study focuses on the optimization of $s_\\\\theta(\\\\cdot)$ at each timestep to generate $x_0$ with better image quality.\\n\\nSubsequently, the predicted $\\\\hat{x}\\\\_0$ at the step $t$ can be represented $\\\\hat{x}\\\\_0(x\\\\_t)=\\\\frac{1}{s\\\\_t}(x\\\\_t+s\\\\_t^2\\\\sigma\\\\_t^2 s\\\\_{\\\\theta}(x\\\\_t))$. Then, the step-wise reward value of $x\\\\_t$ can be estimated based on $\\\\hat{x}\\\\_0(x\\\\_t)$. Similarly, the conditional score function used in our gradient guidance can be rewritten as $\\\\nabla_x\\\\log p (\\\\frac{x}{s_t}|r_\\\\text{high};\\\\sigma_t)=\\\\nabla_x\\\\log p (\\\\frac{x}{s_t};\\\\sigma_t)+\\\\nabla_x\\\\log p (r_\\\\text{high}|\\\\frac{x}{s_t};\\\\sigma_t)$. The first term is estimated by the neural network $s_\\\\theta(\\\\cdot)$, and the second term can be approximated following Eq.(13) of our paper.\\n\\n[1] Karras et al., Elucidating the Design Space of Diffusion-Based Generative Models. NeurIPS 2022.\"}", "{\"title\": \"Response to Reviewer zd3f (part 2)\", \"comment\": \"**Q3: About limitations. 
\\\"could provide a more detailed discussion on the limitations of TailorPO,\\\" \\\"could the authors discuss these limitations and potential avenues for improvement?\\\"**\\n\\n**A3:** Like other methods based on an explicit pre-trained reward model, including DDPO, D3PO, and SPO, TailorPO has the potential of being prone to reward hacking [1], if we fine-tune the model on very simple prompts for too many iterations. It means that the generative model is overoptimized to improve the score of the reward model but fails to maintain the original output distribution of natural images. We provide some examples in Figure 12 of the revised manuscript to demonstrate this phenomenon. For example, when we take JPEG compressibility as the reward, DDPO, D3PO, and our methods all generate images with a blank background. \\n\\nThe problem of reward hacking is related to the quality of reward models. Given the fact that these pre-trained reward models are usually trained on a finite training set, they cannot *perfectly* fit the human preference for natural and visually pleasing images. Therefore, the optimization of generative models towards these reward models may lead to an unnatural distribution of images.\\n\\nIn order to alleviate the reward hacking problem, TailorPO can be further improved from the following perspectives.\\n\\n- Using a better reward model that well captures the distribution of natural and visually pleasing images. A better reward model can avoid guiding model optimization towards unnatural images.\\n- Utilizing the ensemble of multiple reward models to alleviate the bias of a single reward model. While each single reward model has its own preference bias, considering multiple reward models altogether may be able to alleviate the risk of falling into a single model. To this end, [2] has shown that the reward model ensembles can effectively address the reward hacking in RLHF-based fine-tuning of language models. 
Therefore, we are hopeful that reward model ensembles are also effective for diffusion models.\n- Searching for a better setting of the hyperparameter $\\beta$ in the loss function to strike a balance between natural images and high reward scores. In DPO-style methods, the coefficient $\\beta$ controls the deviation from the original generative distribution (the KL regularization). In this way, we can search for a better value of $\\beta$ to avoid the model being fine-tuned far away from the original base model. For example, [3] has provided a method to dynamically adjust the value of $\\beta$.\n\nWe have followed your suggestion to add all these discussions in Appendix G of the revised manuscript.\n\n[1] Skalse et al., Defining and Characterizing Reward Hacking. NeurIPS 2022.\n\n[2] Coste et al., Reward Model Ensembles Help Mitigate Overoptimization. ICLR 2024.\n\n[3] Wu et al., $\\beta$-DPO: Direct Preference Optimization with Dynamic $\\beta$. NeurIPS 2024.\"}", "{\"title\": \"Response to Reviewer F2pk (part 3)\", \"comment\": \"**Q6: \"provide insights into the differences between TailorPO and TailorPO-G\"**\n\n**A6:** The key difference between TailorPO and TailorPO-G is that the gradient guidance better aligns the optimization direction with the reward model. We will elaborate on this by analyzing the gradient in TailorPO and TailorPO-G. In Eq.(11) of the original manuscript, we have shown that the gradient of the TailorPO loss function can be written as follows.\n$$\n\\nabla_\\theta \\mathcal{L}(\\theta) = -\\mathbb{E}\\left[(f_t/{\\sigma^2_{t}})\\cdot\\nabla^T_\\theta\\mu_\\theta(x_{t})(x^w_{t-1} - x^l_{t-1})\\right]\n$$\nFor TailorPO-G, the term $x^w_{t-1}$ is modified by adding the gradient term $\\nabla_{x^w_{t-1}}\\log p(r_{\\text{high}}|x^w_{t-1})$.
Therefore, we can derive its gradient term as follows.\n$$\n\\begin{array}{ll}\n\\nabla\\_\\theta \\mathcal{L}\\_{TailorPO-G}(\\theta) & = -\\mathbb{E}\\left[(f\\_t/{\\sigma^2\\_{t}})\\cdot\\nabla^T\\_\\theta\\mu\\_\\theta(x\\_{t})((x^w\\_{t-1} + \\nabla\\_{x^w\\_{t-1}}\\log p(r\\_\\text{high} | x^w\\_{t-1}))- x^l\\_{t-1})\\right] \\\\\n&= -\\mathbb{E}\\left[(f\\_t/{\\sigma^2\\_{t}})\\cdot\\nabla^T\\_\\theta\\mu\\_\\theta(x\\_{t})(\\underbrace{\\nabla\\_{x^w\\_{t-1}}\\log p(r\\_\\text{high} | x^w\\_{t-1})}\\_{\\text{pushing towards high reward values}} + (x^w\\_{t-1} - x^l\\_{t-1}))\\right]\n\\end{array}\n$$\nThe gradient term pushes the model towards the high-reward regions of the reward model. Therefore, TailorPO-G further improves the effectiveness of TailorPO. Thank you for your insightful comment, and we have added this discussion in Appendix B of the revised manuscript. \n\n---\n\n**Q7: \"I am concerned about the method's performance in real-world applications. Why do TailorPO and TailorPO-G lead to over-saturation issues in Figures 5 and 6?\"**\n\n**A7:** To address your concern, we first evaluate the method's performance on real-world prompts. We design and sample several prompts from [1], and generate images using the model fine-tuned by our methods. Please refer to Figure 11 of the revised manuscript for visual demonstrations. These results show that on real-world prompts, the generated images are natural and have good quality, not exhibiting the over-saturation issue.\n\nSecond, we would like to discuss the over-saturation issue in Figure 5/6. This issue arises because the generative model is overoptimized to improve the score of the reward model on a given training set of prompts, and a high reward score may cause the generation distribution to shift.
We provide some examples in Figure 12 of the revised manuscript to demonstrate this phenomenon. For example, when taking JPEG compressibility as the reward model, DDPO, D3PO, and our methods all generate images with a blank background. In Figure 5 and Figure 6, the reward models are ImageReward and aesthetic scorer, which are trained on human preference rankings and potentially prefer images with bright colors (as shown in Figure 1 of (Xu et al., 2023)). In contrast, if we use other reward models (Figure 10), or use real-world prompts not in the training set (Figure 11), the over-saturation issue does not appear.\\n\\nNevertheless, the over-saturation issue in Figure 5 and Figure 6 is in an acceptable range. These figures are colorful and contain more details, but have no distortion. The user study in Figure 7 also validates that our method generates more preferred images.\\n\\n[1] https://openai.com/index/dall-e-3/\"}", "{\"comment\": \"Thank you very much for your helpful suggestion. We would like to first clarify that our difference is not limited to the \\\"reward function\\\". First, we have distinct motivations based on our theoretical analysis. Second, we propose a different and novel method to evaluate the step-level reward. Third, we are directly inspired by the difference between Eq. (3) and Eq. (5) to set the same input $x_t$ at each step, and we prove this operation could align the optimization direction with preferred samples. Fourth, we introduce the gradient guidance of reward models into the aligning framework to further improve the effectiveness.\\n\\nSecond, besides the discussions about our difference from SPO in Lines 126-130 and Lines 204-209 of the original manuscript, we have followed your suggestion to add more discussions to further explain our differences in the introduction, Section 3.2, 3.3, 4.1, as follows.\\n\\n- In Introduction: \\\"Most close to our work, Liang et al. 
(2024) also noticed the inconsistency of the preference order between intermediate-step outputs and final images, and they proposed to train an additional step-wise reward model to address this issue. In comparison, we are the first to explicitly derive the theoretical flaws of previous DPO implementations in diffusion models, and we propose distinct solutions to address these issues. Experiments also demonstrate that our framework outperforms SPO on various reward models.\\\"\\n\\n- In Section 3.2: \\\"In this section, we have conducted a detailed analysis of the denoising process based on MDP, and the optimization gradient of diffusion models. In this way, we reveal the potential theoretical issues in previous methods beyond visual discoveries of (Liang et al., 2024). To address these issues, we propose distinct solutions in Section 3.3\\\"\\n\\n- In Section 3.3: \\\"Different from (Liang et al., 2024), we aim to address the gradient issue in Section 3.2 and it is straightforward to sample from the same $x_t$ based on our theoretical analysis.\\\" \\\"To this end, Liang et al. (2024) proposed to train a step-wise reward model based on another uncertified assumption, *i.e.*, 'the preference order between pair of images can be kept when adding the same noise.' In comparison, we directly evaluate the preference quality of noisy samples $x_t$ without training a new model. \\\"\\n\\n- In Section 4.1: \\\"Notably, our methods outperform SPO as we directly estimate the step-level reward without training another reward model based on an uncertified assumption, and we incorporate the gradient guidance to further improve the effectiveness.\\\"\\n\\nIn addition, we would like to clarify that the contribution of gradient guidance is actually larger than 0.09. According to results in Table 2 and Figure 4, the gradient guidance improves the aesthetic score of images from 6.66 to 6.96, and it improves the PickScore from 20.34 to 20.68, which is averaged on three runs. 
This improvement is related to the reward model and is affected by randomness in the experiment, so the result in our rebuttal (Table 8) demonstrates a smaller effect.\\n\\nWe hope that our response could adequately address all your concerns, and we sincerely hope you can reconsider the rating accordingly.\"}", "{\"title\": \"Response to follow-up comments (part 2)\", \"comment\": \"**2. About concerns on theoretical results.**\\n\\nThank you for the insightful comment. We would like to first demonstrate the formulation and conditions of the Tweedie's formula. Then, we discuss how to apply Tweedie's formula in diffusion models. We will add these discussions in our paper for better understanding.\\n\\n*Tweedie's formula.* Let $p(y|\\\\eta)$ denote any distribution that belongs to the *exponential family of probability distributions*, which is defined as those distributions whose density can be written as the following form.\\n$$\\np(y|\\\\eta)=p_0(y)\\\\exp(\\\\eta^T T(y)-\\\\varphi(\\\\eta))\\n$$\\nwhere $\\\\eta$ is a canonical parameter of the family, $T(y)$ is a function of $y$, $\\\\varphi(\\\\eta)$ is the cumulant generating function which makes $p(y|\\\\eta)$ integrate to 1, and $p_0(y)$ is the density up to a scale factor when $\\\\eta=0$. Notably, the Gaussian distribution is a typical class of the above exponential family distribution. Then, the posterior mean $\\\\hat\\\\eta=\\\\mathbb{E}[\\\\eta|y]$ should satisfy $(\\\\nabla_y T(y))^T\\\\hat\\\\eta=\\\\nabla_y \\\\log p(y)-\\\\nabla_y \\\\log p_0(y)$. Please refer to [1,2] for more details.\\n\\n*Application in diffusion models.* Given a distribution of natural images $D$, let $x_0$ denote the image in $D$, and $x_t=\\\\sqrt{\\\\bar{\\\\alpha}_t}x_0+\\\\sqrt{1-\\\\bar{\\\\alpha}_t}\\\\epsilon$ denote the corresponding noisy sample. $p_t(x_t)$ denotes the distribution of $x_t$ obtained on images $x_0\\\\sim D$. 
The conditional distribution of $x_t$ given $x_0$ is a Gaussian distribution, belonging to the *exponential family distribution*, and it can be written as follows.\\n$$\\n\\\\begin{array}{ll}\\np(x_t|x_0)&=\\\\frac{1}{(2\\\\pi (1-\\\\bar{\\\\alpha}_t))^{d/2}} \\\\exp(-\\\\frac{\\\\Vert x_t - \\\\sqrt{\\\\bar{\\\\alpha}_t}x_0 \\\\Vert^2}{2(1-\\\\bar{\\\\alpha}_t)})\\\\\\\\ &=p_0(x_t)\\\\exp(x_0^TT(x_t)-\\\\varphi(x_0))\\n\\\\end{array}\\n$$\\nwhere $p_0(x_t)=\\\\frac{1}{(2\\\\pi (1-\\\\bar{\\\\alpha}_t))^{d/2}} \\\\exp(-\\\\frac{\\\\Vert x_t \\\\Vert^2}{2(1-\\\\bar{\\\\alpha}_t)}), T(x_t)=\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}_t}}{1-\\\\bar{\\\\alpha}_t}x_t$, and $\\\\varphi(x_0)=\\\\frac{\\\\bar{\\\\alpha}_t \\\\Vert x_0 \\\\Vert ^2}{2(1-\\\\bar{\\\\alpha}_t)}$. According to Tweedie's formula, we have $\\\\frac{\\\\sqrt{\\\\bar{\\\\alpha}\\\\_t}}{1-\\\\bar{\\\\alpha}\\\\_t}\\\\hat{x}\\\\_0=\\\\nabla\\\\_{x\\\\_t}\\\\log p\\\\_t(x\\\\_t) + \\\\frac{1}{1-\\\\bar{\\\\alpha}\\\\_t}x\\\\_t$. This equation can be rewritten as $\\\\hat{x}\\\\_0=\\\\frac{1}{\\\\sqrt{\\\\bar{\\\\alpha}\\\\_t}}(x\\\\_t+(1-\\\\bar{\\\\alpha}\\\\_t)\\\\nabla\\\\_{x\\\\_t}\\\\log p\\\\_t(x\\\\_t))$, and the pre-trained model $\\\\epsilon\\\\_\\\\theta$ approximates the $\\\\nabla\\\\_{x\\\\_t}\\\\log p\\\\_t(x\\\\_t)$ term.\\n\\nThe aligning of the diffusion model can be considered from a similar perspective. Given another distribution $D'$ of natural images that *have high reward values*, we are actually fitting a new distribution $p'_t(x'_t)$ where $x'_t$ is obtained on images $x'_0\\\\sim D'$. Similar to the above equation, the conditional distribution of $x'_t$ given $x'_0$ is also a Gaussian distribution. Therefore, based on Tweedie's formula, the posterior mean of $x'_0$ can be derived as $\\\\hat{x}'\\\\_0=\\\\frac{1}{\\\\sqrt{\\\\bar{\\\\alpha}\\\\_t}}(x'\\\\_t+(1-\\\\bar{\\\\alpha}\\\\_t)\\\\nabla\\\\_{x'\\\\_t}\\\\log p'\\\\_t(x'\\\\_t))$. 
Here, we assume that the $\\\\nabla\\\\_{x'\\\\_t}\\\\log p'\\\\_t(x'\\\\_t)$ term can be approximated by the model $\\\\epsilon\\\\_{\\\\theta'}$ after training, and the estimation based on $\\\\epsilon\\\\_\\\\theta'$ has been verified by experiments in our last response. We will clarify all these analyses in the paper.\\n\\n\\n[1] Kim and Ye. Noise2Score: Tweedie\\u2019s Approach to Self-Supervised Image Denoising without Clean Images. NeurIPS 2021.\\n\\n[2] Chung et al. Diffusion Posterior Sampling for General Noisy Inverse Problems. ICLR 2023.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWe sincerely appreciate your efforts and insightful comments, which has helped us to improve our paper. We have followed your suggestions to conduct new experiments and to discuss the limitations in our response. We would like to confirm whether our response has addressed all your concerns. Please let us know if you have any further questions, and we will respond as soon as possible.\\n\\nBest regards,\\nAuthors\"}", "{\"summary\": \"This paper proposed TailorPO, a DPO framework tailored for diffusion model based on previous D3PO method. The framework includes three key improvement: 1) Turn the preference ranking on step-level instead of final image level; 2) The preference is only considered on the same condition; 3) Use gradient guidance to increase the difference on the reward between each pair.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and effectively organized. 
The authors provide a thorough analysis of the limitations in existing DPO methods, including challenges with inaccurate preference ordering and gradient direction, and subsequently propose a method that addresses these issues effectively.\", \"The proposed method is straightforward to understand, with a clear and accessible theoretical analysis, particularly in its comparison of formulations with prior methods such as D3PO.\", \"The experiments examine the generalization capability of the proposed method across different prompts and reward models, a crucial aspect for fine-tuning approaches.\"], \"weaknesses\": [\"The paper lacks a crucial ablation study on the contribution of each component in TailorPO to its effectiveness. Specifically, it would be valuable to evaluate the individual impact of (1) step-level preference, (2) preference indication only under the same condition, and (3) gradient guidance. This analysis is essential to substantiate the contribution of the overall framework beyond that of its individual components.\", \"The paper lacks verification of generalization on fine-tuning methods, such as LoRA and full fine-tuning. 
Additionally, the experiments appear to be conducted solely on SD 1.5; expanding evaluation to include a broader range of base models would strengthen the results.\", \"Figure 5 indicates an evident over-saturation issue for both TailorPO and TailorPO-G, which does not align with the caption\\u2019s description of producing \\\"more visually pleasing images.\\\" Conducting a user study could better substantiate this claim.\", \"The framework may not be data-efficient, as it seems unable to leverage existing datasets including preference pairs of candidates across different conditions.\"], \"questions\": [\"It may be beneficial to reflect the preference selection procedure in Eq.10, as this equation is regarded as the formulation of the TailorPO framework.\", \"Could you provide insights into the differences between TailorPO and TailorPO-G?\", \"I am concerned about the method's performance in real-world applications. Why do TailorPO and TailorPO-G lead to over-saturation issues in Figures 5 and 6?\", \"The baseline for comparison seems relatively weak (with timesteps set to 20 for DDIM, which is uncommon in practice). If available, could you share results using the typical choice of 50 timesteps for DDIM or 25 timesteps for other advanced schedulers?\", \"I'm willing to increase my rating if the concerns can be addressed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to follow-up comments (part 1)\", \"comment\": \"Thank you for your kind feedback and engagement in the discussion. Below, we provide answers to all your questions.\\n\\n**1. About concerns on empirical results.**\", \"q\": \"\\\"Error bars are missing from both tables.\\\"\\\"What is the maximum value of $t$?\\\"\", \"a\": \"Thank you. 
We have followed your suggestion to compute and report the standard deviation in the following tables.\\n\\n\\ud83d\\udc47Average relative error of aesthetic score.\\n\\n| timestep $t$ | 12 | 8 | 4 | 1 |\\n| -------------------------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ |\\n| Pre-trained model $\\\\epsilon_\\\\theta$ | 0.0545 $\\\\pm$0.0427 | 0.0378 $\\\\pm$0.0287 | 0.0132 $\\\\pm$0.0089 | 0.0047 $\\\\pm$0.0051 |\\n| $\\\\epsilon_{\\\\theta'}$ after training on 10k samples | 0.0353 $\\\\pm$0.0345 | 0.0176 $\\\\pm$0.0160 | 0.0106 $\\\\pm$0.0080 | 0.0033 $\\\\pm$0.0029 |\\n| $\\\\epsilon_{\\\\theta'}$ after training on 40k samples | 0.1330 $\\\\pm$0.0320 | 0.0283 $\\\\pm$0.0231 | 0.0132 $\\\\pm$0.0084 | 0.0070 $\\\\pm$0.0047 |\\n\\n\\ud83d\\udc47Average relative error of JPEG compressibility\\n\\n| timestep $t$ | 12 | 8 | 4 | 1 |\\n| -------------------------------------------------- | ------------------ | ------------------ | ------------------ | ------------------ |\\n| Pre-trained model $\\\\epsilon_\\\\theta$ | 0.2263 $\\\\pm$0.0524 | 0.1259 $\\\\pm$0.0333 | 0.0390 $\\\\pm$0.0101 | 0.0070 $\\\\pm$0.0039 |\\n| $\\\\epsilon_{\\\\theta'}$ after training on 10k samples | 0.2492 $\\\\pm$0.0390 | 0.1440 $\\\\pm$0.0279 | 0.0425 $\\\\pm$0.0071 | 0.0074 $\\\\pm$0.0016 |\\n| $\\\\epsilon_{\\\\theta'}$ after training on 40k samples | 0.1566 $\\\\pm$0.0925 | 0.0341 $\\\\pm$0.0221 | 0.0113 $\\\\pm$0.0077 | 0.0066 $\\\\pm$0.0016 |\\n\\nThe maximum value of $t$ is the number of denoising steps $T=20$.\\n\\n---\\n\\nBesides the above experiment, we have also conducted new experiments on more reward models and base models to demonstrate that the effectiveness of TailorPO is not limited to early iterations. First, we fine-tune SD-v1.5 using DDPO, D3PO, and TailorPO to improve the JPEG compressibility, respectively. 
Figure 10 (left) shows that all methods converge in later epochs, and TailorPO exhibits the **fastest** learning efficiency and achieves the **highest reward value**. Furthermore, we also fine-tune SD-v2.1-base using DDPO and TailorPO by taking the aesthetic scorer as the reward model (other settings including learning rate are the same as Section 4). Figure 10 (right) shows that TailorPO significantly outperforms DDPO after training on 40k samples, and it is effective throughout the learning process.\\n\\nRegarding the phenomenon in Figure 9, the learning of TailorPO is a bit slower after 30k samples because of the constraint in the implementation of the loss function. Following D3PO (Yang et al., 2024a), we constrain values of $\\\\frac{\\\\pi_\\\\theta(x^w_{t-1}|x_t,c)}{\\\\pi_\\\\text{ref}(x^w_{t-1}|x_t,c)}$ and $\\\\frac{\\\\pi_\\\\theta(x^l_{t-1}|x_t,c)}{\\\\pi_\\\\text{ref}(x^l_{t-1}|x_t,c)}$ to be within the range of $[1-\\\\delta, 1+\\\\delta]$. This constraint helps avoid the model being led too far away from the reference model. During the training process, the probability in the reference model ($\\\\pi_\\\\text{ref}(x^w_{t-1}|x_t,c)$ and $\\\\pi_\\\\text{ref}(x^l_{t-1}|x_t,c)$) keeps decreasing because both $x^w_{t-1}$ and $x^l_{t-1}$ are sampled from the fine-tuned model. Therefore, values of $\\\\frac{\\\\pi_\\\\theta(x^w_{t-1}|x_t,c)}{\\\\pi_\\\\text{ref}(\\\\cdot|x_t,c)}$ and $\\\\frac{\\\\pi_\\\\theta(x^l_{t-1}|x_t,c)}{\\\\pi_\\\\text{ref}(\\\\cdot|x_t,c)}$ increase during the training process. Once their values reach the constraint, they are clipped into the range of $[1-\\\\delta,1+\\\\delta]$. In this case, the optimization on this pair of samples is restricted. Therefore, in the late training process, there could be more samples reaching the constraint and the optimization probably gets slower. We will clarify this setting in our revised manuscript.\"}", "{\"comment\": \"Thank you very much! 
We would consider your suggestion to better clarify the differences in the main text and the appendix.\"}", "{\"comment\": \"Thank you for the detailed discussion and the extensive experimental results. Most of my concerns have been addressed. I also reviewed the feedback provided by other reviewers. Based on the ablation study, it is evident that the contribution preference among the components is *step-awared preference* (0.61 $\\\\uparrow$) > *same conditions* (0.29 $\\\\uparrow$) > *gradient guidance to enlarge difference* (0.09 $\\\\uparrow$).\\n\\nI understand that you chose a different reward function compared to SPO. However, this distinction should be emphasized and explicitly reflected in your main paper, particularly in Section 3.2 which serves as the main motivation for the whole method. Given that the theoretical analysis of the necessity of step-aware optimization is presented as one of your most significant contributions, it would be beneficial to include a separate discussion comparing your approach with SPO. Specifically, you should explain your improvements over SPO based on your analysis (e.g., why 'directly estimating the step-wise reward' is better than 'using an additional step-awared reward model'). This would provide a more comprehensive understanding of the novelty and significance of the proposed framework while acknowledging the contribution from previous work.\\n\\nI strongly recommend adding this discussion in introduction / section 3.2 / appendix to improve the overall clarity and impact of the paper and address the integrity concern from other reviewers.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
9LHr33MQh2
ADDITIVE SEPARABLE GRAPHON MODELS
[ "Xinyuan Fan", "Feiyan Ma", "Chenlei Leng", "Weichi Wu" ]
The graphon function is fundamental to modeling exchangeable graphs, which form the basis for a wide variety of networks. In this paper, we introduce the additive separable model as a parsimonious representation of the graphon, capable of generating a low-rank connection probability matrix for network data. This model effectively addresses the well-known identification challenges associated with graphon functions. We develop an efficient estimation approach that leverages subgraph counts to estimate the low-rank connection matrix and uses interpolation to recover the graphon functions, achieving the minimax optimal estimation rate. We provide the convergence rate of our method, and validate its computational efficiency and estimation accuracy through comprehensive simulation studies.
[ "graphon", "subgraph counts", "low-rank connecting probability matrix", "nonparametric statistics", "network analysis" ]
Reject
https://openreview.net/pdf?id=9LHr33MQh2
https://openreview.net/forum?id=9LHr33MQh2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w3LnpTNVss", "vxLPiKOpoC", "tu5JcrqekP", "rIrXSUbkwL", "n1fH3ICyh0", "eJnceoR3x4", "cULwbz14tU", "aheHwCbpBi", "Zv4PyJ7ZwB", "Y26j5568Lu", "WiIsfLsuff", "WUkliXlf65", "V3Lo04dpEr", "TYX4EMDWw2", "QXwOt04e0y", "O9er016iGd", "O1RP9AD7un", "Nvt8jC1yyy", "MtLVJZKKEP", "M3oFVSyd0z", "KcVgMZwjyR", "HoZ1mDfdUs", "H6UbswH7fi", "ALalAd74N1", "45F1pqawss", "326OYKs4Uh", "1Lb7HNX1nC", "0iGCHWAI6l" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732831155490, 1733096399169, 1732476601723, 1731936276796, 1732622560254, 1732687240006, 1732308788407, 1731935216698, 1732889261019, 1737524065823, 1730492562091, 1731936582964, 1731935314490, 1731931863895, 1732894099329, 1732622654313, 1732618240289, 1733072352404, 1739445798608, 1732307665281, 1732307068486, 1730423591439, 1734665251799, 1730587959072, 1733058265580, 1733070441592, 1731935878807, 1732696288456 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_LuY1" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_whmb" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission10611/Reviewer_LuY1" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "~Xinyuan_Fan1" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_whmb" ], [ "ICLR.cc/2025/Conference/Submission10611/Area_Chair_6xEE" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "ICLR.cc/2025/Conference/Submission10611/Reviewer_3djw" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ], [ "ICLR.cc/2025/Conference/Submission10611/Authors" ] ], "structured_content_str": [ "{\"comment\": \"I thank the authors for their response and discussions. I believe the modification they introduce have improved their paper.\\n\\nIn light of this new version, I have a couple of extra comments: please review that you are not using 'new model'. I found a one instance of this in line 48. I think removing all this give a more transparent view on this model.\\n\\nAdditionally, could the authors clarify how they use the power iteration in the following two respects:\\n1) How many iterations did they use?\\n2) How they estimate the eigenvectors for ranks larger than one? Did they use the deflation technique?\\n3) Did they apply the same clipping to 1 in the other estimators they tested, apart from the one they proposed? In my initial comment, I pointed out: \\\"Discounting the clipping on 1, which one can always do knowing that the matrix to be estimated has entries below 1\\\". 
Given that their proposed estimator explicitly leverages this additional information (that the matrix has entries in $[0,1]$), it seems that any general-purpose algorithm could also be adapted to incorporate this knowledge.\"}", "{\"comment\": \"Thank you very much! We have added this insight as a comment to the final version of the paper.\"}", "{\"comment\": \"I appreciate the effort the authors have made in addressing my concerns, and I have updated my score in light of these improvements.\"}", "{\"title\": \"Response letter to Reviewer 3djw (part 2)\", \"comment\": [\"*``Difficulty in Extending Results: Although the authors discuss potential extensions, constructing estimators for lower-rank graphons appears challenging as the equations grow complex quickly. Furthermore, the order (in powers of E) of the quantities needed increases with the rank, complicating the computation.''*\"], \"answer\": \"Thank you for your nice comments! In the revision, we have extended our method to address the model with a general $r$. For Assumption 1, admittedly, our condition rules out $\\\\int_0^1G_k(u)du=0$ for some $k$. We note that similar assumptions has made in other works, such as in Theorem 3 of [1]. Exploring how to relax this condition is an important and influential future work.\\n\\n[1] Bickel, P. J., Chen, A., & Levina, E. (2011). The method of moments and degree distributions for network models. The Annals of Statistics, 39(5), 2280\\u20132301.\"}", "{\"title\": \"Replies to Reviewer 3djw\", \"comment\": \"Thank you for discussions. Based on your discussions, we have made some additions and modifications. Below, we address your questions point by point.\\n\\n- *''Statements such as ''...we introduce the additive separable model as a new, parsimonious representation of the graphon...\\\" or ``...introduces our new low-rank graphon model, termed the Additive Separable Graphon Model (ASG)...\\\" seem, in my view, to overstate the novelty of the model. 
These claims could benefit from greater precision to avoid suggesting that the model itself is entirely new when it appears to build on established concepts.''*\", \"answer\": \"Thank you for your suggestion. We have observed that the power iteration method performs similarly to our approach for dense graphons, but underperforms for sparse ones. For more details, please refer to Remark 1. We would also like to emphasize that, unlike these numerical-based methods, our approach is fundamentally rooted in statistical principles. It not only estimates the graphon itself but also provides an estimate of the graphon evaluated at the observed nodes.\"}", "{\"title\": \"Thanks for your comments. I've updated my score\", \"comment\": \"Based on the authors' responses to both my feedback and that of other reviewers, I have increased my score.\"}", "{\"title\": \"Discussion of your response (part 3)\", \"comment\": \"**Regarding Alternative Equations for Eigenvalue Estimation** The newly proposed method adequately addresses my earlier question. However, the cycles appearing in equation (5) should, in principle, be expressible in terms of the spectrum of the graph's adjacency matrix. Considering this and my prior comment regarding the complexity, I find it challenging to identify a clear advantage over spectral methods, particularly for the estimation of $P$. Further elaboration on the specific improvements or distinct benefits of the proposed approach in this context would strengthen its positioning.\"}", "{\"title\": \"Response letter to Reviewer whmb (part 1)\", \"comment\": \"Thank you for your interest in our paper and for your thoughtful review. We appreciate your positive feedback on the clarity of our method, as well as our theoretical results and simulation results. We summarize the main changes according to the valuable suggestions from reviewers as follows.\\n\\n1. 
We provide a complete theoretical framework for general $r$, updated in Lines 203\\u2013338 on pages 4\\u20137. \\n2. Computationally, we introduce a new approximation approach for computations, updated in Lines 339\\u2013381 on pages 7\\u20138. None of the theoretical results are affected, while it achieves a time complexity matching that of matrix multiplication ($ O(n^{2.373})$), eliminating the need for subsampling techniques. \\n3. In Appendix A.3, we provide a method for selecting $r$ when it is unknown. We also point out this in Remark 2 of the main article.\\n4. We update the simulation results and included an additional simulation for $ r = 3 $.\\n5. In Appendix A.1, we include a real data analysis, demonstrating the use of our method for selecting $r$, as well as our algorithms for estimating the connection matrix and the graphon function. \\n6. We update the mathematical proofs for all theoretical results.\\n\\nIn response to your questions, we have made revisions and improvements to the paper. Below, we address your points one by one.\\n\\n- *``Empirically, as shown in the comparison tables, the proposed method yields only marginal improvement over existing methods.''*\", \"answer\": \"We would like to clarify that the improvement in performance is influenced by the sparsity of the graphon. For relatively dense graphons (see Table 2), our method performs similarly to USVT, as both achieve minimax optimal rates for estimating the connection probability matrix. However, for sparse graphons (see Table 3), our method significantly outperforms USVT, achieving MSE reductions of over 50\\\\%. This underscores the versatility of our approach in effectively addressing both dense and sparse networks.\\n\\nImportantly, this superior performance is accompanied by significantly lower computational requirements, with a complexity of $O(n^{2.373})$ (see Table 2). 
A further advantage of our method is that it is tuning-parameter-free, unlike the SAS method (Chan and Airoldi, 2014) and the network histogram method (Olhede and Wolfe, 2014), both of which require careful selection of tuning parameters. Consequently, these methods can be sensitive to the choice of parameters, potentially impacting their performance (see tables below).\\n\\nAdditionally, our method directly leverages the low-rank structure of the graphon to estimate the graphon function itself, not just on discrete grids (as represented by connection probability matrices), but across the entire domain of $[0,1] \\\\times [0,1]$. In contrast, neither the USVT method nor the neighborhood smoothing method can estimate the full graphon function. \\n\\nThus, our method offers a computationally efficient (tuning-free with optimal complexity), theoretically optimal (achieving minimax rates), and comprehensive framework for graphon estimation, outperforming its competitors in these key aspects.\\n\\n### The MSE of the SAS (Chan and Airoldi, 2014) under different parameter selections.\\n| K | 190 | 210 | 230 | 250 (default) | 270 | 290 | 310 |\\n|-----|-------|-------|-------|-------|-------|-------|-------|\\n| MSE | 0.00155 | 0.00192 | 0.00235 | 0.00283 | 0.00340 | 0.00398 | 0.00462 |\\n\\n### The MSE of the Nethist (Olhede and Wolfe, 2014) under different parameter selections.\\n| h | 30 | 40 | 50 (default) | 60 | 70 |\\n|------|-------|-------|-------|-------|------- |\\n| MSE | 0.000873 | 0.000713 | 0.000617 | 0.000582 | 0.000525 |\"}", "{\"comment\": \"Dear reviewer 3djw,\\n\\n&nbsp; \\n\\n---\\n\\nThank you for your response. Based on your feedback, we have addressed your questions point by point, as outlined below.\", \"general_question\": \"*``please review that you are not using 'new model'. I found a one instance of this in line 48. 
I think removing all this give a more transparent view on this model.''*\", \"answer\": \"For power iteration, we set the maximum number of iterations to $500$ and the convergence threshold to $10^{-6}$ (measured by the norm of the difference between successive vectors, see the algorithm in the response to the next question). For all scenarios discussed in our paper, the actual number of iterations is listed in the tables below, and we have added the tables in the appendix.\\n\\nThe results indicate that in most scenarios, the number of iterations required for convergence is relatively small. However, there are exceptions in which the power iteration does not achieve convergence within 500 iterations. \\n\\n| ID | $\\\\hat \\\\lambda_1$ iteration | Std.Dev. | $\\\\hat \\\\lambda_2$ iteration | Std.Dev. | $\\\\hat \\\\lambda_3$ iteration | Std.Dev. |\\n|:---:|:--------------------------:|:--------:|:--------------------------:|:--------:|:--------------------------:|:--------:|\\n| 1 | 6 | 0 | | | | |\\n| 2 | 5 | 0 | | | | |\\n| 3 | 5.01 | 0.0995 | | | | |\\n| 4 | 9 | 0 | 17.99 | 0.5744 | | |\\n| 5 | 6 | 0 | 500 | 0 | | |\\n| 6 | 15.34 | 0.6200 | 8 | 0 | | |\\n| 7 | 27.83 | 1.8871 | 15.1 | 0.7416 | 8 | 0 |\\n\\n**Table1:** The number of iterations for dense settings across 100 independent trials.\\n\\n\\n| ID | $\\\\hat \\\\lambda_1$ iteration | Std.Dev. | $\\\\hat \\\\lambda_2$ iteration | Std.Dev. |\\n|:---:|:--------------------------:|:--------:|:--------------------------:|:--------:|\\n| 2 | 13 | 0 | | |\\n| 3 | 17.15 | 0.3571 | | |\\n| 4 | 31.91 | 1.8713 | 500 | 0 |\\n| 5 | 17.98 | 0.3995 | 500 | 0 |\\n\\n**Table2:** The number of iterations for sparse settings across 100 independent trials.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper explores additive separable graphon models (ASG) as a a flexible framework to capture low-rank structures in network data. 
The authors propose a simple and efficient algorithm based on subgraph counts for a connection probability matrix of rank $r$ being either $1$ or $2$, and further use interpolation to recover the graphon functions. A wide range of simulations is included to back up the performance of their method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The general framework of additive separable graphon models is appealing, which naturally introduces low-rankness for network data.\", \"I do enjoy reading the paper, as it presents ideas in a clear and logical way. This makes the methodology and its contributions straightforward to follow.\", \"The simulations showcase both the efficiency and effectiveness of their method. This makes it easy to assess how the method performs across a range of synthetic network settings.\"], \"weaknesses\": [\"Overall, I would regard the proposed method as a method for estimating the network moment or motifs. That said, I doubt that it would scale easily to even moderate values of $r$, especially due to computational demands. This constraint could make it less practical for most commonly studied stochastic block models, where the connection probability matrix rank is often higher than $2$.\", \"Despite what\\u2019s suggested in the discussion, I think it would help if the authors could explicitly address the estimation process for general ASG(r), as this is particularly important for practitioners looking to apply it more broadly.\", \"As a follow-up question, deciding the appropriate rank $r$ could be a challenge in practice. 
A brief discussion on rank selection\\u2014whether there\\u2019s a heuristic or a data-driven way to guide practitioners\\u2014would make the approach more usable.\", \"Given the wide range of applications inspired by network literature, I suggest the authors analyze at least one real-world dataset to demonstrate the practical utility of their method.\"], \"questions\": \"See weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response letter to Reviewer 3djw (part 3)\", \"comment\": [\"*``The notation $O_p$ used throughout the paper is not defined. In the finite sample results, such as in Theorem 1, the results should hold with high probability\\u2014this is currently unstated.''*\"], \"answer\": \"Thank you for your interest in the specifics. We kindly recall that, in this paper, we consider simple undirected graphs for modeling the network (i.e., without multiple edges or self-loops, and $A_{ij}=A_{ji}$). Consequently, we have $\\\\sum_{i,j}A_{ij}A_{ji}=\\\\sum_{i,j}A_{ij}$, which approximates, up to rescaling, $\\\\lambda_1(\\\\int_0^1 G_1(u)du)^2+\\\\lambda_2(\\\\int_0^1 G_2(u)du)^2$. We propose new methods, instead of the previous sampling approach, for effectively estimating the ASG(r) model utilizing lines and cycles (see Lines 223\\u2013227 on page 5 for a formal definition). We choose these subgraphs because they not only ensure theoretical guarantees (Theorems 3 and 4) but also maintain computational efficiency. As demonstrated in Section 3.3, our method achieves an asymptotically equivalent complexity to matrix multiplication, effectively avoiding the challenges of general subgraph counting. In practice, we also suggest eliminating paths with repetitive nodes subject to the computational constraint to improve the finite sample performance. 
In fact, when $r$ is small, the extra computational cost of this correction can be small, as confirmed by our simulation (see the running time in Table 2 on page 9).\"}", "{\"title\": \"Response letter to Reviewer whmb (part 2)\", \"comment\": [\"*``Theoretically, it raises the question: why is an additive separable scheme considered reasonable? Why not a multiplicative or another type of schema?''*\"], \"answer\": \"Thank you for your question. The additive separable form is a natural choice, grounded in the insight provided by the Hilbert\\u2013Schmidt theorem (see for example [1]), which implies that any bounded graphon function $ f(u,v) $ can be decomposed as: $f(u,v) = \\\\sum_{j=1}^\\\\infty \\\\lambda_j G_j(u) G_j(v),$ where $G_j$ are orthonormal eigenfunctions and $\\\\sum_{j} \\\\lambda_j^2 < \\\\infty$ [2].\\n\\nA practical approach to estimating this infinite series is to truncate it at $r$, retaining the $r$ eigenfunctions corresponding to the $r$ largest absolute eigenvalues. This results in a low-rank structure that aligns naturally with our model and facilitates efficient estimation. An additional advantage of this approach is that the resulting connection probability matrix $P$\\u2014formed by evaluating the graphon function at the nodes\\u2014naturally inherits the same low-rank property as the truncated graphon. We believe this model provides a robust and insightful framework for investigation. That said, exploring multiplicative or other alternative schemas remains an important direction for future research.\\n\\n[1] Szegedy, B. (2011). Limits of kernel operators and the spectral regularity lemma. European Journal of Combinatorics, 32(7), 1156\\u20131167.\\n\\n[2] Bickel, P. J., Chen, A., & Levina, E. (2011). The method of moments and degree distributions for network models. 
The Annals of Statistics, 39(5), 2280\\u20132301.\"}", "{\"title\": \"Response letter to Reviewer LuY1\", \"comment\": \"Thank you for your interest in our paper and for your thoughtful review. We appreciate your acknowledgment of the logical structure of our writing and the positive feedback on the simulation results. We summarize the main changes according to the valuable suggestions from reviewers as follows.\\n \\n1. We provide a complete theoretical framework for general $r$, updated in Lines 203\\u2013338 on pages 4\\u20137. \\n2. Computationally, we introduce a new approximation approach for computations, updated in Lines 339\\u2013381 on pages 7\\u20138. None of the theoretical results are affected, while it achieves a time complexity matching that of matrix multiplication ($O(n^{2.373})$), eliminating the need for subsampling techniques. \\n3. In Appendix A.3, we provide a method for selecting $r$ when it is unknown. We also point this out in Remark 2 of the main article.\\n4. We update the simulation results and include an additional simulation for $r = 3$.\\n5. In Appendix A.1, we include a real data analysis, demonstrating the use of our method for selecting $r$, as well as our algorithms for estimating the connection matrix and the graphon function. \\n6. We update the mathematical proofs for all theoretical results.\\n \\nIn response to your questions, we have made revisions and enhancements to the manuscript. Below, we address your questions point by point.\\n\\n- *``Overall, I would regard the proposed method as a method for estimating the network moment or motifs. That said, I doubt that it would scale easily to even moderate values of $r$, especially due to computational demands. This constraint could make it less practical for most commonly studied stochastic block models, where the connection probability matrix rank is often higher than 2.''*\", \"answer\": \"Thank you for your suggestion. 
We have added a real data example in Section A.1 of the appendix (pages 12\\u201313), which comes from contact records in a primary school. Using this data, we selected $r = 4$ and estimated the corresponding connection matrix and graphon function.\"}", "{\"title\": \"Continued\", \"comment\": [\"*``How they estimate the eigenvectors for ranks larger than one? Did they use the deflation technique?''*\"], \"answer\": \"Thank you for your question. We would like to clarify that in our current version, all the methods (both ours and others') include this adjustment, i.e., using $1 \\\\wedge (0 \\\\vee \\\\hat{p}_{ij})$ to calculate the final MSE and maximum error. We have added the statement at the beginning of Section 4.\"}", "{\"title\": \"Replies to Reviewer 3djw (Continued)\", \"comment\": [\"*``However, the cycles appearing in equation (5) should, in principle, be expressible in terms of the spectrum of the graph's adjacency matrix. Considering this and my prior comment regarding the complexity, I find it challenging to identify a clear advantage over spectral methods, particularly for the estimation of $P$. Further elaboration on the specific improvements or distinct benefits of the proposed approach in this context would strengthen its positioning.''*\"], \"answer\": \"Thank you for your suggestion. The differences between our method and spectral methods are highlighted in our previous response. Notably, our approach allows for the simultaneous estimation of both the connection probability matrix and the graphon function. Additionally, for estimating the connection probability matrix, our method shows clear advantages over spectral methods in the sparse region (see Table 3). Theoretically, even for Lipschitz graphons, spectral methods do not achieve optimal rates in sparse settings (see Xu, 2018). 
Exploring the convergence rate of our method in this sparse region presents a promising direction for future work.\\n\\nWe sincerely appreciate your suggestion to consider the power iteration and spectral methods, which prompted us to further investigate their empirical performance in this revision.\"}", "{\"title\": \"Main changes in the second revision\", \"comment\": \"We have made a second revision to our article, and the main changes are listed below for your reference. In the newly revised version, we have revised the wording to reduce the emphasis on the novelty of the model itself. Instead, we frame our estimation approach as a new attempt to estimate a low-rank representation. Additionally, we have added a new remark (Remark 1) that discusses the power iteration method and included it in our simulation studies (see Tables 2, 3, and 5). Furthermore, we have added Remark 3 (line 289) in the revised manuscript to further highlight the key differences between our method and spectral methods, including differences in motivation, estimation procedure, assumptions, and empirical performance for sparse graphons. It is worth noting that our approach allows for the simultaneous estimation of both the connection probability matrix and the graphon function. For estimating the connection probability matrix, our method shows clear advantages over spectral methods in the sparse region (see Table 3).\"}", "{\"comment\": \"Thank you for your response and for providing the reference. Indeed, a deeper exploration of this topic could be a valuable direction for future work. I would suggest adding this insight as a comment in the paper, if it has not been included already.\\n\\nBeyond this, I believe the revisions have improved the paper, and I have updated my score accordingly.\"}", "{\"comment\": \"We regret being rejected with a score of 666\\u2014reflecting all positive scores\\u2014after substantial improvements. 
We fully respect the decision; however, there are some points we need to clarify:\\n\\n1. During the rebuttal period, we extended our work to cover cases with arbitrary fixed $r$. The algorithms and theory are, of course, firmly grounded in the methods and theoretical framework originally submitted for $r\\\\le 2$. Therefore, the correctness is guaranteed. Furthermore, the reviewers acknowledged this extension without raising any concerns about its correctness. Finally, considering that the ICLR website clearly states that \\\"reviewers are not required to read the appendix\\\", we have reservations about the statement \\\"the changes warrant a new review of the paper for correctness\\\".\\n\\n2. Our method and the spectral method differ significantly in motivation, assumptions, and performance in sparse scenarios, as clearly laid out in Remark 3. Moreover, recovering the graphon function from an estimate of the connection probability matrix is, indeed, far from trivial. First, the latent variables associated with the graphon are both unknown and unordered, which prevents the connection probability matrix from being treated as a lattice sampling of the underlying graphon. Second, further analyzing the estimated connection probability matrix becomes complex, particularly when the goal is to achieve sup-norm consistency for graphon estimation. It is truly a pity that there are still misunderstandings regarding this problem, and, somehow, regarding our methods and theoretical framework.\"}", "{\"title\": \"Discussion on your response (part 2)\", \"comment\": \"**Regarding the Extension of the Results and the Complexity** While it is commendable that the authors have extended their results, the distinction between their approach and spectral methods, particularly for estimating the probability matrix, remains somewhat unclear. 
Given that the complexity of the proposed method is of the order of matrix multiplication, it appears comparable to that of spectral methods.\\n\\nFurthermore, leveraging the additional information of low-rank or approximate low-rank structure (ignoring noise terms), one might expect that methods like power iteration, with a constant number of iterations, could achieve similar results at a comparable complexity. Clarifying how the proposed method diverges from or improves upon these established techniques would help delineate its advantages more effectively.\"}", "{\"title\": \"Discussions on your response (part 1)\", \"comment\": \"I would like to thank the authors for their point-by-point response.\\n\\n**Regarding the novelty of the model** The authors state, \\\"However, this work is likely the first to leverage this model specifically for estimating both the connection probability matrix and, more importantly, the graphon function.\\\" However, this argument pertains more to the methods applied rather than the novelty of the model itself, and therefore, it is not entirely convincing in my opinion. Statements such as \\\"...we introduce the additive separable model as a new, parsimonious representation of the graphon...\\\" or \\\"...introduces our new low-rank graphon model, termed the Additive Separable Graphon Model (ASG)...\\\" seem, in my view, to overstate the novelty of the model. These claims could benefit from greater precision to avoid suggesting that the model itself is entirely new when it appears to build on established concepts.\\n\\n**Regarding the power iteration** The authors note, \\\"we would like to point out that although the connection matrix is rank-1, the corresponding adjacency matrix may still be full-rank.\\\" While this observation is valid, I believe the conclusion should hold approximately, up to noise terms. 
Specifically, for a rank-1 $P$, the adjacency matrix for sufficiently large $n$ is expected to exhibit a dominant eigenvalue, with the remaining eigenvalues falling below a certain threshold. \\n\\n**Regarding the emphasis on graphon function estimation** The authors state, \\\"our motivations differ fundamentally, as our goal is to estimate the graphon function itself, rather than just the connection probability matrix.\\\" However, this distinction is not consistently clear in the manuscript. While the estimation of the graphon function is indeed highlighted as a key contribution (and, in my opinion, the most interesting aspect of the work), the estimation of the connection probability matrix is also prominently presented as a main contribution.\"}", "{\"summary\": \"This paper proposes an additive separable model for graphons, producing a low-rank connection matrix and addressing certain identification issues. An estimation method using subgraph counts has been proposed to estimate the graph parameters. Several numerical experiments are shown to highlight the performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed generalization approach is simple and easy to follow.\\n2. The bounds and minimax rates obtained are reasonable and the authors did a good job in explaining the results and the implications. \\n3. The numerical experiments encompass a wide class of graphon functions and several competing methods. Rather than cherry-picking results, the authors also showed scenarios where their method did not perform well.\", \"weaknesses\": \"The utility of the proposed generalization is questionable from both theoretical and empirical perspectives. Empirically, as shown in the comparison tables, the proposed method yields only marginal improvement over existing methods. Theoretically, it raises the question: why is an additive separable scheme considered reasonable? 
Why not a multiplicative or another type of schema?\", \"questions\": \"n/a\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper studies algorithms for estimating low-rank graphons. They first estimate the low-rank probability matrix for the finite node sample and then use linear interpolation to estimate the full graphon function on [0,1]->[0,1]. Reviewers were generally positive about this paper, feeling that the model studied was nice and simple, the writing is clear and rigorous, and the experimental results are convincing.\\n\\nHowever, after discussion, we felt that 1) some substantial aspects still need further exploration/improvement and 2) given the substantial changes made during the rebuttal, the paper should really see a resubmission and re-review in its new form. A summary of these issues/changes is below:\\n\\n1. The initial version claimed that the low-rank graphon model was new and introduced in this paper. This is inaccurate -- low-rank random graph models and graphons have appeared in significant prior work. The authors edited the paper to clarify that the model was not novel, although some of the language is still a bit vague.\\n2. The initial paper had limited scope in that it only gave theoretical results for rank <= 2 graphons. During the rebuttal period, the authors substantially strengthened the results to capture rank r graphons for any r and the main algorithm (Algorithm 2) has changed substantially. This is a very positive outcome of the rebuttal period; however, the changes warrant a new review of the paper for correctness.\\n3. As one reviewer pointed out, the paper's approach for estimating the edge probability matrix is very close to a spectral method: it looks like one iteration of the power method for estimating the top eigenvector of the adjacency matrix. 
While the authors have added some discussion of spectral methods, we feel that before publication, a better understanding of how these methods compare is really needed. Theoretically, how would results change if one instead just computed the top eigenvector of the adjacency matrix (or top r eigenvectors in the rank-r case) to estimate the connection probability matrix? It does not feel enough to simply mention that the method is similar to spectral methods without exploring this further. The authors point out that spectral methods only estimate the finite-sized connection probability matrix, and not the full graphon function on [0,1] x [0,1]. But this was not a very convincing argument: after all, the two main algorithms presented in this work also focus on estimating the connection probability matrix, and the graphon is then estimated through interpolation. Interpolation could be applied to any method that first estimates the finite connection probability matrix.\\n\\nOn balance, we feel that given the above issues, the paper warrants a resubmission/re-review.\", \"additional_comments_on_reviewer_discussion\": \"See main meta review.\"}", "{\"summary\": \"This paper proposes a method for estimating both the connection probability matrix and the graphon function for low-rank graphons of rank 1 and 2. The authors provide finite-sample error bounds for the connection probability matrix estimation, specifically in the max-norm (i.e., $\\\\|A\\\\|_ {max} = \\\\max_ {ij} |A_{ij}|$). For the graphon function estimation, they establish both asymptotic and finite-sample bounds under the sup-norm on $[0,1]^2$. 
The experimental results on synthetic data highlight the method's performance across various rank-1 and rank-2 graphons.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1- **Clarity and Mathematical Rigor**: The paper is generally well-written and accessible, with clear explanations that guide the reader through the methodology. The methods are simple, and the proofs appear mathematically sound and contribute to a solid theoretical foundation.\", \"2__competitive_experimental_results\": \"The proposed methods demonstrate competitive performance on synthetic datasets.\", \"weaknesses\": \"1- **Novelty**: Regarding the proposed model Additive Separable Graphons ASG($r$), the authors state in line 076 that this model is new. However, it appears to me as a generic low-rank graphon, which is a well-recognized particular case of a general graphon already discussed in the literature (e.g., Chan and Airoldi, 2014; Xu, 2018). Indeed, for any rank-$r$ graphon, the decomposition given in Eq. $(1)$ follows directly from the spectral theorem. I struggle to see this as a new model, and in my opinion, it would be beneficial for the authors to justify why ASG($r$) is indeed novel.\\n\\nIn terms of the methods, **Algorithm 1** resembles a step of the power iteration applied to the adjacency matrix $E$ with the starting vector $\\\\mathbf{1}$ (the vector of all ones). Taking $v= \\\\frac{E\\\\mathbf{1}}{\\\\|E\\\\mathbf{1}\\\\|_ 2}$, we have $v_i=\\\\frac{\\\\sum_ {j}E_{ij}}{\\\\|E\\\\mathbf{1}\\\\|_ 2}=\\\\frac{d_i}{\\\\|d_i\\\\|_ 2}$, where $d_i$ is the degree of the node $i$, as in the paper. On the other hand, the estimation of the eigenvalue is given by the Rayleigh quotient $\\\\lambda'_ 1=\\\\frac{\\\\mathbf{1}^\\\\top E \\\\mathbf{1}}{n}=\\\\frac{\\\\sum_ {i,j}E_{ij}}{n}=\\\\frac{\\\\sum_ {i,j:i\\\\neq j}E_{ij}}{n}$. 
With this normalization, $v_i$ is an estimate of $\\\\frac{G_1(U_i)}{\\\\sqrt{n}}$, and putting this together yields an estimate of the probability matrix: $p'_ {ij}=n\\\\lambda'_ 1(vv^\\\\top)_ {ij}=\\\\frac{\\\\sum_ {i,j:i\\\\neq j}E_{ij}}{\\\\sum_ {i,j}d_id_j}d_id_j$. The proposed estimator is $\\\\hat{p}_ {ij}=1\\\\wedge \\\\frac{\\\\sum_ {i,j:i\\\\neq j}E_{ij}}{\\\\sum_ {i,j:i\\\\neq j}d_id_j}d_id_j$. Discounting the clipping on $1$, which one can always do knowing that the matrix to be estimated has entries below $1$, the only difference is that the factor $\\\\sum_ {i,j:i\\\\neq j}d_id_j$ is the denominator of $\\\\hat{p}_ {ij}$, compared to $\\\\sum_ {i,j}d_id_j$ in $p'_ {ij}$. In the dense regime treated here, I believe that the difference is negligible. I suspect **Algorithm 2** could be similarly treated. The power iteration is known to converge in one iteration for matrices of rank 1 and quickly for low-rank matrices. Comparing with this approach may be beneficial, as the direct competitors here are spectral methods, and it\\u2019s possible the methods are analogous.\", \"2__difficulty_in_extending_results\": \"Although the authors discuss potential extensions, constructing estimators for higher-rank graphons appears challenging, as the equations grow complex quickly. Furthermore, the order (in powers of $E$) of the quantities needed increases with the rank, complicating the computation. Although the subsampling technique seems feasible, it lacks a thorough theoretical foundation. For example, in **Remark 2**, the justification only considers asymptotics, and I suspect finite sample considerations could introduce additional variance. Providing more detail on possible extensions would add value.\", \"3__somewhat_unfair_experimental_comparisons\": \"In their experiments, the authors compare their method to more generic methods that work under broader assumptions (not strictly rank 1 or 2). For example, **USVT** can adapt to various low-rank situations. 
I consider this comparison somewhat unfair, as the proposed method is tailored to specific ranks in the application, effectively using additional information. Additionally, the authors note that methods are used with default values. Spectral methods with low-rank information might perform as efficiently as the proposed method.\", \"4__limited_applicability\": \"Limiting the methodology to rank-1 and rank-2 graphons restricts its applicability. Additionally, for rank-2 graphons, **Assumption 1** feels restrictive. For instance, the condition $\\\\int^1_0G_k(u)du\\\\neq 0$, for all $k$, is unmet for $k = 2$ when $G_1$ is constant (given $L_2$ orthogonality). It seems natural to consider polynomial bases as the elements $G_k$ in a low-degree graphon, which will typically include a constant function.\", \"questions\": \"1- Could you address my comments in the weaknesses section above, point by point?\\n\\n2- The notation $O_p$ used throughout the paper is not defined. \\n\\n3- In the finite sample results, such as in Theorem 1, the results should hold with high probability\\u2014this is currently unstated.\\n\\n4- In line 434, the authors mention that the absence of a tuning parameter \\\"enhances robustness.\\\" Could they provide a clearer justification for this claim?\\n\\n5- If I am following their proof correctly, it seems that alternative equations involving $\\\\hat{\\\\lambda}_ 1$ and $\\\\hat{\\\\lambda}_ 2$ could be derived for the rank-2 case. Specifically, it seems that $\\\\sum_{i,j}A_ {ij}A_{ji}$ should approximate up to rescaling $\\\\hat{\\\\lambda}^2_1+\\\\hat{\\\\lambda}^2_2$. Is this correct, and would it be useful? How does this approach compare with the method they propose?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I would like to thank the authors once again for their response. 
I believe the introduced modifications have enhanced the scope of their results. Out of curiosity, I have a final question: do the authors have any insights into why their method appears to experimentally outperform competing methods in the sparse regime?\"}", "{\"comment\": \"Thank you for your question. Our understanding is informed by Theorem 1 in Bickel et al. (2011), which establishes favorable convergence rates for the method of moments estimator, particularly in sparse settings. Our estimator builds on this foundation by leveraging specific network moments, although the precise details of its convergence properties require further investigation. Given the scope of this paper and time constraints, we propose leaving a deeper exploration of this topic for future research.\"}", "{\"title\": \"Response letter to Reviewer 3djw (part 1)\", \"comment\": \"Thank you for reviewing our manuscript and providing valuable feedback. It is encouraging that you could acknowledge the clarity, theoretical foundation, and simulation results of our paper. We summarize the main changes according to the valuable suggestions from reviewers as follows.\\n\\n1. We provide a complete theoretical framework for general $r$, updated in Lines 203\\u2013338 on pages 4\\u20137. \\n2. Computationally, we introduce a new approximation approach for computations, updated in Lines 339\\u2013381 on pages 7\\u20138. None of the theoretical results are affected, while it achieves a time complexity matching that of matrix multiplication ($O(n^{2.373})$), eliminating the need for subsampling techniques. \\n3. In Appendix A.3, we provide a method for selecting $r$ when it is unknown. We also point this out in Remark 2 of the main article.\\n4. We update the simulation results and include an additional simulation for $r = 3$.\\n5. 
In Appendix A.1, we include a real data analysis, demonstrating the use of our method for selecting $r$, as well as our algorithms for estimating the connection matrix and the graphon function. \\n6. We update the mathematical proofs for all theoretical results.\\n\\nIn response to the weaknesses and questions you pointed out, we have made some additions and clarifications. Below, we address your questions point by point. \\n\\n- *``Regarding the proposed model Additive Separable Graphons ASG(r), the authors state in line 076 that this model is new. However, it appears to me as a generic low-rank graphon, which is a well-recognized particular case of a general graphon already discussed in the literature (e.g., Chan and Airoldi, 2014; Xu, 2018). Indeed, for any rank-r graphon, the decomposition given in Eq. (1) follows directly from the spectral theorem. I struggle to see this as a new model, and in my opinion, it would be beneficial for the authors to justify why ASG(r) is indeed novel.''*\", \"answer\": \"Thank you for your insightful comments! The connection between our method and spectral methods is indeed an interesting topic. When $r = 1$, the results from the spectral method closely align with those of our method, as shown in Table 5 on page 14.\\n\\nRegarding power iteration, we would like to point out that although the connection matrix $P$ is rank-1, the corresponding adjacency matrix $E$ may still be full-rank. Additionally, both the spectral method and the power iteration method face challenges when extended to estimating graphon functions over $[0,1]$.\\n\\nFor $r > 1$, the relationship between these methods and ours remains unclear. We have not identified any definitive connections, either through theoretical analysis or simulations. Moreover, our motivations differ fundamentally, as our goal is to estimate the graphon function itself, rather than just the connection probability matrix. 
It is important to note that the graphon function captures the core of the statistical model, while the connection matrix is merely a realization of the graphon function evaluated at a finite set of discrete points. Furthermore, the connection probability matrix is not a proper limit object in random graph models, whereas the key advantages of graphon models arise precisely in this limiting sense.\\n\\nFrom a practical perspective, our method tends to be slightly faster than USVT, as implemented in the R package provided by the authors of USVT.\"}", "{\"comment\": \"Thank you very much for your response and your positive feedback on our revisions. We truly appreciate it.\\n\\nAlso, we would like to let you know our second revision. The main changes are listed below. In the newly revised version, we have revised the wording to reduce the emphasis on the novelty of the model itself. Instead, we frame our estimation approach as a new attempt to estimate a low-rank representation. Additionally, we have added a new remark (Remark 1) that discusses the power iteration method and included it in our simulation studies (see Tables 2, 3, and 5). Furthermore, we have added Remark 3 (line 289) in the revised manuscript to further highlight the key differences between our method and spectral methods, including differences in motivation, estimation procedure, assumptions, and empirical performance for sparse graphons. It is worth noting that our approach allows for the simultaneous estimation of both the connection probability matrix and the graphon function. For estimating the connection probability matrix, our method shows clear advantages over spectral methods in the sparse region (see Table 3).\\n\\nOnce again, we sincerely appreciate your thoughtful review and valuable suggestions for our article.\"}" ] }
9LAqIWi3QG
R3HF: Reward Redistribution for Enhancing Reinforcement Learning from Human Feedback
[ "Jiahui Li", "Tai-Wei Chang", "Fengda Zhang", "Long Chen", "JUN ZHOU" ]
Reinforcement learning from human feedback (RLHF) provides a paradigm for aligning large language models (LLMs) with human preferences. This involves the initial training of a reward model based on pairwise human feedback. The reward model is subsequently utilized in reinforcement learning to assess the scores of each generated sentence as a whole, further guiding the optimization of LLMs. However, current approaches have a significant shortcoming: They allocate a single, sparse, and delayed reward to an entire sequence of output. This may overlook some significant individual contributions of each token towards the desired outcome. To overcome this limitation, our paper proposes a novel reward redistribution method called R3HF, which facilitates a more fine-grained, token-level reward allocation. Specifically, our method treats the reward prediction task of the reward model as a regression problem. As a result, the redistributed rewards are computed by evaluating the specific contribution of each token to the reward model's output. This detailed approach improves the model's understanding of language nuances, leading to more precise enhancements in its performance. Our method is crafted to integrate seamlessly with most current techniques while incurring minimal computational costs. Through comprehensive experiments across diverse datasets and tasks, we have verified the effectiveness and superiority of our approach.
[ "RLHF" ]
https://openreview.net/pdf?id=9LAqIWi3QG
https://openreview.net/forum?id=9LAqIWi3QG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uzWPWD1hxO", "g5fRGa2Job", "ZC1S4D3CHi", "V1TDLMaLFM", "JLfHRunCj2", "3gC4qUtkwF" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730758834013, 1730580384019, 1729913613880, 1729071621406, 1730733522361, 1731437715404 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4789/Reviewer_iha9" ], [ "ICLR.cc/2025/Conference/Submission4789/Reviewer_8UNW" ], [ "ICLR.cc/2025/Conference/Submission4789/Reviewer_Hqh7" ], [ "ICLR.cc/2025/Conference/Submission4789/Reviewer_bqTD" ], [ "ICLR.cc/2025/Conference/Submission4789/Reviewer_GAkt" ], [ "ICLR.cc/2025/Conference/Submission4789/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This work proposes a modification to reward labelling for RLHF training. Instead of labelling each sequence with a single reward at the EOS token, \\\"reward redistribution\\\" proposes to give a reward to each token (like a value function). The reward for a token is the difference in reward model score if this token is added. The reward model is not re-trained but used as is for this, but this changes the loss for PPO training.\\n\\nThey demonstrate their method applied with PPO outperforms a baseline PPO on question-answering, a modified TLDR, and safeRLHF. They evaluate win-rates against their own trained SFT model using their own reward model as well as GPT-4.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is easy to read and describes the method succintly.\", \"weaknesses\": \"This work is fundamentally flawed theoretically, practically, and in experiment setup. I believe it is impossible for the authors to adequately addresss my concerns in the rebuttal period and recommend they fundamentally rethink their direction.\\n\\nTheoretically, RLHF is not a sequential MDP as said by the authors and this invalidates the need for reward redistribution. 
The authors say that generating N tokens from an LLM can be viewed as N steps in an MDP. But this assumes that: 1. the states of the MDP are simply the current sequence, 2. taking an action is equivalent to choosing the next token, and 3. there is no transition probability, because choosing a next token deterministically adds the token to the current sequence. Single-turn RLHF is more accurately seen as a single step: generating a full sequence of tokens and then getting feedback from a reward model. This is not an MDP but a contextual bandit. If generating a full sequence is a contextual bandit, then it makes sense for it to have a single reward, and not need N rewards per token. Their approach could still be shown to be empirically effective, but it isn't. \\n\\nPractically, it has been shown on TLDR that the learned reward model is not an effective method of labelling individual tokens with reward (Huang et al, 2024). Trained reward models are explicitly not good at giving reward for non-EOS tokens. Instead, actor-critic methods explicitly learn a value function that does this. The authors somehow miss the clear connection that they are trying to create a value function from the reward model. But this doesn't make sense, since their baseline method, PPO, already learns a value function! Furthermore, recent works like RLOO (Ahmadian et al, 2023) have argued that value functions are fundamentally not good in the LLM setting. The authors never compare their method to value functions or to Monte Carlo estimates of value (like RLOO), and do not have any experiments explaining the qualitative benefits of their approach and *why* it works in practice.\\n\\nThe experimental setup is poor. The TLDR task they use is not at all equivalent to the original TLDR task (Stiennon et al, 2020) because the SFT dataset is somehow 14,900 examples and not ~117000 examples as it should be. The authors should follow Huang et al (2024) for open-source TLDR experiments.
The authors use LLaMA 1, which is outdated at this point. They also compare against their own SFT baseline as opposed to existing human-written baselines, which is the standard practice (again see Huang et al (2024)). RLHF should not be evaluated using point metrics but with curves comparing KL to performance (Gao et al, 2022). Evals should also not use the training reward, as this is what the policy was explicitly trained on (the authors also have GPT-4 win-rates, but these results are few). Figure 5 demonstrates clear overfitting of the baselines but not the authors' method; simply reducing the learning rate of the baselines would probably fix their performance. There are other issues but I cannot enumerate them all.\\n\\nOverall, the approach is quite weak. The authors would make a much stronger case if they showed their reward redistribution is actually effective on qualitative examples. Furthermore, it makes no sense why the reward model would be effective on non-EOS tokens, and perhaps training it to fulfill the requirements of section 3.2 makes more sense.\", \"questions\": \"What is the point of section 3.4 (connection to DPO)? DPO assumes a contextual bandit formulation, so your comparison is moot.\\n\\nWhy are your results in Table 2a so different from Table 2b? DPO is the worst-performing method by far in 2a and essentially the same as your top-performing method in 2b. Clearly your reward model and GPT-4 are not aligned.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Experimental results show that this fine-grained approach enhances learning efficiency and reduces dependency on human labeling.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The idea of redistributing rewards at the token level in RLHF is compelling. Traditional RLHF methods usually assign a single, sequence-level reward, but these approaches cannot capture the contributions of individual tokens, thus limiting the model\\u2019s optimization efficiency and nuanced understanding of language. This paper introduces a method that provides fine-grained, token-level reward allocation.\\n\\nThe workflow of R3HF is reasonable: it provides immediate, token-specific feedback to make language models more responsive to human preferences.\", \"weaknesses\": \"This paper does not sufficiently explore existing credit assignment and reward redistribution methods in the context of RLHF.\\n\\nThe proposed method relies heavily on the reward model's accuracy, assuming that the regression model can precisely represent each token's reward value in time-difference calculations. Any inaccuracies in the reward model can directly impact reward redistribution, leading to flawed allocations. \\n\\nAlso, this method assumes that each token's contribution can be accurately calculated using a time-difference approach, which may not be feasible for sequence generation tasks. For long sequences, and especially when the model relies on context spanning entire sentences or paragraphs, the method may be inadequate.\", \"questions\": \"Please note: in the presentation, please pay attention to the inconsistent use of tenses in the experimental section. And the repetitive use of \\\"Eq. 
equation\\\" impacts readability.\", \"authors_may_consider_comparing_with\": \"@article{wu2023fine,\\n title={Fine-grained human feedback gives better rewards for language model training},\\n author={Wu, Zeqiu and Hu, Yushi and Shi, Weijia and Dziri, Nouha and Suhr, Alane and Ammanabrolu, Prithviraj and Smith, Noah A and Ostendorf, Mari and Hajishirzi, Hannaneh},\\n journal={Advances in Neural Information Processing Systems},\\n volume={36},\\n pages={59008--59033},\\n year={2023}\\n}\\n\\n@misc{chan2024denserewardfreereinforcement,\\n title={Dense Reward for Free in Reinforcement Learning from Human Feedback}, \\n author={Alex J. Chan and Hao Sun and Samuel Holt and Mihaela van der Schaar},\\n year={2024},\\n eprint={2402.00782},\\n archivePrefix={arXiv},\\n primaryClass={cs.LG},\\n url={https://arxiv.org/abs/2402.00782}, \\n}\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"To address the sparse reward issue in current RLHF, this paper proposes a novel reward redistribution method, R3HF, that allows token-level reward allocation.\\nSpecifically, the proposed method formulates the reward prediction task as a regression problem that computes redistributed rewards by evaluating the specific contribution of each token to the reward model\\u2019s output, which improves the model\\u2019s understanding of language nuances.\\nExperimental results verify the proposed method.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The overall direction of learning a dense reward function for RLHF in LMs is promising.\\n2. The experiments and ablation studies are comprehensive.\", \"weaknesses\": \"1. Methods for learning a dense per-step reward without additional fine-grained label data have been clearly proposed in the literature, e.g., [1,2] and the references therein.
The authors ought to have an adequate citation, discussion, and ideally comparison with these prior works. Otherwise, the contribution of this work will be hard to justify. For example, the central idea in L52-85 has been stated similarly in [2].\\n\\n[1] Feng, Yihao, et al. \\\"Fantastic Rewards and How to Tame Them: A Case Study on Reward Learning for Task-oriented Dialogue Systems.\\\" The Eleventh International Conference on Learning Representations.\\n\\n[2] Yang, Shentao, et al. \\\"Preference-grounded token-level guidance for language model fine-tuning.\\\" Advances in Neural Information Processing Systems 36 (2023).\\n\\n2. Source code does not seem to be anonymously available for review purposes.\\n\\n3. Eq. (7) seems confusing. IMHO, starting from the second line, all $\\\\tilde r^{RM}$ should be $\\\\mathcal{R}_\\\\phi$.\", \"questions\": \"1. How is the per-step reward function $\\\\mathcal{R}_\\\\phi$ trained?\\n2. What is the meaning of \\\"sequence-wide return\\\" and \\\"state-action sequence\\\" in L96?\\n3. What is the meaning of $p^*$ in L155?\\n4. In Table 1 (a), which reward model is used to calculate the Average Score? And why is this evaluation reasonable given that the learned reward model may be far from perfect?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper aims to improve the standard RLHF setting for LLMs (optimizing a reward function learned by a Bradley-Terry model under a KL constraint) by considering more fine-grained rewards. For this, they seem to essentially propose what is already known in the literature as potential-based reward shaping. They choose the potential to be the reward model logit score of intermediate tokens (instead of the full sequence).
The paper also contains limited experiments against a small number of baselines.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The paper considers a relevant topic: how to best align LLMs to human preferences.\", \"weaknesses\": \"1. The paper seems to be of very limited novelty, as the proposed reward formulation seems to essentially correspond to what is known as potential-based reward shaping (see [A]). As a result, the actual contribution of the paper seems trivial: It seems to be limited to the choice of the potential function in the form of the reward model logits of intermediate tokens.\\n2. In addition, the authors do not clearly motivate why the logit scores of intermediate tokens under the reward model are a good potential function for reward shaping. After all, there is a clear train-test mismatch: SotA reward models are usually only trained by looking at the logits of the last token in the sequence (e.g. an end-of-string token). There is no motivation for why any other logit would yield a reasonable assessment of the quality of the subsequence.\\n3. As the authors correctly point out, the optimal solution to the (regularized) MDP with potential-based reward shaping is the same as the optimal solution to the (regularized) MDP with sequence-level rewards. This limits the potential of this approach, as we can only hope to get better optimization during finetuning but not a better optimum. The authors do not seem to provide a convincing, principled case for why their approach should actually provide better training dynamics.\\n4. The experimental section is extremely weak, as the authors only compare their approach to PPO and DPO.
Given that this paper is about improving RL training dynamics (see 3), it should compare to both simple baselines (in particular REINFORCE variants with various obvious choices of sequence- and token-level learned and non-learned baselines) and SotA RL finetuning algorithms like the inner optimization loop of WARP (see [B]) or BOND (see [C]). In addition, it is well known that hyperparameter tuning is critical for RLHF, and there are, for example, clear interactions between learning rate, regularization, number of steps, and final performance, so the authors should add experiments showcasing that their results are robust to the choice of such hyperparameters.\\n\\n[A] Wiewiora, E. (2003). Potential-based shaping and Q-value initialization are equivalent. Journal of Artificial Intelligence Research, 19, 205\\u2013208.\\n[B] WARP: On the Benefits of Weight Averaged Rewarded Policies https://arxiv.org/abs/2406.16768\\n[C] BOND: Aligning LLMs with Best-of-N Distillation https://arxiv.org/html/2407.14622v1\", \"questions\": \"Unless I strongly misunderstood the paper, this paper requires major edits that cannot be addressed in a rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel reward redistribution method known as R3HF, designed to enable a more precise, token-level allocation of rewards. The approach reframes the reward prediction task of the reward model as a regression problem.
Consequently, the redistributed rewards are determined by assessing the individual contribution of each token to the output of the reward model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper proposes a novel reward shaping method in the LLM setting.\", \"The experimental results are good.\"], \"weaknesses\": [\"The reward design seems heuristic, and it is probably not well suited to some complex reasoning settings.\", \"I don't see too much advantage from Figure 3.\"], \"questions\": [\"Can this method scale to more datasets like reasoning tasks (math, coding, etc.)?\", \"Can you analyze the reward distribution of your method?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
9KxnxWOBA5
Towards Optimal Multi-draft Speculative Decoding
[ "Zhengmian Hu", "Tong Zheng", "Vignesh Viswanathan", "Ziyi Chen", "Ryan A. Rossi", "Yihan Wu", "Dinesh Manocha", "Heng Huang" ]
Large Language Models (LLMs) have become an indispensable part of natural language processing tasks. However, autoregressive sampling has become an efficiency bottleneck. Multi-Draft Speculative Decoding (MDSD) is a recent approach where, when generating each token, a small draft model generates multiple drafts, and the target LLM verifies them in parallel, ensuring that the final output conforms to the target model distribution. The two main design choices in MDSD are the draft sampling method and the verification algorithm. For a fixed draft sampling method, the optimal acceptance rate is a solution to an optimal transport problem, but the complexity of this problem makes it difficult to solve for the optimal acceptance rate and measure the gap between existing verification algorithms and the theoretical upper bound. This paper discusses the dual of the optimal transport problem, providing a way to efficiently compute the optimal acceptance rate. For the first time, we measure the theoretical upper bound of MDSD efficiency for vocabulary sizes in the thousands and quantify the gap between existing verification algorithms and this bound. We also compare different draft sampling methods based on their optimal acceptance rates. Our results show that the draft sampling method strongly influences the optimal acceptance rate, with sampling without replacement outperforming sampling with replacement. Additionally, existing verification algorithms do not reach the theoretical upper bound for both without replacement and with replacement sampling. Our findings suggest that carefully designed draft sampling methods can potentially improve the optimal acceptance rate and enable the development of verification algorithms that closely match the theoretical upper bound.
[ "speculative sampling" ]
Accept (Poster)
https://openreview.net/pdf?id=9KxnxWOBA5
https://openreview.net/forum?id=9KxnxWOBA5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zV8p9MNOno", "vqI5Exc6sY", "uiwO3vmiHh", "uZzSmp8HyN", "nGDCqP0zbP", "lprnbwbkHl", "lcMpX6FwtE", "i0eVgmy9Q1", "hvXNA05RYG", "ekRDhk8gHm", "ZvFq87RTwt", "XrR2G4fYgo", "TzCd1vvAPh", "SU6JVG12gT", "STvr8WjVlj", "Q70UyUq0LS", "P8C05RctmO", "MJnO3H1OiN", "JUjpczPsgL", "JQuKON8ZRt", "I6UUY2jbGi", "HoF0HmLkV8", "D6D0doUZsr", "BsEizFlkdA", "9nyN4m0LGa", "4CsixilsQV" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1731658753415, 1732342283666, 1731658732310, 1729496145535, 1732338091339, 1733269247995, 1733090883097, 1734972394634, 1731658286548, 1731658045951, 1731658554396, 1733269097401, 1733269068351, 1730696306277, 1730790655618, 1732338013884, 1731658323609, 1732038077814, 1730688469971, 1731988728456, 1732492556631, 1731657657215, 1732037937822, 1737524247058, 1731658491240, 1732338182825 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_DHWn" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_ayAW" ], [ "ICLR.cc/2025/Conference/Submission13244/Area_Chair_2gDE" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_ayAW" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_q6tv" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_WnL3" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_DHWn" ], [ "ICLR.cc/2025/Conference/Submission13244/Reviewer_q6tv" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ], [ "ICLR.cc/2025/Conference/Submission13244/Authors" ] ], "structured_content_str": [ "{\"title\": \"Author's Response (2)\", \"comment\": \"> In the real-world applications of speculative decoding, the acceptance rate of different position is usually not i.i.d. I wonder if this will affects the proposed theory and greedy draft sampling methods.\\n\\nOur work does not assume the acceptance rates at different positions to be i.i.d. at any point, so this does not affect our proposed theory or methods.\\n\\n> In Table 1, some results of empirical is even higher than the theoretical upper bound. Could you please provide a detailed explanation?\\n\\nThe observed discrepancies are due to random errors and are not statistically significant. Speculative decoding inherently involves randomness, and whether a token is accepted is a random behavior. We have done our best to measure the statistical errors and faithfully report the results in each table and figure. In Table 1, although there are a few positive values, none of them are statistically significant at \\u03b1=0.05. All statistically significant signals indicate that the empirical results are lower than the theoretical upper bound. 
We hope these explanations help clarify the results.\\n\\n> In ablation study 1, the authors show an interesting phenomenon that the impact of temperature is non-monotonic. Different methods consistently show a turn around temperature. Could you please provide a detailed explanation?\\n\\nWe have experimentally observed the non-monotonic behavior of acceptance rate with respect to temperature in the speculative decoding process of current language models. Recently, independent researchers have discovered similar behaviors in \\\"Temperature-Centric Investigation of Speculative Decoding with Knowledge Distillation\\\" (Figure 1), although their specific settings differ as they do not consider multi-draft scenarios. They found that the acceptance rate peaks at a temperature of 0.2, while in our Figure 1(a) and (c) with sampling with replacement, the acceptance rate is highest at temperatures between 0.1 and 0.3. We also found that this non-monotonicity depends on the dataset, as evident in Figure 1(b) where the behavior changes with a different dataset. Our work, with dense sampling in the 0.8-1.0 range, discovered new non-monotonic behaviors that were not observed in the \\\"Temperature-Centric Investigation of Speculative Decoding with Knowledge Distillation\\\" study. \\n\\nThe non-monotonicity is a recently discovered phenomenon, with dedicated and ongoing independent research still in its early stages. We do not yet know the exact cause of this behavior. Our findings, although not part of the 4 main contributions summarized in the introduction, provide additional evidence for this phenomenon.\\n\\n> Can proposed greedy draft sampling methods adapt to other retrieval-based speculative decoding methods? (e.g. Lookahead Decoding [1] and REST [2])\\n\\nFirstly, we note that Lookahead Decoding is not retrieval-based but rather attempts to consider multiple steps ahead.
Our greedy draft method in this paper only considers the 1-step case when deriving the theoretical upper bound. Combining the strengths of both approaches is non-trivial and left for future work. Secondly, in principle, our method can be applied to REST (Retrieval-based Speculative Decoding). The authors designed the retrieval process to be deterministic, resulting in a delta distribution for the draft distribution. In this case, the optimal transport is trivial. It is possible to design non-delta draft distributions based on the retrieval results, in which case our greedy draft method can be applied. This can be explored in future work.\\n\\nWe apologize for any misunderstandings and have addressed each comment individually. \\n\\nOnce again, we appreciate your time and effort in reviewing our paper. We have incorporated your suggestions to improve the presentation of the paper. \\n\\nWe would be immensely grateful if you could re-evaluate our paper considering the clarifications we have provided.\\n\\nSincerely,\\n\\nThe Authors\"}", "{\"title\": \"Appreciating Your Feedback\", \"comment\": \"Thank you for your time and recognition of our novel contribution. We would appreciate your feedback on whether any concerns remain as the discussion phase comes to a close. We noticed that the other paper you invited for comparison also provided one, and we wanted to clarify a minor inaccuracy in \\\"The experiments reported in the other paper appear to be limited to a specific draft sampling scheme\\\": our experiments considered multiple sampling schemes, including with/without replacement, SpecHub, and a new greedy draft distribution. Nevertheless, we believe both papers make unique contributions. We are grateful for this discussion opportunity.\"}", "{\"title\": \"Author's Response (1)\", \"comment\": \"Thank you for volunteering your time and effort to review our paper. We have carefully considered your feedback and improved the presentation of the manuscript.
Below, we address your comments point by point.\\n\\n> The motivation and background should be clearly illustrated, and the authors could consider adding some intuitive examples and figures to improve the presentation.\\n\\nWe have added additional illustrations to enhance the presentation and help readers better understand our work.\\n\\n> Please discuss the connections to the related works. It is confusing for readers without knowledge of SpecTr. \\n\\nWe have included discussions on the connections to important related works in Appendix A and comparisons to other related works in Appendix B. If there are specific works you would like us to elaborate on regarding the connections, please let us know and we would be happy to include additional discussions.\\n\\n> Besides, please give a clear notation section. For example, the number of draft tokens for each draft position and the draft length should be clarified.\\n\\nWe have added a notation section in Section E to clarify the symbols used in the paper. In the main text, $n$ represents the number of draft tokens for each draft position. The draft length is not explicitly discussed, as our theory and algorithm work independently for each position. The specific values used in the experiments are described in the header of Table 2. For example, in the first column, the number of draft tokens for each draft position is 2 and the draft length is 4. In the second column, the number of draft tokens for each draft position is 4 and the draft length is 3. The third column corresponds to the more complex setting proposed in EAGLE, where the number of draft tokens and draft length vary for different branches. We hope these clarifications help improve the understanding of our notations.\\n\\n> Please describe the whole algorithm of the greedy draft sampling method. 
For example, after constructing the first draft tokens for the first draft position, how can we construct the following draft tokens?\\n\\nWe provide pseudocode in Section D to help illustrate the algorithm. After constructing the first draft token, we append it to the prompt, feed it into the draft model, and obtain the next set of draft tokens through sampling. The process becomes more complex when considering a general tree topology where different branches may have different depths. In each step, the construction of draft tokens follows Section 5.1 and the verification of draft tokens follows Section 5.2. The specific implementation details, including parallelization for acceleration, can be found in the provided code.\\n\\n> Besides, this algorithm is similar to top-k sampling, with only an additional randomly sampled token. The authors should discuss their difference.\\n\\nOur greedy draft construction method differs significantly from top-k sampling. Top-k sampling constrains the output space to k tokens and generates a single token, while our method does not constrain the output space and generates k tokens.\\n\\n> The experiments could be strengthened by evaluating the block-wise mean accepted tokens and real-world speedup. \\n\\nThe real-world speedup results are already provided in the \\\"Speed\\\" column of Table 2. We have added the block-wise mean accepted tokens results in Section F, with the original data already available in the supplementary materials to ensure reproducibility.\\n\\n> Besides, more experiments with different model scales (e.g. 33B, 70B) and different benchmarks (e.g. MT-Bench [1] and Spec-Bench [2]) are necessary to demonstrate the conclusions.\\n\\nWe have conducted experiments on 4 datasets and 4 model architectures, including the MT-Bench benchmark you mentioned (see Table 2). While we agree that more experiments with larger models and additional datasets would be beneficial, the computational cost quickly becomes prohibitive.
\\n\\nWe would appreciate it if you could kindly revisit your evaluation, taking into consideration the MT-Bench results, real-world speedup, and the newly added block-wise mean accepted tokens results in the paper.\"}", "{\"summary\": \"This paper studies the problem of multi-draft speculative decoding (MDSD), where the draft model provides multiple draft tokens for each draft position. The authors first provide a way to compute the optimal acceptance rate. Then, they measure the theoretical upper bound of MDSD with large vocab size and quantify the gap between existing verification algorithms and this bound. Besides, the authors also provide a greedy draft sampling method to approach the theoretical upper bound of MDSD.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea of transforming the problem into a subset selection problem and considering the dual of the problem is novel and makes sense.\\n2. The authors rigorously give some theoretical findings, including the upper bound of MDSD and an efficient method to compute the theoretical acceptance rate.\\n3. The authors propose a greedy draft sampling method and conduct extensive experiments to demonstrate its effectiveness.\", \"weaknesses\": \"While this paper provides rigorous theory and analysis, I think there exist some weaknesses to further improve the manuscript.\\n\\n1. The motivation and background should be clearly illustrated, and the authors could consider adding some intuitive examples and figures to improve the presentation. \\n2. Please discuss the connections to the related works. It is confusing for readers without knowledge of SpecTr. Besides, please give a clear notation section. For example, the number of draft tokens for each draft position and the draft length should be clarified. \\n3. Please describe the whole algorithm of the greedy draft sampling method. 
For example, after constructing the first $n$ draft tokens for the first draft position, how can we construct the following draft tokens? Besides, this algorithm is similar to top-k sampling, with only an additional randomly sampled token. The authors should discuss their difference.\\n4. The experiments could be strengthened by evaluating the block-wise mean accepted tokens and real-world speedup. Besides, more experiments with different model scales (e.g. 33B, 70B) and different benchmarks (e.g. MT-Bench [1] and Spec-Bench [2]) are necessary to demonstrate the conclusions.\\n\\n[1] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" Advances in Neural Information Processing Systems 36 (2023): 46595-46623.\\n\\n[2] Xia, Heming, et al. \\\"Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding.\\\" arXiv preprint arXiv:2401.07851 (2024).\", \"questions\": \"1. In the real-world applications of speculative decoding, the acceptance rates at different positions are usually not i.i.d. I wonder if this will affect the proposed theory and greedy draft sampling methods.\\n2. In Table 1, some results of empirical $\\\\alpha$ are even higher than the theoretical upper bound. Could you please provide a detailed explanation? \\n3. In ablation study 1, the authors show an interesting phenomenon that the impact of temperature is non-monotonic. Different methods consistently show a turn around temperature $T=0.9$. Could you please provide a detailed explanation?\\n4. Can the proposed greedy draft sampling methods adapt to other retrieval-based speculative decoding methods? (e.g. Lookahead Decoding [1] and REST [2])\\n\\n[1] Fu, Yichao, et al. \\\"Break the sequential dependency of llm inference using lookahead decoding.\\\" arXiv preprint arXiv:2402.02057 (2024).\\n\\n[2] He, Zhenyu, et al. 
\\\"Rest: Retrieval-based speculative decoding.\\\" arXiv preprint arXiv:2311.08252 (2023).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Appreciating Your Feedback\", \"comment\": \"We would appreciate your feedback on if any concerns remain as the discussion phase comes to a close. Thank you for your time and we are grateful for the discussion opportunity.\\n\\nThanks!\"}", "{\"title\": \"Author's Response (3)\", \"comment\": \"We have made modifications point by point in response to your comments above. Unfortunately, we have been unable to upload a new version of the PDF since Nov 27, but the changes will be reflected in the final version. We also emphasize that our work has significant practical implications, as it not only advances theoretical understanding but also delivers real acceleration in MDSD. Thank you again for your constructive feedback and the opportunity to improve the clarity of our paper.\"}", "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"Dear authors,\\n\\nI have gone through your rebuttal and all the other reviewers' opinions on the paper. Unfortunately, I cannot increase my score. Two specific reasons are:\\n\\n**Significance**: \\n\\n3 out of 4 main contributions are about the theoretical upper bounds of the acceptance rate of multi-draft SD algorithms. The developed theory and the proposed efficient algorithm for solving the theoretical upper bounds seem to be interesting. \\n\\nHowever, this is a higher-order problem whose significance depends on the effectiveness of MDSD. As is also mentioned by Reviewer DHWn, the MDSD framework does not necessarily lead to a higher throughput. There are still gaps between the **acceptance rate** and the final **speedup**/**throughput**. 
Obtaining a higher acceptance rate does not always lead to a higher speedup/throughput, and other effects like longer draft time need to be considered. \\n\\nI would like to see if the developed theory can be of **independent interest**, especially outside the field of MDSD. If the developed duality, q-convexity theory, and the proposed efficient algorithm can be applied to another empirical setting (maybe discrete optimal transport?) then this will strengthen the significance of the theoretical results.\\n\\n**Clarity**:\\n\\nThe clarity of the paper can still be significantly improved even for the revised draft. For example:\\n\\n- Section 2.1 introduces the background of speculative decoding. Points 3 & 4 are hard to read and understand. \\n- For section 2.2, I would suggest deleting Lines 121-123, which are extremely hard to understand without reading the further descriptions of the optimal transport formulation --- maybe bring up OT first?\\n- Section 2.3 (multi-draft) is the extension of section 2.2 (single draft), so I would suggest introducing MDSD formally first, and writing the single-draft setting as a special case. Besides, the overview of RRS, K-Seq, or SpecTr should be introduced somewhere in the main text.\\n- TOTAL UNIMODULARITY should not be the subsection name of section 3.2. This should not even be an independent subsection. It serves as a comment in the proof sketch, which can be moved to the appendix (full proof) without hurting the readability of the main text.\\n- Section 4 introduces the concept of q-convex function. This section will benefit from having more illustrative examples of q-convex functions. E.g., when does it reduce to the normal convex function that we are more familiar with? What are the definitions of \\\"supermodular functions\\\" for readers who are not familiar with the context? \\n- Can you give one counter-example showing that a q-convex function is not necessarily supermodular? 
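For the q-convex illustrations requested above, the geometric construction the authors give later in this thread (points $(x_i, y_i)$ where $x_i$ is the cumulative $q$-mass of a growing subset $H_i$ and $y_i = Q(H_i)$) can be sketched in a few lines for the sampling-with-replacement case, where $Q(H) = (\sum_{\sigma \in H} q(\sigma))^n$. The numbers below are toy values, not from the paper:

```python
# Toy sketch of the q-convexity picture for sampling with replacement,
# where Q(H) = (sum of q over H) ** n. All values are illustrative.
q = [0.4, 0.3, 0.2, 0.1]  # a draft distribution, in an arbitrary order
n = 3                     # number of draft tokens

# Points (x_i, y_i) = (cumulative q-mass of H_i, Q(H_i)).
xs, ys, mass = [0.0], [0.0], 0.0
for qi in q:
    mass += qi
    xs.append(mass)
    ys.append(mass ** n)

# The piecewise-linear interpolation of these points is convex
# exactly when the chord slopes are non-decreasing.
slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(q))]
assert slopes == sorted(slopes)
print(slopes)
```

Here every point lies on the convex curve $y = x^n$, so the slopes are non-decreasing for any ordering of the vocabulary.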
\\n\\nBased on these considerations, I will respectfully maintain my score.\"}", "{\"metareview\": \"This paper studies multi-draft speculative decoding and provides multiple theoretical contributions. The paper starts with the optimal transport formulation of the multi-draft speculative decoding acceptance rate (Sun et al., 2023) and derives its dual. Then it is shown that the optimal solution to the dual problem takes a subset selection form. Then, solutions are obtained under mild assumptions for three key drafting strategies: (1) sampling with replacement; (2) sampling without replacement; and (3) so-called greedy decoding (which I refer to as almost-top-k sampling as the name greedy already has a meaning and should not be overloaded) where top-(k-1) tokens are selected and then the last token is selected by sampling without replacement. Several key insights are derived in the sequel. The case of k=2 degenerates to concurrent work of (Khisti et al., 2024) that fully solves the optimal transport problem for k=2 and the same acceptance rate is derived. There is a large gap between sampling with replacement and sampling without replacement. The proposed almost-top-k sampling (called greedy in the paper) offers further improvements over sampling without replacement, which can be substantial in some regimes. The findings are corroborated through experiments with several draft and verification models. The reviewers' main concern is that the impact of multi-draft speculative decoding is limited in cases where large batch sizes are used due to throughput limitations. Also, the reviewers note that the applicability of the theory in this paper beyond speculative decoding is unclear. While the AC agrees with these weaknesses, I think the theoretical contributions of this paper are non-trivial in their own right and help further understand the theoretical limits of multi-draft speculative decoding. As such, the paper is recommended to be accepted. 
Congratulations to the authors!\\n\\nSun, Ziteng, et al. \\\"Spectr: Fast speculative decoding via optimal transport.\\\" Advances in Neural Information Processing Systems 36 (2023).\\n\\nKhisti, Ashish, et al. \\\"Multi-Draft Speculative Sampling: Canonical Architectures and Theoretical Limits.\\\" arXiv preprint arXiv:2410.18234 (2024).\", \"additional_comments_on_reviewer_discussion\": \"While the reviewers have recommended the paper to be rejected, after a careful reading of the paper and further consultation with an additional expert reviewer and the SAC, the AC believes that the paper makes a significant contribution that deserves to be accepted in the conference. In addition, the concerns of the reviewers (while correct) appear not to be a blocker for publication.\"}", "{\"title\": \"Author's Response (2)\", \"comment\": \"### Regarding clarity:\\n\\nIn our revision, we have refined the writing and presentation of the paper to address all the points below without changing any of the results, in order to help reviewers understand the article.\", \"here_are_point_by_point_responses\": \"> (1). In Section 2.2, the informal description of speculative decoding is very confusing. This seems to be a short summary of the formal description below, but it is hard to understand what $\\\\max P(i=j)$ means before going into the details of the optimal transport problem.\", \"we_added_a_natural_language_explanation\": \"\\\"that is to maximize the probability of random variable $i$ to be the same as one of random variable in $(\\\\bar{i}_1,\\\\dots,\\\\bar{i}_n)$.\\\"\\nWe did not move content up and down because the essential content is the same.\\n\\n> (3). Section 3 is dedicated to provide the proof for the subset selection problem (Eq. 8). 
I would suggest move the proof details to the appendix and optionally write a short proof sketch section that only displays the important idea behind the proof and/or the part of the proof that is needed for the development of the later sections.\\n\\nThe current proof already only shows the important ideas, with details left to the appendix, such as Lemma 1 and Appendix C.1, just as you envisioned. We have highlighted the final conclusion at the beginning of Section 3. Readers uninterested in the proof are free to skip it, while interested readers can refer to it. This has been clarified in the original text.\\n\\n> (4). The description of Theorem 3 and 4 can be improved. What is the definition of $Q$ and $q$ in these cases? What are the consequences of the special cases (with replacement and without replacement)?\\n\\n$Q$ and $q$ are well-defined:\\n- $Q$: \\n - line 250: $Q(H)=\\\\sum_{\\\\overline{i}\\\\in H^n}p_{\\\\mathrm{draft}}(i)$\\n - lines 156-161: \\n - Sampling with replacement: $p_{\\\\mathrm{draft}}(\\\\bar{i})=\\\\prod_{j=1}^{n}q(\\\\bar{i}_{j})$\\n - Sampling without replacement: $p\\\\_{\\\\mathrm{draft}}(\\\\overline{i})=\\\\prod\\\\_{j=1}^nq^{\\\\neg\\\\overline{i}\\\\_1,\\\\ldots,\\\\overline{i}\\\\_{j-1}}(\\\\overline{i}\\\\_j)$, where $q^{\\\\neg\\\\overline{i}\\\\_1,...,\\\\overline{i}\\\\_{j-1}}(x)=\\\\begin{cases}\\\\frac{q(x)}{1-\\\\sum_{z\\\\in\\\\{\\\\bar{i}\\\\_1,\\\\ldots,\\\\bar{i}\\\\_{j-1}\\\\}}q(z)}&x\\\\notin\\\\{\\\\bar{i}\\\\_1,\\\\ldots,\\\\bar{i}\\\\_{j-1}\\\\},\\\\\\\\\\\\\\\\ 0&x\\\\in\\\\{\\\\bar{i}\\\\_1,\\\\ldots,\\\\bar{i}\\\\_{j-1}\\\\}&\\\\end{cases}$\\n- $q$: output distribution of the draft model (line 156)\\n\\nWe have added an explanation to clarify this in case readers missed it in the paper.\\n\\n> (5). 
In Table 1, are \\\"greedy\\\" and \\\"verify\\\" the proposed methods in Section 5.1 and 5.2?\\n\\nYes.\\n\\n> Given the above review, I would recommend the authors to further refine the writing and the presentation of the paper\\n\\nWe have added additional explanations. Could you please check if it is now clear enough to not affect your understanding of our contributions? We believe the only reliable metric for measuring clarity is whether the reader can understand the contributions of the paper. If there are still parts you cannot understand, please let us know so we can improve how we express them. Otherwise, if you feel you can now understand all the contributions of this paper, would you please consider revisiting your assessment of the clarity?\\n\\n### Regarding soundness:\", \"the_other_three_reviewers_all_recognize_the_soundness_of_this_paper\": \"- Reviewer q6tv: Soundness: 4: excellent\\n- Reviewer WnL3: Paper is mathematically rigorous \\n- Reviewer DHWn: rigorously give some theoretical findings, including the upper bound of MDSD and a efficient method to compute the theoretical acceptance rate.\\n\\nYou did not comment on soundness, only stating \\\"Soundness: 2: fair\\\". \\n\\nWe would greatly appreciate it if you could provide more details on the specific aspects that led to your assessment of the soundness, so we can better address your concerns.\\n\\nYou didn't mention any error of our paper in your comment, therefore we are not aware of the basis of your evaluation of soundness. 
Only by knowing the basis of your evaluation of soundness can we have a targeted discussion to clarify any misunderstandings.\"}", "{\"title\": \"Author's Response (1)\", \"comment\": \"Thank you for volunteering your time and effort to review our paper.\", \"regarding_significance\": \"We will comment on each of your points below to clarify the misunderstandings.\", \"regarding_clarity\": \"We have revised the paper to refine the writing and presentation without changing any of the results, in order to help you understand the article better. Could you please check if it is now clear enough to not affect your understanding of our contributions? We believe the only reliable metric for measuring clarity is whether the reader can understand the contributions of the paper. If there are still parts you cannot understand, please let us know so we can improve how we express them. Otherwise, if you feel you can now understand all the contributions of this paper, would you please consider revisiting your assessment of the clarity?\\n\\nWe would greatly appreciate it if you could kindly re-evaluate our paper, taking into account the clarifications we have provided.\\n\\nSincerely,\\n\\nAuthors\\n\\n---\\n### Regarding significance:\\n\\n> However, the description and the development of the proposed algorithm is underplayed.\\n\\nThere might be a misunderstanding. This paper provides two novel and useful algorithms:\\n1. Section 4.2.1 reduces the time complexity of the subset selection problem in the paper from $2^{|\\\\Sigma|}$ to $O(|\\\\Sigma|\\\\log|\\\\Sigma|)$, a significant improvement achieved by deeply analyzing the problem structure and proposing the highly original q-convexity to guide algorithm design. \\n2. 
Section 5 proposes the greedy draft construction method, which improves the optimal acceptance rate and comes with a verification algorithm that achieves the optimal acceptance rate.\\n\\n> The proposed methods deserve a proper name, clear demonstration of the verification algorithm \\n\\nWe apologize that the name doesn\\u2019t fully convey the novelty of our approach. We are open to suggestions for a more suitable name if you have any recommendations. \\n\\nWe note that the name does not change our four contributions shown in the introduction, which provide multiple new theoretical insights and new algorithms.\\n\\nWe have also unfolded the math definition in the verification algorithm, hoping it will help you understand better.\\n\\n> is the algorithm practical for $n>2$ as compared to SpecHub?\\n\\nBy definition, SpecHub does not support the case of $n>2$, as we emphasized on line 335. Therefore, there is no way to compare.\\n\\n> more thorough theoretical and experimental investigations to demonstrate the pros and cons compared with previous algorithms.\\n\\nOur contributions contain rigorous mathematical proofs, many novel theorems, and extensive experimental verification. We have achieved very small error bars, as can be seen in the nearly invisible error ranges in the tables and figures. Therefore, the signal in our experiment is very strong and the noise very small. We would be grateful if you could share what specific additional experiments or theoretical investigations would help strengthen the paper in your opinion, and what additional research questions you would like us to explore.\\n\\n> It is interesting to see that two existing verification approaches (K-Seq and the widely used RRS) can be unified as solving the same optimal transport problem corresponding to sampling without replacement $p_{draft}$. Therefore, they share the same upper bound.\\n\\nThis is not our contribution and we did not claim it as such. 
This is a previously known result - in the SpecTr paper proposing K-Seq, the authors already pointed out this unifying view, so the credit for this contribution should go to them.\", \"our_contributions_are\": \"1. Theoretical upper bound by transforming the problem of solving the optimal acceptance rate corresponding to the optimal transport into a subset selection problem\\n2. Solving the subset selection if the draft distribution satisfies certain properties \\n3. Measuring the theoretical upper bound of MDSD efficiency on real text, and the gap of existing verification algorithms\\n4. New greedy Multi-Draft Speculative Sampling algorithm that improves the theoretical upper bound in many situations.\"}", "{\"title\": \"Author's Response (2)\", \"comment\": \"> Your greedy algorithm is another version of coin problem, in order for it to be optimal, the environment has to be canonical (see \\\"Error Bounds and the Applicability of the Greedy Solution to the Coin-Changing Problem,\\\"), I suggest you incorporate that in your proof.\\n\\nFirstly, our greedy draft token construction method in Section 5.1, which is a probability distribution, has nothing to do with the Coin-Changing Problem, which is an optimization problem. \\n\\nAdditionally, the reviewer may be referring to $f(H) = P(H) \\u2212 Q(H)$ in Section 4, but this algorithm is not called a greedy algorithm. It is an optimization problem, but it also has many differences than the Coin-Changing Problem.\", \"definitions\": [\"In Section 4: $\\\\min_H \\\\sum_{i\\\\in H}p(i) \\u2212 Q(H)$\", \"Coin-Changing Problem: $\\\\begin{aligned}\\\\min\\\\_{x\\\\in\\\\mathbb{Z}\\\\_+^n} & \\\\sum\\\\_{i=1}^n c_i x_i \\\\\\\\\\\\\\\\ \\\\text{s.t.}& \\\\sum\\\\_{i=1}^n a\\\\_i x\\\\_i = b\\\\end{aligned}$\"], \"important_differences\": \"- The optimization functions are different: Q is a nonlinear function. 
There is no nonlinear function in the Coin-Changing Problem.\\n- The constraints are different: the former has no constraints, while the Coin-Changing Problem has linear constraints.\\n- The variable spaces are different: the Coin-Changing Problem is an integer programming problem, while ours is a subset selection problem.\\n\\nWe hope that the differences we have pointed out help resolve your misunderstanding.\\n\\n> You have mentioned theoretical upper bound multiple times, however it is not explicitly defined, is it $\\\\alpha^\\\\ast$?\\n\\nYes, the theoretical acceptance rate upper bound is the optimal acceptance rate $\\\\alpha^\\\\ast$.\\n\\n> I suggest you use Radix sort, which is linear in the size of input, and helps with the overall complexity of your problem.\\n\\nWe would like to clarify this misunderstanding by noting that radix sort runs in $O(kn)$ time, where $k$ is the number of digits in each value of the input array and $n$ is the size of the input array. This running time is worse than $O(n\\\\log n)$ when the number of digits is large, especially in the case of using floating-point numbers in experiments. Secondly, we are dealing with floating-point numbers, which, unlike integers, cannot be directly radix-sorted in their machine representation. Implementing radix sort correctly for floating-point numbers is non-trivial and introduces complexity that is unrelated to the core contributions of our paper. Finally, the largest vocabulary size of the models we tested is 150,000, and it takes only 7ms to sort using numpy, which is not a system bottleneck and is very small compared to the language model.\\n\\nOnce again, thank you for volunteering your time and effort to review our paper. We have incorporated your suggestion to avoid making claims that sound overly certain. 
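As an aside for readers following this exchange: the subset-selection objective $\min_H \sum_{i\in H} p(i) - Q(H)$ quoted above can be brute-forced on a toy vocabulary. The sketch below uses hypothetical `p`, `q`, and `n`, instantiates $Q$ for sampling with replacement (where $Q(H) = (\sum_{i\in H} q(i))^n$), and performs exactly the $2^{|\Sigma|}$ enumeration that the paper's $O(|\Sigma|\log|\Sigma|)$ algorithm is designed to avoid:

```python
from itertools import combinations

# Toy target distribution p and draft distribution q over a 4-token
# vocabulary (hypothetical values, chosen so the minimum is negative).
p = [0.25, 0.25, 0.25, 0.25]
q = [0.70, 0.10, 0.10, 0.10]
n = 2  # number of draft tokens

def Q(H):
    # Sampling with replacement: all n i.i.d. draft tokens land in H
    # with probability (sum of q over H) ** n.
    return sum(q[i] for i in H) ** n

def f(H):
    # The subset-selection objective quoted in this response.
    return sum(p[i] for i in H) - Q(H)

# Brute force over all 2^|Sigma| subsets.
subsets = [H for r in range(len(p) + 1) for H in combinations(range(len(p)), r)]
best = min(subsets, key=f)
print(best, f(best))  # (0,) with f = 0.25 - 0.49 = -0.24
```

For this toy instance the minimizer is the singleton token where the draft distribution over-concentrates relative to the target.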
\\n\\nWe would greatly appreciate it if you could re-evaluate our paper considering the clarifications we have provided.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Author's Response (2)\", \"comment\": \"Regarding improving clarity, we have made revisions based on your suggestions:\\n\\n> Section 2.1 introduces the background of speculative decoding. The points 3 & 4 are hard to read and understand.\\n\\nWe provided additional explanations. The draft tokens generated in speculative decoding need to be verified. Such a process is reflected in point 3. The verification result is $x_{m+1}$, and the verification process has randomness: $x_{m+1}\\\\sim P_{\\\\operatorname{verify}}(\\\\cdot|\\\\widehat{x}^{(1)},\\\\dots,\\\\widehat{x}^{(n)})$. The specific verification result probability distribution depends on different algorithm implementations, such as eq (5) for single draft and eq (19) for our greedy method.\\n\\nIf a draft token is accepted, it brings acceleration, as reflected in point 4. Multiple tokens can be generated in one forward pass, which is faster than basic autoregressive sampling, where each forward pass generates a single token. Minimizing the number of forward passes of the large language model leads to acceleration.\\n\\n> For section 2.2, I would suggest deleting Lines 121-123, which is extremely hard to understand without reading the further descriptions of the optimal transport formulation --- maybe bring up OT first?\\n\\nThank you for this valuable suggestion. 
We have deleted Lines 121-123 and directly provided the rigorous optimal transport formulation.\\n\\n> Section 2.3 (multi-draft) is the extension of section 2.2 (single draft), I would suggest introducing MDSD formally first, and writing the single draft setting as a special case.\\n\\nFollowing your suggestion, we adjusted the order of Sections 2.2 and 2.3 and introduced the single-draft setting as a special case of general MDSD.\\n\\n> Besides, the overview of RRS, K-Seq, or SpecTr should be introduced somewhere in the main text.\\n\\nWe have added brief overviews of these methods at the end of Section 2 (Preliminaries) to provide context:\\n\\n- Recursive Rejection Sampling (RRS): RRS (Yang et al., 2024b; Jeon et al., 2024) is one of the foundational multi-draft speculative decoding methods. It generates multiple draft tokens simultaneously and verifies them in a sequential manner, serving as a baseline for MDSD.\\n- SpecTr and K-Seq Methods: The SpecTr paper (Sun et al., 2024d) pointed out that the optimal acceptance rate is characterized by an optimal transport problem, and proposed the K-Seq method to improve the acceptance rates both theoretically and practically.\\n\\n> TOTAL UNIMODULARITY should not be the subsection name of section 3.2. This should not even be an independent subsection. It serves as a comment in the proof sketch, which can be moved to the appendix (full proof) without hurting the readability of the main text.\\n\\nFollowing your suggestion, we removed the subsection name and only used a comment to explain that this is a step in the proof. The main content was moved to the appendix, keeping only a brief proof sketch in the main text to make room for your other suggested modifications.\\n\\n> Section 4 introduces the concept of q-convex function. This section will benefit from having more illustrative examples of q-convex functions. E.g., when does it reduce to the normal convex function that we are more familiar with?\\n\\nThe two do not overlap. 
A q-convex function is defined on subsets of a set $\\\\Sigma$, where the input variable is a set. In contrast, the input variable of a convex function is a real number or vector. Therefore, a q-convex function does not reduce to a convex function.\\n\\nHowever, q-convex functions are inspired by convex functions, hence the name. We introduced the connection after Definition 2. For any ordering of the elements in $\\\\Sigma$, we denote it as $\\\\Sigma=\\\\{\\\\sigma_1,\\\\dots,\\\\sigma_{|\\\\Sigma|}\\\\}$, we can construct a sequence of sets $H_i$ by adding elements one by one: $H_i=\\\\{\\\\sigma_1,\\\\dots,\\\\sigma_{i}\\\\}$. We can then construct a set of points $\\\\{(x_i,y_i)\\\\}_{1\\\\leq i\\\\leq|\\\\Sigma|}$ in a two-dimensional plane, where $x_i=\\\\sum_{\\\\sigma\\\\in H_i}q(\\\\sigma)$ and $y_i=Q(H_i)$. Connecting these points with straight lines always forms a convex function.\\n\\n> What are the definitions of \\\"supermodular functions\\\" for readers who are not familiar with the context?\\n\\nThank you for the suggestion. We have added the definition of supermodular functions: A function $f$ defined on subsets of a set $\\\\Sigma$ is supermodular if for any two subsets $A\\\\subseteq \\\\Sigma$ and $B\\\\subseteq \\\\Sigma$, $f(A) + f(B) \\\\leq f(A \\\\cup B) + f(A \\\\cap B)$.\\n\\n> Can you give one counter-example that a q-convexity function is not necessarily supermodular?\\n\\nAs proved in Theorem 5, all q-convex functions are supermodular functions. Therefore, there is no counter-example.\"}", "{\"title\": \"Author's Response (1)\", \"comment\": \"Thank you for your feedback. We appreciate you taking the time to review our paper again and provide constructive comments. We have carefully considered your points and made revisions to carry out your suggestions. Below, we address each of your points:\\n\\n> However, this is a higher-order problem whose significance depends on the effectiveness of MDSD. 
As is also mentioned by Reviewer DHWn, the MDSD framework does not necessarily lead to a higher throughput. There are still gaps between the acceptance rate and the final speedup/throughput. Obtaining a higher acceptance rate does not always lead to a higher speedup/throughput, and other effects like longer draft time need to be considered.\\n\\nOur work brings real speedup and creates practical value, as shown in Table 2. You mentioned that Reviewer DHWn raised the point about gaps between the acceptance rate and the final speedup/throughput. However, we have not observed this gap in our experiments, which demonstrate genuine acceleration.\\n\\nThis is consistent with other recent works. For example, in Table 7 of the EAGLE paper, both speedup and throughput improved:\\n\\n| Model | Batch size 1 | Batch size 2 | Batch size 3 | Batch size 4 | Throughput |\\n|---------|-------------|-------------|-------------|-------------|------------|\\n| Vicuna 7B | 2.90x | 2.87x | 2.65x | 2.76x | 1.97x |\\n| LLaMA2-Chat 70B | 3.01x | 2.81x | 2.50x | 2.40x | 1.99x |\\n\\nThe recent work \\\"MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding\\\" also reported similar results, as we discussed in our response to Reviewer DHWn.\\n\\nIn summary, our work is significant because it not only provides theoretical insights but also delivers practical performance improvements.\\n\\n> I would like to see if the developed theory can be of independent interest, especially outside the field of MDSD.\\n\\nWhile our theory may hold independent research significance in other related fields\\u2014a topic we are open to exploring further in the future\\u2014such considerations fall beyond the scope of our current study.\\n\\nWithin MDSD, our contributions are already sufficiently significant, providing real acceleration and theoretical insights.\\n\\n> If the developed duality, q-convexity theory, and the proposed efficient algorithm can be 
applied to another empirical setting (maybe discrete optimal transport?) then this will strengthen the significance of the theoretical results.\\n\\nOur novel theory was not proposed for general discrete optimal transport problems. Instead, we analyzed the special class of discrete optimal transport problems in MDSD and proposed a theory and efficient algorithm tailored to the MDSD scenario. This is non-trivial and conveys a deep understanding of this class of problems.\\n\\nWe agree that if future research finds broader applications of our novel concepts, it will further enhance their significance. \\n\\nHowever, we emphasize that the existing contributions in MDSD, improving real generation speed and providing theoretical insights, are already sufficiently significant.\"}", "{\"summary\": \"The paper works on the dual problem of the transport problem of multi-draft speculative decoding and shows that the optimal acceptance rate is equivalent to a subset selection problem. Then, the paper provides several methods to compute such rates for commonly used multi-draft proposal methods (sampling with replacement, sampling without replacement). The paper proposes a greedy draft construction method and provides several empirical results that showcase the benefit of the proposed algorithm.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"It is interesting to see that two existing verification approaches (K-Seq and the widely used RRS) can be unified as solving the same optimal transport problem corresponding to sampling without replacement $p_{draft}$. Therefore, they share the same upper bound. 
Table 1 shows that for a variety of models and settings, the two methods are close enough to the optimal acceptance rates.\\n\\nThe proposed \\\"greedy\\\" draft generation approach and verification method is an interesting combination of the greedy decoding method and ordinary sampling method.\\n\\nThe ablation studies are informative with a clear comparison with other baseline methods across different temperatures and numbers of draft tokens.\", \"weaknesses\": \"### significance\\n\\nA large portion of the paper is dedicated to theoretical derivations of the optimal acceptance rate. However, the description and the development of the proposed algorithm are underplayed. \\n\\nThe proposed methods deserve a proper name, clear demonstration of the verification algorithm (is the algorithm practical for $n>2$ as compared to SpecHub?) and more thorough theoretical and experimental investigations to demonstrate the pros and cons compared with previous algorithms.\\n\\n\\n### clarity \\n\\nThe clarity of the paper can be significantly improved, including but not restricted to:\\n\\n(1). In Section 2.2, the informal description of speculative decoding is very confusing. This seems to be a short summary of the formal description below, but it is hard to understand what $\\\\max P(i=j)$ means before going into the details of the optimal transport problem.\\n\\n(2). The same problem also applies to Section 2.3, where the informal description of multi-draft speculative decoding is confusing. I would suggest reframing the descriptions and moving the examples of $p_{draft}$ after the definition.\\n\\n(3). Section 3 is dedicated to providing the proof for the subset selection problem (Eq. 8). I would suggest moving the proof details to the appendix and optionally writing a short proof sketch section that only displays the important idea behind the proof and/or the part of the proof that is needed for the development of the later sections. \\n\\n(4). 
The descriptions of Theorems 3 and 4 can be improved. What are the definitions of $Q$ and $q$ in these cases? What are the consequences of the special cases (with replacement and without replacement)? \\n\\n(5). In Table 1, are \\\"greedy\\\" and \\\"verify\\\" the proposed methods in Sections 5.1 and 5.2?\\n\\n\\n----\\n\\nGiven the above review, I would recommend that the authors further refine the writing and the presentation of the paper and put more effort into both the theoretical part and the empirical results that support the proposed algorithms.\", \"questions\": \"N/A.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents several new results about Multi-Draft Speculative Sampling.\\n1. It transforms the problem of computing the optimal acceptance rate into a subset selection problem.\\n2. For some cases it provides a practical solution to the problem. This provides a theoretical upper bound on the acceptance rate.\\n\\nThe authors then measure the theoretical upper bound on some datasets, and measure the gap between the upper bound and previous algorithms on these datasets.\\nThey present a greedy algorithm which is able to match the theoretical upper bound in many cases.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"The paper makes progress on understanding the acceptance rate of Multi-Draft Speculative Sampling. 
The authors show a clever transformation of the transportation problem formulation of optimal acceptance rates to a subset selection problem, and then show an algorithm to solve the subset selection if the draft distribution satisfies certain properties.\\nThey then propose a new greedy Multi-Draft Speculative Sampling algorithm, which is closer to the optimal acceptance rate on some datasets.\\n\\nThe results in the paper seem to me to be quite novel and significant.\", \"weaknesses\": \"The paper takes a bit of effort to read, partly because of a lot of notation, and partly because of results that may not be familiar to a lot of readers. I am not sure if the authors can do much about this.\", \"questions\": \"Do your results extend to trees of drafts in a straightforward way?\", \"i_would_like_to_see_a_comparison_to_https\": \"//openreview.net/forum?id=N1L5TgtkAw\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Appreciating Your Feedback\", \"comment\": \"We would appreciate your feedback on whether any concerns remain as the discussion phase comes to a close. Thank you for your time and we are grateful for the discussion opportunity.\\n\\nThanks!\"}", "{\"title\": \"Author's Response (3)\", \"comment\": \"### Regarding request for more results:\\n\\n> put more efforts into both the theoretical part and the empirical results that supports the proposed algorithms.\\n\\nWe have presented 4 contributions, as summarized in the Introduction section, which provide multiple new theoretical insights and new algorithms, in the current paper. What additional research questions would you like us to explore? Please let us know.\\n\\n### Questions:\\n\\nThank you for volunteering your time and effort to review our paper. 
\\nWe would be grateful for any further response you can provide to these following questions, as only by understanding the reviewer's specific expectations and viewpoint can we make targeted efforts to facilitate discussion.\", \"based_on_our_explanations\": \"- If there are still parts you feel you don't understand, could you please let us know? \\n- If you now feel you can understand all the contributions, would you please consider revisiting your assessment of the clarity?\\n\\nCould you please share what is the basis of your evaluation of the soundness of this paper?\\n\\nCould you please share what additional research questions you would like us to explore?\\n\\n### Summary:\\n\\nOnce again, thank you for volunteering your time and effort to review our paper. We have incorporated your suggestions to improve clarity. \\n\\nWe would greatly appreciate it if you could kindly re-evaluate our paper, taking into account the clarifications we have provided.\\n\\nSincerely,\\n\\nAuthors\"}", "{\"title\": \"Author's Response (2)\", \"comment\": \"> For weakness 3, what I mean \\\"top-k sampling\\\" is using top k candidate tokens (k tokens in total) to construct multi-drafts. This vanilla method seems similar to the proposed method. Could the authors give a further explanation?\\n\\nThank you for the clarification. We now understand that your \\\"top-k sampling\\\" refers to selecting the top k tokens with the highest logits from the draft model as draft tokens in speculative decoding. We would like to explain that this is not a popular method due to its low acceptance rate, even in the single draft setting. Note that even if the draft distribution and the target distribution are perfectly identical, using the tokens with the highest logits from the draft model as draft tokens may still fail to achieve an acceptance rate of 1. In contrast, using optimal transport can achieve an acceptance rate of 1. 
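This single-draft claim can be checked numerically. The sketch below uses toy distributions and the standard single-draft acceptance probability $\sum_x \min(p(x), p_{\mathrm{draft}}(x))$; it is an illustration, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(8))  # toy target distribution
q = p.copy()                   # draft distribution identical to the target

# Optimal-transport (standard speculative sampling) acceptance probability:
alpha_ot = float(np.minimum(p, q).sum())       # 1.0 when the distributions match

# Always proposing the single highest-probability token is a point-mass draft:
top1 = np.zeros_like(q)
top1[np.argmax(q)] = 1.0
alpha_top1 = float(np.minimum(p, top1).sum())  # equals max(p), strictly below 1

print(alpha_ot, alpha_top1)
```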
The same holds true in the multi-draft setting: even if the draft distribution and the target distribution are perfectly identical, using the top k tokens with the highest logits from the draft model as draft tokens may still fail to achieve an acceptance rate of 1. Due to this decline in acceptance rate, the method of selecting tokens with the highest logits is only commonly used when the temperature is 0, where optimal transport converges to this method.\n\n> In section 4.2.2, the complexity of sorting is $O(\\Sigma \\log \\Sigma)$. In the latest LLMs (e.g. Llama 3), the vocab size has been extended to 128k, which makes sorting more expensive. Could the authors give some empirical results of the time cost of the sorting operation? Maybe it is the bottleneck for the current algorithm.\n\nIn the models we tested, the maximum vocabulary size is about 150,000 (larger than 128k), and it takes only 7ms to sort using numpy. This is not a bottleneck for the system at all and is negligible compared to the language model. It is important to note that before our work, the complexity for the same problem was exponential, which is intractable. Our novel contribution makes it possible to compute the exact solution without approximation on such a scale for the first time.\n\nOnce again, thank you for your time and effort in reviewing our paper and engaging in this discussion. If you could consider the clarifications we have provided and assess whether they address your remaining concerns, we would be deeply appreciative.\n\nSincerely,\n\nAuthors"}", "{\"summary\": \"This paper is concerned with the acceptance rate of MDSD in LLMs. 
The authors derive the dual problem and then prove it has an integer optimal solution; furthermore, they provide a greedy algorithm that, in some cases, performs better without replacement.", "soundness": "3", "presentation": "3", "contribution": "2", "strengths": "The paper is mathematically rigorous; it is relatively easy to follow and grasp new concepts. The authors do a good job of highlighting the drawbacks of previous work and offer solutions.", "weaknesses": "Some claims are optimistic, such as \"the upper bound has never been computed before\", I personally refrain from making such certain statements. Some contributions are minor, as an example, deriving the dual of an LP is not a contribution, yet it is claimed to be in the first bullet point of contributions. Although paper is mathematically mature, it borrows a lot from previous publications, in other words, novel theoretical contribution is minor.", "questions": "You stated that problem (equation 7) is intractable/difficult to solve. What algorithms did you use? modern LP algorithms such as interior point methods are quite capable at handling large problems (with exponential constraints), and recently there have been solvers implemented on GPU (see cuPDLD-C). Your greedy algorithm is another version of the coin problem; in order for it to be optimal, the environment has to be canonical (see \u201cError Bounds and the Applicability of the Greedy Solution to the Coin-Changing Problem\u201d); I suggest you incorporate that in your proof. You have mentioned the theoretical upper bound multiple times; however, it is not explicitly defined. Is it $\\alpha^*$? 
I suggest you use Radix sort, which is linear in the size of the input, and helps with the overall complexity of your problem.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "3", "code_of_conduct": "Yes"}", "{\"comment\": \"Thank you very much for your detailed explanation and prompt reply. I highly recognize the authors' efforts in developing the algorithm.\\n\\nHowever, the rebuttal does not fully address my concerns as follows:\\n\\n1. (**clarity**) While I appreciate the authors' effort in improving the clarity of this paper, it is still difficult for readers to get the key idea, due to massive notations and equations. I strongly recommend that the authors focus on simplifying the notations and equations. \\n2. (**contributions**) While the proposed methods and theoretical findings are rigorous and novel, the multi-draft framework remains trivial to me. In real-world applications, speculative decoding has been proven to be less effective with batch size > 1, even slow down the inference speed with batch size > 64. Therefore, the multi-draft framework, which brings additional overhead, may harm the overall throughput more severely in the high request settings.\", \"other_questions\": \"1. For weakness 3, what I mean by \\\"top-k sampling\\\" is using top k candidate tokens (k tokens in total) to construct multi-drafts. This vanilla method seems similar to the proposed method. Could the authors give a further explanation?\\n2. In section 4.2.2, the complexity of sorting is $O(\\\\Sigma \\\\log \\\\Sigma)$. In the latest LLMs (e.g. Llama 3), the vocab size has been extended to 128k, which makes sorting more expensive. Could the authors give some empirical results of the time cost of the sorting operation? Maybe it is the bottleneck for the current algorithm.\\n\\nBased on these considerations, I will respectfully maintain my score. 
I sincerely hope for your understanding.\"}", "{\"title\": \"Thank you for the comparison.\", \"comment\": \"Please include it in the final version of your paper.\"}", "{\"title\": \"Author's Response\", \"comment\": \"We sincerely appreciate the time and effort you have invested in reviewing our paper and providing valuable feedback. We are delighted to know that you find our contributions to be \\\"quite novel and significant.\\\"\", \"to_address_your_questions\": \"> Do your results extend to trees of drafts in a straightforward way?\\n\\nYes, our method has been extensively validated on trees of drafts with various structures, including k-branch trees and sparse tree structures, as shown in Table 2. Specifically, we implemented our algorithm based on the Eagle framework and tested its effectiveness on several configurations:\\n- #Drafts = 2, #Steps = 4 \\n- #Drafts = 4, #Steps = 3\\n- EAGLE default sparse tree\\n\\nThe results in Table 2 demonstrate that our method performs consistently well across these scenarios, highlighting its robustness and adaptability to diverse tree structures.\\n\\n> I would like to see a comparison to https://openreview.net/forum?id=N1L5TgtkAw\\n\\nThere is an interesting connection between our work and the mentioned paper \\\"Multi-Draft Speculative Sampling: Canonical Architectures and Theoretical Limits.\\\" In Remark 2 of their paper, they conjecture that \\\"the optimal acceptance probability in the general case of $K > 2$ drafts is attained by replacing the exponent of 2 in the second term in (9) to K.\\\" Our results not only prove this conjecture but are also more general, as their work only considers sampling with replacement, while our results apply to any draft distribution and any number $K$ of drafts.\\n\\nIn other words, our first contribution is more general than Theorem 3 and Remark 2 in that mentioned paper.\", \"our_second_and_third_contributions_go_further_beyond_their_work\": \"2. 
Solving the subset selection problem if the draft distribution satisfies certain properties \\n3. Measuring the theoretical upper bound of MDSD efficiency on real text and the gap of existing verification algorithms\\n\\nWe introduce the concept of q-convexity, which enables efficient and exact solutions to the subset selection problem. To our understanding, their work does not propose a structure similar to q-convexity, and thus cannot achieve efficient solutions. Even if they assume their Remark 2 holds, the subset selection problem has $2^{50272}$ possible variable assignments for the OPT model with $|\\\\Sigma|=50,272$. Brute-force search would be even more costly than linear programming, which already requires petabyte-level storage for either our formulation ($C{\\\\in}\\\\mathbb{R}^{\\\\Sigma\\\\times\\\\Sigma^n}$) and theirs ($\\\\beta_y(x_{1:K}), \\\\forall x_{1:K}\\\\in\\\\Omega^K, \\\\forall y{\\\\in}\\\\Omega$). However, they still report the optimal acceptance probability in Table 2 for $K=2,4,8$. We are not sure how they computed these results as we could not find the exact information in their paper, and they did not provide code.\\n\\nOur unique methods, unlocked by a deeper understanding of the problem's structure, are far more efficient and can handle cases with tens of thousands of tokens without any approximation.\\n\\nRegarding practical methods, the unique contribution of that mentioned paper is a new verification method that more closely approximates the theoretical upper bound of sampling with replacement. Our fourth contribution takes a different path:\\n\\n4. 
We propose a new greedy Multi-Draft Speculative Sampling algorithm that goes beyond sampling with replacement by considering new draft distributions and directly improving the theoretical upper bound.\n\nOur greedy draft construction produces a higher acceptance rate than the theoretical upper bound of sampling with replacement.\n\nWe understand that the paper may require some effort to read due to the notation. We have added an extra section in Appendix E, summarizing all the notations, acting as a handy reference for readers.\n\nOnce again, we express our gratitude for your time and your recognition of our work.\n\nSincerely,\n\nAuthors"}", "{\"title\": \"Author's Response (1)\", \"comment\": \"We sincerely appreciate your time and effort in reviewing our paper and engaging in this discussion. We would like to address your remaining concerns point by point.\\n\\n> While I appreciate the authors' effort in improving the clarity of this paper, it is still difficult for readers to get the key idea, due to massive notations and equations. I strongly recommend that the authors focus on simplifying the notations and equations.\\n\\nThank you for acknowledging our existing efforts to improve the clarity of the paper. Regarding the notations and equations, they are inherent burdens that come from presenting rigorous and novel theoretical results. We believe that all the formulas are necessary to maintain the theoretical rigor, and we are not aware of any equations that can be removed without compromising the strictness. If you have specific suggestions on which parts we could remove while preserving the rigor, we would be more than happy to hear them.\\n\\nIn addition, as per your request, we have already added a clear notation section. Please feel free to use it to navigate through the notations. 
We would be glad to know if there is anything else we could do to help you better get the key idea of the paper.\\n\\n> While the proposed methods and theoretical findings are rigorous and novel, the multi-draft framework remains trivial to me.\\n\\n\\nWe appreciate your recognition of the rigor and novelty of our methods and theoretical findings. The contribution of this paper, which simplifies an exponential complexity problem to $O(|\\\\Sigma|\\\\log|\\\\Sigma|)$, is clearly a highly non-trivial result. As for the general field of multi-draft speculative sampling, it has helped many people and organizations in the real world to save costs and has made the capabilities of LLMs accessible to more people. The field continues to have new research works [1, 2, 3, 4, 5] that improve upon it, therefore we believe the field of multi-draft speculative sampling is also highly non-trivial.\\n\\n[1] Chen, Z., May, A., Svirschevski, R., Huang, Y., Ryabinin, M., Jia, Z., & Chen, B. (2024). Sequoia: Scalable, robust, and hardware-aware speculative decoding. NeurIPS 2024.\\n\\n[2] Sun, H., Chen, Z., Yang, X., Tian, Y., & Chen, B. (2024). Triforce: Lossless acceleration of long sequence generation with hierarchical speculative decoding. COLM, 2024.\\n\\n[3] Cai, T., Li, Y., Geng, Z., Peng, H., Lee, J. D., Chen, D., & Dao, T. (2024). Medusa: Simple llm inference acceleration framework with multiple decoding heads. ICML 2024.\\n\\n[4] Li, Y., Wei, F., Zhang, C., & Zhang, H. (2024). Eagle: Speculative sampling requires rethinking feature uncertainty. ICML 2024.\\n\\n[5] Li, Y., Wei, F., Zhang, C., & Zhang, H. (2024). Eagle-2: Faster inference of language models with dynamic draft trees. EMNLP 2024.\\n\\n> In real-world applications, speculative decoding has been proven to be less effective with batch size > 1, even slow down the inference speed with batch size > 64. 
Therefore, the multi-draft framework, which brings additional overhead, may harm the overall throughput more severely in the high request settings.\\n\\nTo our understanding, the impact of batch size on speculative decoding is still an ongoing research topic with different viewpoints. For example, \\\"MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding\\\" suggests that \\\"an intelligent drafting strategy can achieve better speedup with increasing batch size\\\" and \\\"for moderate to large sequence lengths, speculative decoding can achieve all three objectives: increased throughput, reduced latency, and lossless accuracy.\\\" Regardless of the impact of batch size, it does not affect our study of the theoretical optimal acceptance rate and is orthogonal to our contributions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Author's Response (1)\", \"comment\": \"Thank you for volunteering your time and effort to review our paper. We apologize for any misunderstandings that have arisen. Based on your comments, we note that the complexity of the problem has not been fully conveyed. We will do our best to clarify the complexity to help you better understand our work. We will then address your other comments.\\n\\n### Regarding the complexity of our work:\\n\\n> You stated that problem (equation 7) is intractable/difficult to solve. What algorithms did you use? modern LP algorithms such as interior point methods are quite capable at handling large problems (with exponential constraints), and recently there have been solvers implemented on GPU (see cuPDLD-C).\\n\\nThe difficulty of solving the LP does not depend on the specific LP solver used, but rather on the scale of the problem. 
As stated in our paper, \\\"The difficulty lies in the exponential number of variables and constraints.\\\" \\n\\nFor example, let's conservatively assume the vocabulary size $|\\\\Sigma| = 10^3$ (for reference, Llama's vocabulary size is already 32,000) and the number of multi-drafts $n = 3$. Then the variable $C$ in equation (7) has dimension $\\\\mathbb{R}^{\\\\Sigma\\\\times\\\\Sigma^n} = 10^{12}$. Just expressing the variable would require around 1 Terabyte of storage, which exceeds the memory of a typical CPU or GPU. \\n\\nThe exponential growth in complexity prevents us from feasibly doing large scale experiments with the LP. This difficulty applies to any LP algorithm, including the interior point methods and cuPDLD-C that you mentioned, and they cannot resolve this issue. We hope this clarifies the complexity of the LP problem in equation (7).\\n\\n### Addressing your other comments:\\n\\n> Some claims are optimistic, such as \\\"the upper bound has never been computed before\\\", I personally refrain from making such certain statements.\", \"we_would_like_to_clarify_this_point_by_quoting_the_original_text\": \"\\\"For modern LLMs, where the vocabulary size is typically in the thousands, the optimal acceptance rate has never been computed.\\\" The preceding text was discussing RRS and K-SEQ, which are used for sampling with/without replacement.\\n\\nThe intractability of the problem has been explained above. To the best of our knowledge, we have not seen any insights similar to our paper or any methods that reduce the exponential complexity to polynomial time, making it solvable. \\n\\nHowever, we acknowledge that there might be relevant works we have overlooked. If you are aware of any such prior works, we would greatly appreciate it if you could point them out. We are open to discussing and comparing our work with related literature.\\n\\nIn the absence of identified prior works addressing this specific problem, we believe our claim of novelty stands. 
Nonetheless, we will be more cautious in our phrasing to avoid sounding overly certain.\n\n> Some contributions are minor, as an example, deriving the dual of an LP is not a contribution, yet it is claimed to be in the first bullet point of contributions.", "we_would_like_to_clarify_this_misunderstanding_by_quoting_the_original_text_of_the_first_bullet_point_of_contributions": "\"We transform the problem of solving the optimal acceptance rate corresponding to the optimal transport into a subset selection problem by considering the dual of the problem and then applying total unimodularity. This provides a novel perspective for understanding the efficiency of MDSD.\"\n\nAccording to the original text, our first contribution is \"We transform the problem of solving the optimal acceptance rate corresponding to the optimal transport into a subset selection problem\", and the method we use to make this contribution is \"by considering the dual of the problem and then applying total unimodularity\". The significance of our contribution is that \"This provides a novel perspective for understanding the efficiency of MDSD.\" \n\nTherefore, deriving the dual is only a small step in proving our first contribution. Your previous comment only considered the step of deriving the dual, leading to the misunderstanding that \"Some contributions are minor\". Based on our original text, we maintain that our contributions are groundbreaking and important. \n\nWe would appreciate it if you could kindly revisit your evaluation of the impact, taking the entire scope of this contribution into consideration.\n\n> Although paper is mathematically mature, it borrows a lot from previous publications, in other words, novel theoretical contribution is minor.\n\nWe would like to clarify this misunderstanding by noting that citing many works does not imply that the contributions are minor. 
While we indeed build upon many classic works, as evidenced by our extensive references, this is by no means a reason to consider our theoretical contributions as minor.\\n\\nAll four of our contributions are novel and add significant new insights to the field of multi-draft speculative decoding.\"}", "{\"title\": \"Appreciating Your Feedback\", \"comment\": \"We would appreciate your feedback on if any concerns remain as the discussion phase comes to a close. Thank you for your time and we are grateful for the discussion opportunity.\\n\\nThanks!\"}" ] }
9KiE3t6CsL
ALBAR: Adversarial Learning approach to mitigate Biases in Action Recognition
[ "Joseph Fioresi", "Ishan Rajendrakumar Dave", "Mubarak Shah" ]
Bias in machine learning models can lead to unfair decision making, and while it has been well-studied in the image and text domains, it remains underexplored in action recognition. Action recognition models often suffer from background bias (i.e., inferring actions based on background cues) and foreground bias (i.e., relying on subject appearance), which can be detrimental to real-life applications such as autonomous vehicles or assisted living monitoring. While prior approaches have mainly focused on mitigating background bias using specialized augmentations, we thoroughly study both foreground and background bias. We propose ALBAR, a novel adversarial training method that mitigates foreground and background biases without requiring specialized knowledge of the bias attributes. Our framework applies an adversarial cross-entropy loss to the sampled static clip (where all the frames are the same) and aims to make its class probabilities uniform using a proposed entropy maximization loss. Additionally, we introduce a gradient penalty loss for regularization against the debiasing process. We evaluate our method on established background and foreground bias protocols, setting a new state-of-the-art and strongly improving combined debiasing performance by over 12% absolute on HMDB51. Furthermore, we identify an issue of background leakage in the existing UCF101 protocol for bias evaluation which provides a shortcut to predict actions and does not provide an accurate measure of the debiasing capability of a model. We address this issue by proposing more fine-grained segmentation boundaries for the actor, where our method also outperforms existing approaches.
[ "Bias Mitigation", "Action Recognition" ]
Accept (Poster)
https://openreview.net/pdf?id=9KiE3t6CsL
https://openreview.net/forum?id=9KiE3t6CsL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y9VizzsY3C", "pwe3QH05LV", "poHDoYD15A", "ki2O17D4ky", "js4jIrMDYY", "j3ShN3CFvv", "ib8nUIyS1a", "bogJYIqWFW", "TYjdf22lnm", "THMFIWwCf5", "Pg4Q2TtTDg", "NWjVfm4iFB", "KgLRaZMUSP", "KO1DSikc2l", "JFepage0vo", "EUvClKMLoK", "5h4qgjFMQK", "273rdgbmzH" ], "note_type": [ "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "decision", "official_comment" ], "note_created": [ 1730348405729, 1732497719646, 1731075974889, 1732253848825, 1732254145418, 1732538462069, 1733154642586, 1732293049538, 1732254258036, 1732254425376, 1732254482794, 1732253972905, 1732254012545, 1730687216216, 1734406323511, 1732292556841, 1737523850673, 1732497673061 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_aFRh" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_5f7A" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_EAVn" ], [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_aFRh" ], [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_EAVn" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ], [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_EAVn" ], [ "ICLR.cc/2025/Conference/Submission7609/Area_Chair_9g1v" ], [ "ICLR.cc/2025/Conference/Submission7609/Reviewer_EAVn" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7609/Authors" ] ], "structured_content_str": [ "{\"summary\": 
"This paper presents ALBAR, an adversarial learning method aimed at reducing biases in action recognition models. The study focuses on addressing background and foreground biases, which can impact the performance of applications such as autonomous vehicles and assisted living monitoring. ALBAR utilizes an adversarial training technique to minimize the model's dependence on static background cues, encouraging the use of dynamic motion information for action classification.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"S1: This paper presents state-of-the-art results on established SCUBA/SCUFO background/foreground debiasing benchmarks, and the formulation is technically sound.\", \"s2\": \"Several ablations show the contribution of each component.\", \"weaknesses\": \"W1: Although the results demonstrate that the model outperforms state-of-the-art methods, the technical contribution is incremental and purely based on previously introduced techniques.\", \"w2\": \"How is the fairness of the comparative experiments ensured in this paper, and how are the results of the comparative methods obtained?\", \"w3\": \"The ALBAR performs well on specific bias evaluation protocols, but its generalization capabilities to new types of biases or different domain tasks have not been fully validated.\", \"w4\": \"Although the paper proposes a simplified end-to-end training framework, adversarial training often involves additional computational costs. 
The paper does not discuss the computational efficiency and scalability of the ALBAR method in detail.\", \"questions\": \"Although the results demonstrate that the model outperforms state-of-the-art methods, the technical contribution is incremental and purely based on previously introduced techniques.\\n\\nHow is the fairness of the comparative experiments ensured in this paper, and how are the results of the comparative methods obtained?\\n\\nThe ALBAR performs well on specific bias evaluation protocols, but its generalization capabilities to new types of biases or different domain tasks have not been fully validated.\\n\\nAlthough the paper proposes a simplified end-to-end training framework, adversarial training often involves additional computational costs. The paper does not discuss the computational efficiency and scalability of the ALBAR method in detail.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"#### Citations:\\n[1] Haoxin Li, Yuan Liu, Hanwang Zhang, and Boyang Li. Mitigating and evaluating static bias of action representations in the background and the foreground. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 19911\\u201319923, 2023.\\n\\n[2] Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J Ma, Hao Cheng, Pai Peng, Feiyue Huang, Rongrong Ji, and Xing Sun. Removing the background by adding the background: Towards background robust self-supervised video representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11804\\u201311813, 2021.\\n\\n[3] Shuangrui Ding, Maomao Li, Tianyu Yang, Rui Qian, Haohang Xu, Qingyi Chen, Jue Wang, and Hongkai Xiong. Motion-aware contrastive video representation learning via foregroundbackground merging. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
9716\u20139726, 2022a.\n\n[4] Sagawa, Shiori, et al. \"Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization.\" arXiv preprint arXiv:1911.08731 (2019).\n\n[5] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, California Institute of Technology, 2011."}", "{\"summary\": \"This paper proposes a novel adversarial learning-based method to mitigate biases in action recognition, which provides simplified end-to-end training and does not require any labels/classifiers for bias-related attributes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"This paper is well-written and well-organized.\", \"Good performance on the popular action recognition datasets.\"], \"weaknesses\": [\"This paper introduces adversarial learning into action recognition, which is a relatively novel idea, but I have some concerns as follows.\", \"Are there more visual examples that can depict biases in action recognition?\", \"Whether the method proposed in this paper can be used for skeleton-based action recognition tasks?\"], \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Initial Rebuttal\", \"comment\": \"Thank you for your positive comments!\\n\\n**_W1: Are there more visual examples that can depict biases in action recognition?_**\\n\\n- **Response to Weakness 1:** Absolutely, we can include more. Please download the updated supplementary material. We have added a video file titled `paper7609_bias_examples.mp4`; more qualitative examples can be seen there. 
Let us know if you would like to see any more.\n\n**_W2: Whether the method proposed in this paper can be used for skeleton-based action recognition tasks?_**\n\n- **Response to Weakness 2:** While we have not explicitly validated our methodology in a skeleton-based context, our core approach would definitely transfer. One potential bias in skeleton action recognition could be related to static poses, for example predicting a \"throwing\" action based on a single pose with one arm raised. Applying our static adversarial loss could mitigate this static pose orientation bias, encouraging the model to better consider the motion differences between skeletons. This could be an exciting future direction, and we plan to publicly release our code to support and encourage further exploration in this area."}", "{\"title\": \"Initial Rebuttal (Part 1/4)\", \"comment\": [\"Thank you for highlighting our strengths and for the comprehensive analysis of the paper.\", \"**_W1: The paper is somewhat \\\"small\\\" in that although it describes what seems to be a new method and its evaluation on reasonable benchmarks, there is little discussion around key elements of the method, their limitations, their rationale, etc. (see questions)_**\", \"**_Q1: The mechanism for adversarial learning in the paper is very specific to action recognition. How can this mechanism be generalized to be of broader interest to the ICLR community?_**\", \"**Response to Weakness 1 & Question 1:** We humbly request the reviewer to consider the significance of action recognition as a foundational problem in the broader domain of video understanding. Below, we provide both context and empirical evidence to demonstrate the broader applicability of our approach:\", \"**Importance of Action Recognition in Video Understanding:**\", \"Action recognition serves as the core problem for advancements in video understanding. 
Many state-of-the-art models in video understanding are trained on large-scale action recognition datasets (e.g., Kinetics-400) and subsequently adapted to diverse tasks. Even though standard action recognition models are trained on trimmed videos, their learned representations are directly transferable to various downstream tasks, often without fine-tuning (frozen pretrained model). Such tasks require an encoder to compute high-quality local action understanding, then slide this across videos and model global information across these sets of local low-dimensional features, instead of trying to pass in the high-dimensional videos all at once. In these scenarios, having an unbiased, powerful encoder for trimmed action recognition is crucial.\", \"**Empirical Evidence of Broader Applicability of our Method:**\", \"We provide empirical results in the table below to demonstrate the utility of our proposed debiasing training paradigm across multiple downstream tasks:\", \"**Anomaly Detection:** The first downstream task we evaluate on is weakly supervised anomaly detection. The task requires a model to localize the frames within long, untrimmed videos where some anomaly (defined by the dataset) occurs. Due to the long videos, this task typically starts with a set of features extracted in sliding-window fashion from a Kinetics-400 (or similarly) pretrained video encoder. In this task, discriminability between feature segments is crucial in order to localize anomalous segments. If an encoder exhibits a background bias, for example, then it may output similar features for two clips with the same background but drastically different foregrounds, making the anomalous segments difficult to distinguish. For this weakly supervised anomaly detection task, we report results on the UCF_Crime [1] dataset as the frame-level Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC), which is the standard evaluation metric for this task. 
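For readers unfamiliar with the frame-level AUC metric just mentioned, it equals the probability that a randomly chosen anomalous frame receives a higher score than a randomly chosen normal frame (the Mann-Whitney U formulation). A small sketch with hypothetical labels and scores (not the paper's data):

```python
def frame_auc(labels, scores):
    """Frame-level ROC AUC via the Mann-Whitney U statistic:
    P(score of an anomalous frame > score of a normal frame), ties count 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]  # anomalous frames
    neg = [s for l, s in zip(labels, scores) if l == 0]  # normal frames
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-frame anomaly labels and model scores.
labels = [0, 0, 1, 1, 0, 1]
scores = [0.10, 0.40, 0.35, 0.80, 0.20, 0.70]
print(round(frame_auc(labels, scores), 4))  # 0.8889
```

A model whose features make anomalous segments more separable pushes this pairwise-ranking probability toward 1, which is why AUC is reported for this task.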
We use one of the current SOTA methods MGFN [3] with unchanged hyperparameters for this evaluation, only swapping the feature sets used to those extracted from a baseline model and a model trained using our framework on HMDB51.\", \"**Temporal Action Localization:** The second additional task we evaluate on is untrimmed temporal action localization. The goal of this task is to identify time intervals where particular action classes occur within long videos. Similar to weakly supervised anomaly detection, this task utilizes features extracted from a pretrained video encoder and stands to benefit from models with less static bias and better temporal modeling. For the temporal action detection task, we report results on the THUMOS14 [2] dataset. Evaluation is given as mean average precision (mAP). We use a SOTA model TriDet [4] with standard hyperparameters, again only swapping the feature sets used in a similar fashion to the UCF_Crime protocol.\", \"In the table, HMDB51 (OOD) refers to the Contrasted Accuracy (Contra. Acc.) results explained in Main Paper Section 4.4. Table 1 (next comment, Part 2) shows all of these results. Notably, we see that performance is greatly improved across tasks that require high-quality temporal understanding. These results highlight that our approach is not only relevant for action recognition but also beneficial for diverse downstream tasks in video understanding, further establishing its broader impact.\"]}", "{\"comment\": \"Thank you for the more direct response to the statement about the generality of the proposed method. The discussion is relevant and fine. I'll update the review to match my understanding of the paper and how it may fit into ICLR.\"}", "{\"comment\": \"After reading the rebuttal, I change my rating from 5 to 6.\"}", "{\"title\": \"Discussion response to Parts 3 and 4\", \"comment\": \"These are answered together because they are related.\\n \\nThe answers here seem reasonable. 
The additional experimental data on the frame selection is interesting, and relevant to help understand the paper. But, I do not see this in the current pdf. It would have been helpful to understand how the paper actually evolves based on this discussion (and the other one above).\"}", "{\"title\": \"Initial Rebuttal (Part 2/4)\", \"comment\": \"Table 1: Additional video understanding task results comparison between baseline model and one trained with our ALBAR framework. Action recognition performance is reported as Top-1 accuracy (%), anomaly detection score is given as AUC (%), and temporal action detection as mAP (%). A higher score is desired for both tasks.\\n| Method | HMDB51 (OOD) | UCF_Crime | THUMOS14 |\\n| :---: | :---: | :---: | :---: |\\n| Baseline | 27.84 | 82.39 | 54.89 |\\n| Ours | **53.22** | **84.91** | **55.20** |\\n\\n\\n### Citations\\n[1] Waqas Sultani, Chen Chen, and Mubarak Shah. Real-world anomaly detection in surveillance videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6479\\u20136488, 2018.\\n[2] Y.-G. Jiang, J. Liu, A. Roshan Zamir, G. Toderici, I. Laptev, M. Shah, and R. Sukthankar. THUMOS challenge: Action recognition with a large number of classes. http://crcv.ucf.edu/ THUMOS14/, 2014.\\n[3] Yingxian Chen, Zhengzhe Liu, Baoheng Zhang, Wilton Fok, Xiaojuan Qi, and Yik-Chung Wu. Mgfn: Magnitude-contrastive glance-and-focus network for weakly-supervised video anomaly detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 387\\u2013395, 2023.\\n[4] Dingfeng Shi, Yujie Zhong, Qiong Cao, Lin Ma, Jia Li, and Dacheng Tao. Tridet: Temporal action detection with relative boundary modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
18857\\u201318866, 2023.\"}", "{\"title\": \"Initial Rebuttal (Part 3/4)\", \"comment\": [\"**_W2: Does the \\\"static clip\\\" way of approaching video bias mitigation, have broader utility in video understanding or beyond?_**\", \"**Response to Weakness 2:** Video biases, such as foreground or background bias, primarily stem from the static appearance cues in a video. These biases arise due to the spurious correlation between the appearance and the action label, leading to predictions that rely on appearance rather than motion. By using a static clip, which inherently eliminates motion information, we ensure that the video encoder generates features solely based on the static appearance of the frames. Penalizing the model for making predictions based on these static appearance features helps mitigate appearance-based biases, including foreground and background biases. Since static clips exclusively capture appearance-related information, this approach can be broadly useful in mitigating all types of biases that originate from appearance, extending its utility beyond just action recognition to other video understanding tasks.\", \"**_W3: Certain interesting experiments are omitted, even though the writing suggests they may be useful. For example, L233 reads \\\"A naive application of Eq. 2 results in degraded performance.\\\" What exactly is a naive application? Furthermore, this implies that the two parts of Eq. 2 is individually interesting; in particular, does the right hand side do anything? 
The ablation study does not include these two parts separately._**\", \"**_Q5: How are there reasonable IID results in Table 3 for the case that Ladv is not used ---> when it is not used, there is no actual gradient to guide the model to do any recognition?_**\", \"**Response to Weakness 3 & Question 5:** This confusion appears to stem from our non-ideal notational choices for Ladv. In all experiments, we include the cross entropy loss on the standard motion clip (the left hand side of Ladv as written). In cases where Ladv is not applied, technically this just means that omega_adv = 0 (the right hand side weight). We grouped these components into a single equation to illustrate their adversarial relationship, though we acknowledge that this makes the later notation tricky. To improve clarity, we propose separating Ladv into distinct losses and will adjust notation accordingly throughout our work. This modification will help prevent misinterpretation and make the mathematical formulation more transparent.\", \"**Weakness 3 part 1:** Naive application may not be the best wording here. This simply means applying the loss (Ladv) without the additional regularizing losses (Lent, Lgp).\", \"**Weakness 3 part 2:** Technically, the ablation that evaluates the left hand side separately from the right hand side of Ladv is Main Paper Table 3 row (a) vs. row (b). We did not ablate using just the static adversarial component without the base temporal cross-entropy, though doing so just results in the model devolving into a state where it predicts the same class for every input, technically achieving 1.96% on IID HMDB51. 
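For concreteness, the separated two-term objective discussed in this response might look like the following NumPy sketch (the names `albar_style_loss` and `omega_adv` follow the notation here but are our own illustrative rendering, not the paper's code): cross-entropy on the motion clip is minimized while cross-entropy on the static clip is maximized, and `omega_adv = 0` recovers the plain baseline:

```python
import numpy as np

def cross_entropy(logits, label):
    # numerically stable softmax cross-entropy for a single example
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def albar_style_loss(motion_logits, static_logits, label, omega_adv=1.0):
    """Separated two-term objective: minimize cross-entropy on the motion clip
    while maximizing it on the static (repeated-frame) clip.
    omega_adv = 0 recovers the plain cross-entropy baseline."""
    return cross_entropy(motion_logits, label) - omega_adv * cross_entropy(static_logits, label)

motion = np.array([2.0, 0.1, -1.0])  # confident, correct prediction on the motion clip
static = np.array([0.0, 0.0, 0.0])   # uniform prediction on the static clip (desired)
print(albar_style_loss(motion, static, label=0))
```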
Addressing notation should clarify this point.\", \"**Question 5:** To directly address Q5, the baseline L_CE is still utilized in situations where Ladv is not, just not the right hand static adversarial side (omega_adv = 0).\"]}", "{\"title\": \"Initial Rebuttal (Part 4/4)\", \"comment\": \"**_Q2: The method works by sampling any frame from the video and repeating it for a static clip. Why any frame? Aren't certain frames better or worse than others for the stated goal? (The pre-segmented nature of the datasets in question creates itself an algorithmic bias. In the stated real world application deployments no such pretemporal segmentation is available, and hence brings into question the feasibility of the method in practice.) But, more concretely to the task, why is there no analysis whatsoever on the impact of this frame selection? Even for a subset of one dataset, it would have been interesting to understand the breadth of potential with different static clips._**\\n\\n- **Response to Q2:** This is a very interesting point. We wanted to ensure that our method did not require any additional input or assumptions about what biases were occurring to maintain solid generalization and minimize computation. At first, this meant not having a method/model to choose frames that are more biased than others. In hindsight, this was needlessly restrictive, since choosing a frame position from within the clip takes minimal input and can have a drastic effect. As you say, it is interesting to evaluate the impact of different static clips. 
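As a reference point for this discussion, building a static clip by repeating a single frame — with different frame-position choices — might look like the following NumPy sketch (illustrative naming, not the paper's code):

```python
import numpy as np

def make_static_clip(clip, strategy="middle", rng=None):
    """Build a 'static' clip by repeating one frame of a (T, H, W, C) clip.
    Frame-position choices: first / middle / last / random."""
    T = clip.shape[0]
    if strategy == "first":
        idx = 0
    elif strategy == "middle":
        idx = T // 2
    elif strategy == "last":
        idx = T - 1
    elif strategy == "random":
        idx = int((rng or np.random.default_rng()).integers(T))
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return np.repeat(clip[idx:idx + 1], T, axis=0)

clip = np.arange(8 * 2 * 2 * 1, dtype=float).reshape(8, 2, 2, 1)
static = make_static_clip(clip, "middle")
print(static.shape)  # same (8, 2, 2, 1) shape, but every frame equals frame 4
```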
We explored frame selection strategies for static clips, evaluating first, middle, last, and random frame positions, shown in Table 1 below.\", \"here_is_a_summary_of_our_key_findings\": [\"First and last frames sharply decreased performance, likely due to scene changes or irrelevant information before and after actions, even in the trimmed setting.\", \"Contrasting action learning against such irrelevant information makes the adversarial objective trivial, so nothing useful is learned.\", \"Random frame selection ensures variety in static objectives, leading to strong results.\", \"Notably, middle frame selection improved performance over random selection!\", \"This makes sense, since the middle frame is likely to contain the full background and actor in the middle of performing an action, making it a hard negative sample for the adversarial learning process.\", \"Using a sophisticated method to detect actors/backgrounds in frames (in trimmed or untrimmed setting) and choosing frames based on the existence of both in the chosen static frames would likely achieve the best performance, but this adds too much inductive bias and computation, so we avoid this in this work. Thanks for bringing this up; this experiment led to valuable insights that improved our overall results.\"], \"table_1\": \"Experiment with choosing a specific frame for the static adversarial objective.\\n | Method | IID | SCUBA | SCUFO | ConflFG | ContraAcc |\\n | :--- | :---: | :---: | :---: | :---: | :---: |\\n | Random Frames | 73.20 | 53.22 | 0.42 | 49.84 | 53.02 |\\n | First Frame | 72.75 | 50.92 | 0.18 | 45.59 | 50.91 |\\n | Middle Frame | 72.81 | 53.53 | 1.50 | 48.13 | **53.22** |\\n | Last Frame | 72.68 | 49.49 | 0.40 | 42.42 | 49.40 |\\n\\n\\n**_Q3: Wouldn't it be clearer to concretely specify that the action label notation is a one-hot vector? 
It is one-hot, right?_**\\n\\n- **Response to Q3:** You are correct: the label vector is indeed a one-hot vector; we can specify this in the final version.\\n\\n**_Q4: At line 218, shouldn't \\\"maximized\\\" be \\\"minimized\\\"? At least, something in that sentence does not match up: \\\"p(t) is still matched to gt distribution y, but the similar ....maximized\\\" Maximizing the \\\"similarity\\\" (a loose term here) is minimizing the ce loss._**\\n\\n- **Response to Q4:** Thanks for reading closely and for pointing this out; this is a crucial part to get correct in writing. As you say, the loss should be maximized, not the \\\"similarity\\\" (perhaps alignment is a better term). It would be clearer to say in that paragraph that the cross-entropy should be maximized.\"}", "{\"title\": \"Initial Rebuttal (Part 1/2)\", \"comment\": \"Thank you for your insightful review and detailed questions.\\n\\n**_W1: Although the results demonstrate that the model outperforms state-of-the-art methods, the technical contribution is incremental and purely based on previously introduced techniques._**\\n- **Response to Weakness & Question 1:** While the basic ideas of adversarial training, entropy maximization, and gradient penalties are not novel on their own, we want to highlight the aspects of our technical contribution that are novel:\\n 1. First, adversarial training typically utilizes labels or separate models to facilitate the counter-objective, while we reduce complexity and create a strong negative by utilizing the same encoder and classifier head, manipulating the input itself instead. By consolidating adversarial training techniques into a single, streamlined framework, we offer a more efficient and integrated approach to addressing bias in video action recognition.\\n - Previous works have utilized separate 2D and 3D encoders, repelling representations in a contrastive-like objective. 
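As a side illustration of the entropy-maximization component mentioned in this response (Lent), here is a minimal NumPy sketch — our own rendering, not the paper's implementation — of a negative-entropy term whose minimization drives the static-clip prediction toward uniform:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_reg(static_logits):
    """Negative entropy of the static-clip prediction; minimizing this
    pushes the static prediction toward the uniform (maximum-entropy) one."""
    p = softmax(static_logits)
    return float(np.sum(p * np.log(p + 1e-12)))

uniform = np.zeros(4)                    # already maximum entropy
peaked = np.array([5.0, 0.0, 0.0, 0.0])  # confident static prediction (undesired)
print(entropy_reg(uniform), entropy_reg(peaked))  # the uniform case gives the lower value
```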
We are the first to combine the objective into the same encoder and adversarially train in this manner. \\n 2. Our formulation of minimizing gradient norm to stabilize adversarial training weight updates w.r.t. a specific input type represents a novel technical contribution. This approach provides a more nuanced method of managing adversarial training dynamics.\\n\\n**_W2: How is the fairness of the comparative experiments ensured in this paper, and how are the results of the comparative methods obtained?_**\\n\\n- **Response to Weakness & Question 2:** Great care was taken to ensure fairness in experimentation. Most results were sourced from prior publications. The StillMix paper released their codebase along with comprehensive hyperparameter choices. Due to this, we were able to replicate their numbers in our own implementation. We retained their original hyperparameter choices and incorporated our proposed losses, ensuring a fair comparative evaluation. In the case of results not being reported previously\\u2014on our proposed UCF101 protocol fix\\u2014we used the reproduced model for evaluation.\\n\\n**_W3: The ALBAR performs well on specific bias evaluation protocols, but its generalization capabilities to new types of biases or different domain tasks have not been fully validated._**\\n\\n- **Response to Weakness & Question 3:** We appreciate your point here. In the following Table 1, we demonstrate our method's generalizability across different domain tasks (weakly supervised anomaly detection, temporal action localization). Regarding biases, the SCUBA/SCUFO protocols are designed to comprehensively address static biases in video action recognition, notably introducing foreground bias evaluation. Currently, we are unaware of alternative protocols for assessing other types of video action recognition bias. 
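The gradient-norm idea in point 2 can be illustrated numerically — a finite-difference toy of penalizing the squared norm of the loss gradient with respect to an input (our own sketch; an actual implementation would use autograd rather than finite differences):

```python
import numpy as np

def grad_norm_penalty(loss_fn, x, eps=1e-5):
    """Squared L2 norm of d(loss)/d(x), estimated with central finite
    differences; penalizing it discourages sharp loss changes around x."""
    flat = x.astype(float).ravel()
    g = np.zeros_like(flat)
    for i in range(flat.size):
        d = np.zeros_like(flat)
        d[i] = eps
        g[i] = (loss_fn((flat + d).reshape(x.shape))
                - loss_fn((flat - d).reshape(x.shape))) / (2 * eps)
    return float((g ** 2).sum())

quadratic = lambda v: float((v ** 2).sum())  # toy loss; its true gradient is 2v
x = np.array([1.0, -2.0, 0.5])
print(grad_norm_penalty(quadratic, x))  # ||2x||^2 = 21 for this toy loss
```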
We welcome suggestions for additional bias evaluation methods and would be eager to incorporate them.\\nIn the table, HMDB51 (OOD) refers to the Contrasted Accuracy (Contra. Acc.) results explained in Main Paper Section 4.4. The downstream tasks use features extracted from our debiased model trained on HMDB51. Notably, performance is greatly improved across tasks that require high-quality temporal understanding. These results highlight that our approach is not only relevant for action recognition but also beneficial for diverse downstream tasks in video understanding, further establishing its broader impact. For additional details on this implementation/analysis, please refer to our initial response to Reviewer `EAVn`.\", \"table_1\": \"Additional video understanding task results comparison between baseline model and one trained with our ALBAR framework. Action recognition performance is reported as Top-1 accuracy (%), anomaly detection score is given as AUC (%), and temporal action detection as mAP (%). A higher score is desired for both tasks.\\n| Method | HMDB51 (OOD) | UCF_Crime | THUMOS14 |\\n| :---: | :---: | :---: | :---: |\\n| Baseline | 27.84 | 82.39 | 54.89 |\\n| Ours | **53.22** | **84.91** | **55.20** |\"}", "{\"title\": \"Initial Rebuttal (Part 2/2)\", \"comment\": \"**_W4: Although the paper proposes a simplified end-to-end training framework, adversarial training often involves additional computational costs. The paper does not discuss the computational efficiency and scalability of the ALBAR method in detail._**\\n\\n- **Response to Weakness & Question 4:** This is a good point for us to address. While adversarial training does introduce computational overhead, our framework is intentionally designed to minimize additional computational costs. Specifically, the only additional cost comes from creating the static video batch and passing it through the model in addition to the standard clips, which scales only linearly with the video encoder size. 
Critically, we avoid loading additional models, introducing new parameters, or using overly complex adversarial architectures. On HMDB51, using a single 80GB A100 GPU, our complete training process (including validation each epoch) requires approximately 9 hours\\u2014demonstrating the method's computational efficiency. With the only additional overhead being the computation of extra losses from the same model, our approach is lightweight and practical. Apart from the training computational cost, it is important to note that our method incurs no additional computational overhead during practical deployment compared to a standard model.\"}", "{\"summary\": \"The paper proposes a method to improve the generalization capability of an activity recognition method on temporally-segmented video datasets. The method focuses on an adversarial approach to removing biases from static elements of the scene. The paper takes a random frame from the video and creates a \\\"static\\\" video by repeating the frame the same length as the original clip, then using this static clip as the adversary. The paper combines this adversarial approach with other reasonable loss terms. It demonstrates state of the art accuracy on the problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The problem of bias mitigation in ML is seriously important. 
The paper proposes a concrete approach to mitigating clear bias problems in action recognition.\", \"The entropy maximization term makes sense for avoiding the static cues.\", \"The paper uses practical mechanisms to overcome the challenges of training the models.\", \"The evaluation is performed on a recent OOD benchmark for bias analysis, and performs well relative to other methods.\", \"The writing is mostly clear and concrete.\"], \"weaknesses\": [\"The paper is somewhat \\\"small\\\" in that although it describes what seems to be a new method and its evaluation on reasonable benchmarks, there is little discussion around key elements of the method, their limitations, their rationale, etc. (see questions)\"], \"edit\": [\"The review discussion thus far, and the other reviewers, seem to have identified similar concerns. Discussion around this point suggests there is some generality to visual problems, but some concern about the narrowness of the contribution remains. Most of the comparative papers are in CV conferences. I raised my score, but this point should be discussed among the AC pair/triplet however ICLR is doing it this round.\", \"Does the \\\"static clip\\\" way of approaching video bias mitigation have broader utility in video understanding or beyond?\", \"Certain interesting experiments are omitted, even though the writing suggests they may be useful. For example, L233 reads \\\"A naive application of Eq. 2 results in degraded performance.\\\" What exactly is a naive application? Furthermore, this implies that the two parts of Eq. 2 are individually interesting; in particular, does the right hand side do anything? The ablation study does not include these two parts separately.\"], \"questions\": [\"The mechanism for adversarial learning in the paper is very specific to action recognition. 
How can this mechanism be generalized to be of broader interest to the ICLR community?\", \"The method works by sampling any frame from the video and repeating it for a static clip. Why any frame? Aren't certain frames better or worse than others for the stated goal? (The pre-segmented nature of the datasets in question creates itself an algorithmic bias. In the stated real world application deployments no such pretemporal segmentation is available, and hence brings into question the feasibility of the method in practice.) But, more concretely to the task, why is there no analysis whatsoever on the impact of this frame selection? Even for a subset of one dataset, it would have been interesting to understand the breadth of potential with different static clips.\", \"Wouldn't it be clearer to concretely specify that the $\\\\vby$ action label notation is a one-hot vector? It is one-hot, right?\", \"At line 218, shouldn't \\\"maximized\\\" be \\\"minimized\\\"? At least, something in that sentence does not match up: \\\"p(t) is still matched to gt distribution y, but the similar ....maximized\\\" Maximizing the \\\"similarity\\\" (a loose term here) is minimizing the ce loss.\", \"How are there reasonable IID results in Table 3 for the case that Ladv is not used ---> when it is not used, there is no actual gradient to guide the model to do any recognition?\"], \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a novel adversarial learning-based method to mitigate biases in action recognition, which provides simplified end-to-end training and does not require any labels/classifiers for bias-related attributes. This paper is well-written and well-organized. Good performance on the popular action recognition datasets. 
Reviewers are concerned about the scalability and generalization of the proposed methods, the incremental technical contribution, and the fairness of method comparison. After rebuttal, the authors have addressed these major concerns. The final vote is acceptance.\", \"additional_comments_on_reviewer_discussion\": \"After the rebuttal period, the reviewers raised the vote to accept. So this submission is above the acceptable standard.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Discussion\", \"comment\": \"We apologize for the missed response; we had misinterpreted your concerns. 
We had thought that your concern was with the limited scope of specifically action recognition, but we see now that you are referring to how the core methodology could be applicable outside of video understanding as a whole. Throughout the method and ablation sections, we discuss the rationale for adding each component, motivated by the limitations of each on its own and how they combine for the best performance. However, to your point, the analysis is indeed limited to our direct problem setting, not necessarily a general analysis of the components.\\n\\nOur paper is positioned as an improvement on previous background and foreground debiasing works in action recognition, similar to papers like StillMix [1], Background Erasing (BE) [2], & ActorCutMix [3]. That being said, it would certainly strengthen the work to look outside of video understanding and express how the method components may be of use to the general community. Our method was built based on an observation of a semi-unique property of video -- the fact that a single frame from along the temporal dimension contains all the same 2D information as a 3D clip, but it does not contain the information necessary to classify the end result: an action taking place across time. At the core of this is the idea that we have paired inputs: one containing all necessary information for classification, and one containing mostly the same information but lacking the crucial information needed to complete the task. We believe that given this unique problem setup, our methodology should apply. As such, we have put together a quick experiment to evaluate our method in an alternate, albeit related, domain: image classification. Similar to action recognition, the background bias problem here is well-known and well studied. In the Waterbirds [4] dataset (based on CUB-200-2011 [5]), images are built by taking segmented images of specific bird types and placing them on specific background types (land birds, waterbirds vs. 
land-based, water-based backgrounds), causing models to learn the spurious correlation between the bird type and background type. We observe a similar phenomenon to our problem setup, where instead of having the temporal dimension to reduce, we can reduce information in the spatial dimensions by removing the foreground. Here, our pairing becomes the original image (containing the foreground bird for classification) and the original background image (with no bird, so it should be useless for bird classification). Acquiring these pairs in a non-synthetic setup takes more effort than our video-based setup, but our core method should nonetheless still apply. The results in Table 3 below indicate that there is merit to our method outside of video understanding, seeing as we improve both minority classes and worst-group accuracy, even without spending time to optimize hyperparameters. While this is not a robust analysis, we believe that in setting up these two scenarios (video debiasing, image classification debiasing), we demonstrate the broader applicability of our method across domains, contingent upon having the unique paired input setup.\", \"table_3\": \"Generalization experiment using Waterbirds [4]. Per-class accuracy (%) evaluation is provided, with worst-group accuracy commonly used as an evaluation metric. WoW = Waterbirds on Water, LoL = Landbirds on Land, etc.\\n| Method | WoW (majority) | WoL (minority) | LoL (majority) | LoW (minority) |\\n| :--- | :---: | :---: | :---: | :---: |\\n| Baseline (R18) | 92.68 | 50.00 | **99.16** | 76.45 |\\n| Ours (R18) | **92.99** | **59.50** | 98.67 | **78.00** |\\n\\n\\nWe would like to emphasize that the scope of this work was meant to advance the current state of foreground and background debiasing in action recognition, but we thank you for recognizing that our methods are potentially useful to the general ICLR community, not just those working in video understanding. 
It is our hope that publishing our work, along with our preliminary analysis into applications in new domains, would expose these ideas to the ICLR community, giving them the opportunity for future exploration of similar methods in their respective domains.\", \"note\": \"Since the standard Waterbirds dataset does not separately contain references to the background images used, we had to create a split (using the public code provided by the original authors), modifying it to additionally save the background images to create our pairing.\"}" ] }
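For reference, the worst-group accuracy metric reported in Table 3 above can be computed as follows — a small illustrative helper (the naming is ours, not the Waterbirds benchmark code):

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Per-group accuracy with the minimum reported, as in the Waterbirds
    worst-group evaluation (group = bird type x background type)."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[str(g)] = float((preds[mask] == labels[mask]).mean())
    return min(accs.values()), accs

preds  = np.array([1, 1, 0, 0, 1, 0])
labels = np.array([1, 0, 0, 0, 1, 1])
groups = np.array(["WoW", "WoL", "LoL", "LoL", "WoW", "LoW"])
worst, per_group = worst_group_accuracy(preds, labels, groups)
print(worst, per_group)  # the worst group here has accuracy 0.0
```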
9KatbAXLAq
Certified PEFTSmoothing: Parameter-Efficient Fine-Tuning with Randomized Smoothing
[ "Chengyan Fu", "Yue Xu", "Jian Lou", "Meikang Qiu", "Zhan Qin", "Wenjie Wang" ]
Randomized smoothing is the primary certified robustness method for assessing the robustness of deep learning models to adversarial perturbations in the $l_2$-norm, by taking a majority vote over the base classifier's predictions on multiple randomly Gaussian-perturbed copies of the input. To fulfill the certified bound and empirical accuracy of randomized smoothing, the base model either needs to be retrained from scratch to learn Gaussian noise or requires an auxiliary denoiser to eliminate it. In this work, we propose \textit{PEFTSmoothing}, which teaches the base model to learn Gaussian noise-augmented data with Parameter-Efficient Fine-Tuning (PEFT) methods in both white-box and black-box settings. This design is based on the intuition that large-scale models have the potential to learn diverse data patterns, including noisy data distributions. In addition, we explore the possibility of combining \textit{PEFTSmoothing} with fine-tuning for downstream task adaptation, which allows us to simultaneously obtain a robust version of the large vision model and its adaptation tailored to downstream datasets. Extensive results demonstrate the effectiveness and efficiency of \textit{PEFTSmoothing}, which allows us to certify over 98\% accuracy for ViT on CIFAR-10, 20\% higher than SoTA denoised smoothing, and over 61\% accuracy on ImageNet, which is 30\% higher than CNN-based denoisers and comparable to Diffusion-based denoisers.
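For readers new to the technique, the abstract's majority-vote prediction can be sketched as a small Monte-Carlo illustration in the spirit of Cohen et al.'s randomized smoothing — the thresholding `base_classifier` below is a toy stand-in (not the paper's model), and real certification additionally requires a statistical lower bound on the top-class probability:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n=100, rng=None):
    """Monte-Carlo prediction of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), eps ~ N(0, sigma^2 I):
    a majority vote over n Gaussian-perturbed copies of the input."""
    rng = rng or np.random.default_rng(0)
    votes = {}
    for _ in range(n):
        c = base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)

# Toy base classifier: thresholds the mean pixel value (a stand-in, not a real model).
f = lambda x: int(x.mean() > 0.5)
x = np.full((3, 4, 4), 0.8)
print(smoothed_predict(f, x))  # the vote is overwhelmingly class 1 here
```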
[ "Certified Robustness", "Parameter-Efficient Fine Tuning", "Adversarial Example" ]
https://openreview.net/pdf?id=9KatbAXLAq
https://openreview.net/forum?id=9KatbAXLAq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "jSURj6AcoA", "ektPAR7nC3", "TNXpS5JMRZ", "D98i4hAx8e", "8Rjudeql6a" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730680842141, 1731657031371, 1730122642025, 1730548508592, 1730393475208 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9448/Reviewer_mJTE" ], [ "ICLR.cc/2025/Conference/Submission9448/Authors" ], [ "ICLR.cc/2025/Conference/Submission9448/Reviewer_Q9gT" ], [ "ICLR.cc/2025/Conference/Submission9448/Reviewer_5MLq" ], [ "ICLR.cc/2025/Conference/Submission9448/Reviewer_fmc4" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors propose a PEFT-inspired method for adapting pre-trained base models to produce robust classification performance under noise-augmented data. The authors propose a white-box method that augments a trainable layer to a given base model (of various architectures) and tunes its parameters with respect to a Cross-Entropy Loss under noise-augmented labeled data samples. They also provide a black-box approach that uses an AutoEncoder style Coordinator with a frozen encoder and a trainable decoder to denoise/augment the noisy data samples before feeding them to the base model. The black-box coordinator is trained using SPSA with Cross Entropy Loss under noise-augmented labeled data. Finally, the authors also propose a joint fine-tuning and robustifying training that retrains a large model to adapt it to a downstream task while also adapting it to the randomized smoothing noise injection.\\n\\nThe authors also provide extensive experiments for CIFAR-10 and Imagenet datasets, comparing the proposed methods to other SOTA denoised smoothing methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The proposed method combines the advantages of retraining and denoising-based randomized smoothing methods to provide a best-of-both-worlds solution. 
The black-box approach also makes PEFTSmoothing easy to use with multiple different architectures without requiring further customization.\\n\\nThe experimental results on CIFAR10 beat the current SOTA methods by an impressively large margin. \\n\\nThe ablation studies, as well as the GradCAM experiments, are quite helpful for understanding the relative advantages of the proposed approach.\", \"weaknesses\": \"The presentation in the paper needs to improve. The paper assumes a lot of background on PEFT from the reader. Given that most of the results in the paper are based on LoRA, it might be useful to provide a detailed explanation of the setup and training of the LoRA model. Similarly, I would urge the authors to provide a more detailed overview of the four different PEFT approaches considered in the paper. Figure 1 does not adequately explain the PEFT setting.\\n\\nThe Imagenet results in the appendix are considerably worse than those of the Diffusion denoiser. While the authors suggest that this can be attributed mostly to the fact that the diffusion-based denoiser is more powerful for higher-resolution images, is there a smoother trade-off that can be established between the two models? A small discussion on this could be quite helpful.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work proposes a parameter-efficient fine-tuning (PEFT) method to explore the large-scale model's potential to learn the Gaussian noised data pattern, thus enhancing its performance on randomized smoothing. The white-box PEFT and black-box PEFT smoothing methods are proposed for different scenarios. 
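As background for the LoRA setup the reviewer asks to see explained: the generic low-rank update can be sketched in a few lines (NumPy, our own illustrative naming — not the paper's implementation):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass of a LoRA-adapted linear layer: the frozen weight W is
    augmented with a trainable low-rank update B @ A (rank r), scaled by alpha.
    During PEFT only A and B receive gradients; W stays frozen."""
    return x @ (W + alpha * (B @ A)).T

d_in, d_out, r = 8, 4, 2             # r << min(d_in, d_out)
W = np.random.randn(d_out, d_in)     # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, initialized to zero

x = np.random.randn(5, d_in)
out = lora_forward(x, W, A, B)
# With B initialized to zero, the adapted layer matches the frozen layer exactly.
print(np.allclose(out, x @ W.T))
```

Initializing `B` to zero is the usual LoRA choice, so fine-tuning starts from the pretrained model's behavior.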
Extensive and comprehensive experiments are conducted to demonstrate the effectiveness of the proposed PEFT method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. This paper is well-written and structured. The method is intuitive and easy for readers to understand.\\n\\n2. Comprehensive experiments are conducted to demonstrate its effectiveness.\", \"weaknesses\": \"1. My major concern is related to the novelty and significance of this work:\\n\\n**Novelty**: The white-box and black-box tuning methods in this paper are all existing work. This paper just utilizes the existing PEFT on a large-scale classification model and tests its performance under randomized smoothing. This raises concerns regarding the paper\\u2019s technical contribution.\\n\\n**Significance**: The primary challenge in randomized smoothing (RS) is not training noise-robust classifiers but reducing the extensive inference time required for certification, often involving a majority vote over 10,000 samples. While the paper aims to reduce model training time through PEFT, this does not address the more critical issue of RS's substantial inference delay. This raises concerns about the practical impact of the proposed work.\\n\\n2. Some experimental settings are unfair:\\n\\nIn Figure 2, it's unfair to compare the PEFT methods with those training-free denoising methods.\\n\\nIn Table 1, clarification is needed regarding the base models used for RS and DS methods. Based on the reported performance, it appears these methods are evaluated using ResNet architectures, while PEFTsmoothing employs ViT-L and ViT-B models.\\n\\n3. An interesting result is that the LORA-based tuning method outperforms the full fine-tuning-based method. Typically, the performance of parameter-efficient methods could be close to but not surpass the full fine-tuning method. 
A more detailed analysis of this would be valuable.\", \"questions\": \"see the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethics concerns.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposes a new approach to the random smoothing approach for making neural networks robust to adversarial perturbation. Rather than retraining models from scratch, this work proposes using parameter efficient fine-tuning (PEFT), typically used to fine-tune language models, to enable models to learn adversarially perturbed data-distributions.\", \"the_key_contributions_are_as_follows\": [\"White-box PEFT smoothing, in which a vision model is fine-tuned after perturbing the dataset with gaussian noise.\", \"Black-box PEFT tuning, in which zeroth-order optimization (specifically SPSA) is used to create a prompt-tuning based method for fine-tuning the model with perturbed data.\", \"A thorough experimental validation shows that PEFT can be used to efficiently make ViT models more robust.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The fundamental idea of using fine-tuning techniques to improve models' capacities for adversarial robustness is an interesting one.\", \"The experimental validation is reasonably thorough (though there are issues in the setup).\"], \"weaknesses\": [\"This work currently has several weaknesses.\", \"There are several instances when the writing is imprecise. For instance\", \"The claim made on lines 144-146 \\\"Theoreticaly, Theorem 2.1 ... with Gaussian noised inputs\\\" is a strong one, and requires either a citation or some kind of rigorous or empirical justification in the paper.\", \"Lines 150-156 are also unclear. For instance, the authors should clarify \\\"large-scale vision model ... acquire such potential ...\\\". 
What does the phrase \\\"acquire such potential\\\" refer to?\", \"lines 346-347: \\\"this is mainly due ... especially for high resolution images\\\" is another strong claim that either requires a citation or empirical justification\", \"It is unclear what the authors mean by \\\"certified.\\\" Is it that the PEFT can be used to guarantee the conditions of Theorem 2.1? If so, can the authors provide a formal proof for this? In order to make a claim that a method is 'Certified', a rigorous guarantee must be provided.\", \"Equation (6) is written poorly and appears to be key to the entire experimental slate (and thus, this paper). Specifically:\", \"The expression should be written in terms of indicator functions\", \"The use of the '&' is unclear - do you mean the product?\", \"The authors state that \\\"certifiedCheck returns 1 if Theorem 2.1 is satisfied\\\". However, the conditions in Theorem 2.1 are all probabilistic in nature. How are the probabilities computed? Specifically, how is $\\\\mathrm{Pr}[F(x+\\\\varepsilon) = c_A]$ computed?\"], \"questions\": \"See 'Weaknesses' section for questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes PEFTsmoothing, an approach for teaching the base model to learn the Gaussian noise augmented data for white box and black box certified robustness. Certification results are shown on benchmark datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The authors present PEFTSmoothing, a certifiable method to convert large base models into robust versions. They also explore how it can be extended to achieve certified robustness and downstream task adaption with fine-tuning. Black-box extensions are considered. Experiments are shown on large vision models and results show the effectiveness of the approach.\", \"weaknesses\": \"1. Lack of novelty. 
The training objective (4) is not novel, see baselines below. The overall premise of the paper does not seem novel enough for ICLR standards. Combining two well known methods, smoothing and PEFT, does not cross the novelty threshold. There is also a very large literature on certified robustness that the authors do not compare against, see https://sokcertifiedrobustness.github.io/\\n\\n2. Limited baselines. Compare against and cite the following relevant works:\\n- S. Srinivas et al., Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness, NeurIPS 2023\\n- R. Shao et al., On the Adversarial Robustness of Vision Transformers, TMLR 2022, https://arxiv.org/pdf/2103.15670\\n- T. Tsiligkaridis et al, Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration, IJCNN 2023, https://arxiv.org/pdf/2104.01231\\n\\nThe datasets are also quite limited and simple so I suggest a more thorough experimental study involving more domain-specific datasets, e.g. WILDS.\\n\\n3. How far is empirical robustness from certified robustness in your setting? A comparison is needed. Approaches for gradient masking mitigation have been studied. What are the tradeoffs? What about computational complexity?\\n\\n4. How does the geometry of the loss landscape depend upon the robustness properties of the large vision models you study? These are intimately related based the adversarial robustness literature.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
9KNnSvUxLl
TimeKAN: A Transparent KAN-Based Approach for Multivariate Time Series Forecasting
[ "Zechuan Chen", "TianMing Sha", "Ziyi Tang", "Keze Wang" ]
In recent years, numerous deep learning models have been proposed for Multi-variate Time Series (MTS) forecasting, with Transformer-based models showing significant potential due to their ability to capture long-term dependencies. However, existing models based on MLPs or Transformers often suffer from a lack of interpretability due to their large parameter sizes, which can be problematic in many real-world applications. To address this issue, we propose TimeKAN, a model based on Kolmogorov-Arnold Networks. The KAN model offers two key advantages: (1) it achieves accuracy comparable to MLPs with significantly fewer parameters, and (2) its parameters can be symbolized, which makes it possible to interpret the meaning of the parameters. Additionally, instead of the usual attention mechanisms, we designed a Multi-Scale Patching (MSP) module for MTS that allows for more flexible and simple multi-patching and effectively extracts both temporal and cross-dimensional features. By leveraging this strategy along with KAN, TimeKAN constructs a hierarchical structure capable of utilizing information across different scales, leading to highly accurate predictions. Extensive experiments on six real-world datasets demonstrate that TimeKAN outperforms state-of-the-art (SOTA) methods in terms of predictive performance. Furthermore, we interpret TimeKAN by visualizing its learning process for extracting symbolized features, opening the black box and revealing meaningful patterns within the time series.
[ "Multi-variate Time Series (MTS) Forecasting", "Kolmogorov-Arnold Networks (KAN)", "White Box", "Multi-scale modelling" ]
https://openreview.net/pdf?id=9KNnSvUxLl
https://openreview.net/forum?id=9KNnSvUxLl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tN2r0jmjha", "f7WO1RwhrM", "T7dYiHMc0o", "GROfzvEbrx", "FYvTxKJxRo" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730347529807, 1730523974869, 1731912655038, 1730108734452, 1729001158155 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6750/Reviewer_3zmk" ], [ "ICLR.cc/2025/Conference/Submission6750/Reviewer_EkiL" ], [ "ICLR.cc/2025/Conference/Submission6750/Authors" ], [ "ICLR.cc/2025/Conference/Submission6750/Reviewer_U1Xw" ], [ "ICLR.cc/2025/Conference/Submission6750/Reviewer_X5XU" ] ], "structured_content_str": [ "{\"summary\": \"The article proposes TimeKAN, a model based on Kolmogorov-Arnold Networks for multivariate time series forecasting. TimeKAN employs symbolization techniques to represent the learned features and visualize the training process, addressing the interpretability challenge. Experimentally, TimeKAN outperforms state-of-the-art (SOTA) methods in terms of predictive performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces Kolmogorov-Arnold Networks to time series data to improve the performance and interpretability.\n2. The method is simple and intuitively effective.\", \"weaknesses\": \"1. The overall novelty is limited: hierarchical decomposition architectures have already been studied by N-BEATS and N-HiTS, and the patch-size selection method is similar to that of the existing TimesNet.\n2. Although the authors have included some MLP-based models as baselines, some highly-related works are omitted, such as N-BEATS [1] and Pathformer [2].\n[1] Oreshkin, Boris N., et al. \\"N-BEATS: Neural basis expansion analysis for interpretable time series forecasting.\\" ICLR 2020.\n[2] Chen, Peng, et al. \\"Pathformer: Multi-scale transformers with adaptive pathways for time series forecasting.\\" ICLR 2024.\n3. 
Since the authors argue that existing MLP-based and Transformer-based models lack interpretability due to their large parameter sizes (Line 15) and propose the effective TimeKAN, a detailed efficiency analysis is required.\", \"questions\": \"1. Could you give more explanation on the notations $x_1, ..., x_5$ in Figure 1 and $X_6, ..., X_{10}$ in Figure 5?\n2. In the experiments, why did you choose not to use ECL and Traffic datasets? They are also well-established multivariate forecasting benchmarks similar to ETT.\n3. Could you provide more discussion on the experimental results in Table 2? Why did the number of MSPs introduce such a large fluctuation in the results?\n4. Could you provide some intuitive showcase of the prediction results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, TimeKAN, a Kolmogorov-Arnold network-based time series model, is proposed and demonstrates state-of-the-art prediction performance on some real-world datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper attempts to introduce the KAN and a Multi-Scale Patching module for time series forecasting, aiming to achieve fewer model parameters and better prediction performance.\n\n2. It tries to explore the predictive and interpretability capabilities of the proposed method in time series.\", \"weaknesses\": \"Lack of comprehensive explanation and analysis of the benefits of introducing new architectures in place of well-used model modules, and incomplete experimental validation.\", \"questions\": \"1. The proposed method lacks novelty and introduces a new architecture in time series prediction, but there is insufficient theoretical justification for the benefits of introducing KAN into time series prediction.\n\n2. 
The paper emphasizes that KAN has fewer parameters compared to MLP, yet it lacks a comparative analysis of parameter counts and effectiveness against existing MLP-based models such as DLinear, N-BEATS, RLinear, and Transformer-based models.\n\n3. Experimental comparisons with baselines are not fully fair, as they lack tests on multidimensional datasets such as ECL and Traffic, which include more channels and larger data sizes. Some advanced time series models such as iTransformer can better accommodate these prediction scenarios.\n\n4. Ablation studies are incomplete and require a thorough comparison across all datasets, rather than just the small ETT and Exchange datasets. Moreover, the improvement from replacing MLP with KAN on existing datasets is not significant. This conclusion can be seen from the relative improvement in prediction performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces TimeKAN, a novel approach for multivariate time series forecasting that leverages Kolmogorov-Arnold Networks (KAN) and a Multi-Scale Patching (MSP) module. TimeKAN addresses the challenges of interpretability and parameter efficiency while maintaining high predictive accuracy. The model captures both temporal and cross-dimensional features across various scales and provides insights into its decision-making process through symbolic feature extraction.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. TimeKAN enhances model interpretability through symbolic regression, allowing for the extraction of human-readable models that explain underlying data patterns. 
The model achieves comparable accuracy to MLPs with significantly fewer parameters, which is beneficial for computational efficiency and model training.\n2. The MSP module effectively captures multi-period dependencies between variates, allowing the model to focus on a broader spectrum of temporal patterns. TimeKAN also outperforms existing methods, including both MLP-based and Transformer-based architectures, across various datasets and forecasting horizons.\", \"weaknesses\": \"1. TimeKAN may not be the best performer in ETTm2, as indicated by MSD-Mixer outperforming it for every output length. The effectiveness of the MSP module relies on the determination of patching sizes through FFT, which may introduce complexity in data preprocessing.\n2. The paper primarily focuses on performance on seen datasets, and it is unclear how TimeKAN would generalize to unseen or significantly different data distributions. While the model is parameter-efficient, the computational cost of training and inference, especially for real-time scenarios, is not explicitly addressed.\n3. The paper could benefit from a more detailed discussion on scenarios where TimeKAN may underperform or fail, which would aid in understanding its limitations.\", \"questions\": \"As commented in the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper titled \\"TimeKAN: A Transparent KAN-Based Approach for Multivariate Time Series Forecasting\\" introduces TimeKAN, a new method for multivariate time series forecasting (MTSF) using Kolmogorov-Arnold Networks (KAN). TimeKAN addresses challenges such as lack of interpretability and difficulty in capturing long-term dependencies and cross-dimensional relationships in MTSF. 
This model introduces a Multi-Scale Patching (MSP) module to capture features at different temporal resolutions, while using symbolic techniques to enhance interpretability by visualizing the features. Extensive experiments on six real-world datasets have shown that TimeKAN outperforms state-of-the-art models in both prediction accuracy and interpretability, making it highly suitable for complex time series tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Originality: In the context of multivariate time series prediction, using Kolmogorov-Arnold Networks (KAN) to bridge the gap between interpretability and accuracy is a new approach. KAN allows for symbolic representation of learned features, which improves the transparency of the model compared to deep learning models, especially Transformer-based architectures that often lack interpretability.\", \"quality\": \"The design of the MSP module is particularly noteworthy as it captures temporal dependencies at different scales. By dividing the input sequence into patches of different sizes, this model can capture short-term and long-term dependencies. The experimental section clearly demonstrated that the model consistently outperforms state-of-the-art models on six benchmark datasets, including ETT, Weather, and Exchange.\", \"clarity\": \"This article provides a clear explanation of the architectural innovations of TimeKAN, particularly the detailed decomposition of the multi-scale patching mechanism (pages 6-7) and the symbolic regression technique used to explain learned representations. The addition of visualization tools further increases the transparency of the model.\", \"weaknesses\": \"Computational efficiency: Although TimeKAN has strong predictive accuracy, its computational efficiency on very large datasets or ultra-long sequences may become a bottleneck. 
The MSP module increases the complexity of the model by generating multiple patch sizes, which may result in high computational costs, especially when applied to datasets outside of those included in the experiment. It would be beneficial to discuss in more detail the potential of parallelization or distributed training to alleviate this situation.\", \"extension_to_other_fields\": \"Although the datasets used for evaluation are diverse, they are limited to specific fields such as energy, meteorology, and finance. There is no detailed exploration of extending symbolic feature representation to other fields, such as biological or industrial time series data. Future work may include verifying the interpretability of TimeKAN in more diverse fields.\", \"questions\": \"Computational complexity: Considering the potential overhead introduced by the MSP module, could you discuss any optimizations you are considering to improve the performance of the model on large datasets or ultra-long sequences?\", \"symbolic_representation_generalization\": \"Have you tested symbolic feature interpretation on other types of data, such as biological or industrial datasets? How common is this interpretable feature in areas outside of your experiment?\", \"long_term_prediction\": \"How does TimeKAN's performance degrade in very long-term predictions? For example, does the model maintain accuracy in predicting the next few weeks, and in these cases, how does it compare to simpler models such as DLinear?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
9JtG4nN7ql
An Optimal Discriminator Weighted Imitation Perspective for Reinforcement Learning
[ "Haoran Xu", "Shuozhe Li", "Harshit Sikchi", "Scott Niekum", "Amy Zhang" ]
We introduce Iterative Dual Reinforcement Learning (IDRL), a new method that takes an optimal discriminator-weighted imitation view of solving RL. Our method is motivated by a simple experiment in which we find training a discriminator using the offline dataset plus an additional expert dataset and then performing discriminator-weighted behavior cloning gives strong results on various types of datasets. That optimal discriminator weight is quite similar to the learned visitation distribution ratio in Dual-RL; however, we find that current Dual-RL methods do not correctly estimate that ratio. In IDRL, we propose a correction method to iteratively approach the optimal visitation distribution ratio in the offline dataset given no additional expert dataset. During each iteration, IDRL removes zero-weight suboptimal transitions using the learned ratio from the previous iteration and runs Dual-RL on the remaining subdataset. This can be seen as replacing the behavior visitation distribution with the optimized visitation distribution from the previous iteration, which theoretically gives a curriculum of improved visitation distribution ratios that are closer to the optimal discriminator weight. We verify the effectiveness of IDRL on various kinds of offline datasets, including D4RL datasets and more realistic corrupted demonstrations. IDRL beats strong Primal-RL and Dual-RL baselines in terms of both performance and stability, on all datasets.
[ "imitation learning", "offline RL", "deep RL", "dual RL" ]
Accept (Poster)
https://openreview.net/pdf?id=9JtG4nN7ql
https://openreview.net/forum?id=9JtG4nN7ql
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yR0QGctlw9", "wOajmJWKay", "rwlaoUx0nL", "b0fRfWdfTt", "YKVUbmzgBn", "W8aDxPxx9f", "TxzX1SvMSw", "Ts1uwTSAsp", "SqVM4Qnksh", "RPRcuP6y6y", "QG9wzxiHCQ", "ORNMnS9z1B", "NozbXKErAB", "K6h02pvaLt", "FkZif8mPrN", "FiMDZzQS8n", "FPGTOSACFs", "6DiAHiSNjY", "4s2dqLrNWx", "43G49eeRyE", "2MeNA2A4LF", "1IpS6I8de6", "0LfEqKfmbu" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731997653477, 1733249842621, 1730720732427, 1733122762886, 1732141542439, 1737523417767, 1731968262394, 1729952498099, 1733226040012, 1729755361137, 1734954942839, 1731998858061, 1731968084533, 1732226729187, 1732574602435, 1730679804233, 1732195486218, 1733209826024, 1731968184172, 1732137951631, 1733121553898, 1732563504135, 1731967753819 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission838/Reviewer_DNBG" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_DfDQ" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_DNBG" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_DfDQ" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_msFL" ], [ "ICLR.cc/2025/Conference/Submission838/Area_Chair_K5X2" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission838/Reviewer_oApW" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_oApW" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_msFL" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_msFL" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Reviewer_oApW" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ], [ "ICLR.cc/2025/Conference/Submission838/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response after rebuttal\", \"comment\": \"I greatly appreciate the authors' comprehensive response. They have effectively addressed my initial concerns and provided additional details that considerably improve the quality of the paper. As a result, I am increasing my score with expectations that the authors will supplement the additional details (e.g., evidence of divergence, math details) in the revised manuscript.\"}", "{\"comment\": \"We thank the reviewer for the suggestion, but note that at the time this is posted more changes are not allowed to the manuscript by ICLR guidelines. IDRL is not complicated to implement despite requiring two additional networks. We provide pseudocode in the main paper and our implementation details are available in detail in Appendix C where we instantiate f with chi-square divergence. We will add code snippets to the appendix to make the implementation more clear and release code when further modifications are allowed to the submission.\"}", "{\"summary\": \"The paper presents a new framework, Iterative Dual-RL (IDRL), which utilizes an optimal discriminator-weighted imitation approach to enhance offline reinforcement learning (RL). This method iteratively refines the dataset to approximate the optimal visitation distribution by filtering out suboptimal transitions, thus aiming to overcome limitations of previous Dual-RL methods. 
IDRL is evaluated on D4RL benchmarks and several corrupted datasets, showing promising improvements in stability and performance over existing offline RL methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\tThe motivation of this paper, which tries to combine offline RL with expert datasets, is interesting and meaningful.\n2.\tThe proposed IDRL offers a novel discriminator-weighted imitation view that extends Dual-RL to better handle offline datasets by iteratively optimizing the dataset. \n3.\tDetailed theoretical derivations and empirical validations make the methodology clear and support the proposed approach\u2019s effectiveness. \n4.\tThe empirical results show that IDRL outperforms Primal-RL and existing Dual-RL methods on various benchmarks, indicating IDRL\u2019s superior policy performance and dataset filtering effectiveness.\", \"weaknesses\": \"1.\tThis paper misses some literature in RL trained with weighted loss, such as EDP and QVPO [1, 2].\n\n[1] Kang B, Ma X, Du C, et al. Efficient diffusion policies for offline reinforcement learning[J]. Advances in Neural Information Processing Systems, 2024, 36.\n\n[2] Ding S, Hu K, Zhang Z, et al. Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization[J]. arXiv preprint arXiv:2405.16173, 2024.\", \"questions\": \"1.\tThe reviewer believes the author should provide more explanation on how the additional variance introduced by the training of the U and W networks affects the overall stability of the algorithm.\n2.\tIn Algorithm 1, the reviewer wonders whether the W network is updated based on (12) rather than (10) on line 10? 
\\n3.\\tThe reviewer is confused about the value of M used in the experiments, and considers further clarification is needed here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer once again for their time and effort in providing valuable feedback. We respectfully note that many papers receive improved scores during the rebuttal period after addressing reviewers\\u2019 concerns. Since there appear to be no remaining issues across all reviews for this paper, and considering your assessment of its soundness as excellent and its contributions as novel, we kindly ask if you might consider revisiting your score to further support this work, if you find it appropriate. We understand that such a request may seem unusual, but given that the reviewer who gave a score of 5 has not responded, the current statistics place our paper in a highly borderline position. We truly appreciate your understanding and consideration.\"}", "{\"comment\": \"Thank you for your reply. The stochasticity of the environment is not an issue. If the environment is stochastic, the training data will naturally reflect this. The filtering process ensures that **all optimal states visited by the optimal policy** are retained in the data, guaranteeing that the policy will not encounter states that were filtered out. In other words, IDRL preserves transitions within the optimal state-action distribution, regardless of whether the environment\\u2019s dynamics are deterministic or stochastic.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"We thank the reviewer for their thoughful review of our work. We address the reviewers questions and concerns below:\\n\\n>I believe the performance of IDRL is not stellar. 
Considering that the algorithm requires multiple dataset filtering steps, the performance gains from extra computation might not necessarily suggest the significance of the results.\n\nNote that on almost all tasks, IDRL matches or outperforms previous SOTA Primal-RL methods and greatly outperforms all Dual-RL methods. We acknowledge the additional computational cost brought by IDRL; however, we want to highlight that the core contribution of the paper is providing **the first**, **optimal discriminator-weighted** imitation view of solving the offline reinforcement learning problem and showing the promise of correctly using Dual-RL methods. These new findings could potentially lead to a new trend in the RL community.\n\n\n>Since it works with filtering, the algorithm might fail in scarcity of data. In MuJoCo, single demonstration imitation learning is a standard setting. In this case, I suspect IDRL's performance will converge with (single demonstration) behavior cloning performance.\n\nWe respectfully disagree with this argument. Although IDRL leverages imitation, it does not operate at the trajectory level. Instead, IDRL could do stitching: extracting useful parts from different trajectories\u2014those within the optimal distribution\u2014effectively forming a superior demonstration compared to any single existing trajectory in the offline dataset. For example, in MuJoCo, Behavior Cloning (BC) on selected high-return trajectories in the dataset (as shown by the X%-BC results in the paper) performs significantly worse than IDRL.\n\n\n>An analysis scalability (such as computation costs) of various offline RL tasks should be reported and experimentally validated.\n\nWe acknowledge the increased computational requirements of IDRL, as noted in the limitations section of our paper. To provide additional clarity, we present a runtime comparison of IDRL with several baseline offline RL algorithms below. 
On D4RL datasets, IDRL introduces only one additional iteration. During each iteration, we train $Q$, $V$ for half the training steps and $U$, $W$ for the other half. As a result, the total runtime is approximately twice that of IQL or SQL.\n\n| | IQL/SQL | CQL | Diffusion-QL | IDRL |\n| --- | --- | --- | --- | --- |\n| Run Time (training+evaluation) | ~5h | ~8h | ~8h | ~11h |\n\n\n>Is the denoising process of diffusion-based offline RL (such as Diffusion-QL) similar to filtering datasets? How is IDRL conceptually different?\n\nThe success of diffusion-based methods relies on both accurate behavior modeling and effective sampling guidance. However, both steps can introduce errors. For instance, [1] demonstrates that diffusion behavior policies may produce potential out-of-distribution (OOD) actions, leading to overestimation errors in the guided sampling process. \n\nIDRL is related to these methods but differs significantly. IDRL is grounded in discriminator-weighted imitation learning, which avoids the use of potential OOD actions during training. This distinction ensures that IDRL does not inherit the overestimation errors associated with diffusion-based approaches.\n\n[1] [Diffusion-DICE: In-Sample Diffusion Guidance for Offline Reinforcement Learning](https://arxiv.org/pdf/2407.20109)\n\nPlease let us know if any further questions remain. We hope the reviewer can reassess our work with these clarifications.\"}", "{\"summary\": \"The authors point out that current Dual-RL methods incorrectly estimate the visitation distribution ratio. As a remedy, they propose a method to recover the true visitation distribution ratio by solving an OPE problem using Fenchel-Rockafellar duality. Additionally, they introduce a method to iteratively refine the offline dataset using the learned distribution ratio. They theoretically analyze the performance bound and the monotonic improvement property of the filtering procedure. 
The authors perform experiments on a gridworld toy case and the D4RL benchmarks to validate their claims.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work theoretically demonstrates that semi-gradient Dual-RL only learns an action-distribution ratio, and derives a method for recovering the full state-action visitation ratio with tractable objectives.\\n2. The proposed iterative filtering procedure is supported by theoretical analysis and empirical evaluations.\", \"weaknesses\": \"1. There should be a comparison of compute costs (e.g., run time, memory usage), given the substantial amount of modifications introduced (e.g., additional updates and iterative dataset refinement).\\n2. The proof of Theorem 1 lacks clarity for readers not familiar with Fenchel-Rockafellar duality, as the authors have omitted some details (e.g., solving for $w^{*}(s)$). A more detailed explanation would be helpful.\\n3. Line 240 states that Deep RL algorithms are prone to overestimation errors caused by fragmented trajectories. And the authors claim that the proposed method avoids this issue (Line 448-449). However, this fragmentation effect does not seem to be supported by any theoretical/empirical analysis in the paper or in a previous work. Please cite relevant texts if any.\", \"questions\": \"1. Equation 12 shows that $w^{*}(s, a) = w^{*}(s) * w^{*}(a | s)$, which implies that state-action pairs filtered by $w^{*}(a | s)$ would also be filtered by $w^{*}(s, a)$. If $w^{*}(a | s)$ produces fragmented trajectories during dataset refinement, the trajectories produced using $w^{*}(s, a)$ will only be more fragmented. Also, from looking at Figure 2(e), it appears that using $w^{*}(s, a)$ produces incomplete trajectories as well. How does correcting the visitation distribution address the fragmented trajectory problem?\\n2. Line 240 states that Deep RL algorithms are prone to overestimation errors caused by fragmented trajectories. 
Is this conclusion based on a previous study? To the best of my knowledge, the \\\"stitching\\\" challenge (which is a task design factor of D4RL) requires offline RL algorithms to assemble sub-trajectories in order to solve a task [1].\\n3. (Line 263, 283) Which equation are you referring to? I assume it is Equation 9?\\n4. Does the \\\"IDRL w/ $w^{*}(a | s)$\\\" result in Table 2 apply the iterative refinement procedure? If so, does iterative refinement contribute negatively with $w^{*}(a |s )$? Without distribution correction, one might expect the algorithm to produce results similar to conventional Dual-RL methods (e.g., IQL). However, the average score in Table 2 seems to be significantly worse (56.8 vs. 77.8 of IQL on Mujoco). A more detailed ablation study may help.\\n\\n[1] Fu, Justin, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. 2020. \\u201cD4RL: Datasets for Deep Data-Driven Reinforcement Learning.\\u201d _arXiv Preprint arXiv:2004.07219_. [http://arxiv.org/abs/2004.07219](http://arxiv.org/abs/2004.07219).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your reply. Although this work offers a novel discriminator-weighted imitation view, it looks a little bit difficult to realize and needs extra costs for U and W network. In that case, I keep my score.\\n\\nBesides, I suggest the authors add a practical implementation part and instantiate the convex function $f$ as some common metrics (e.g., KL divergence). This will make it more straightforward to realize IDRL.\"}", "{\"summary\": \"This paper proposes Iterative Dual-RL (IDRL), a new algorithm for solving offline RL. The paper claims that an iterative \\\"filtering weight\\\" for imitation learning outperforms other offline RL methods. 
This point can be understood with the well-known dual formulation of RL, and by iterative self-distillation, the authors argue that RL can gradually correct state-action distribution between train and expert datasets. To validate this claim, the authors have provided theoretical justification for IDRL techniques and experimental results to support their claims.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written; the core contribution is straightforward to understand and supported by the theoretical arguments (I did not fully read the proof line-by-line).\\n2. This paper offers a novel perspective on offline RL. The authors successfully demonstrated that combining the dual formation of RL and imitation learning algorithms brings synergy to solve various tasks.\\n3. The paper contains realistic imitation learning experiments with corrupted datasets.\", \"weaknesses\": \"1. I believe the performance of IDRL is not stellar. Considering that the algorithm requires multiple dataset filtering steps, the performance gains from extra computation might not necessarily suggest the significance of the results.\\n2. Since it works with filtering, the algorithm might fail in scarcity of data. In MuJoCo, single demonstration imitation learning is a standard setting. In this case, I suspect IDRL's performance will converge with (single demonstration) behavior cloning performance.\\n3. An analysis scalability (such as computation costs) of various offline RL tasks should be reported and experimentally validated.\", \"questions\": \"1. Is the denoising process of diffusion-based offline RL (such as Diffusion-QL) similar to filtering datasets? How is IDRL conceptually different?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Thank you for your submission to ICLR. 
This paper presents Iterative Dual-RL (IDRL), which builds on the work of Dual-RL while trying to mitigate two issues: (1) difficulty in accurately estimating the state-action visitation ratio, and (2) learning a regularized, rather than optimal, visitation distribution ratio. IDRL aims to instead iteratively filter out suboptimal data, and then perform imitation learning on the remaining data close enough to the optimal visitation distribution.\\n\\nReviewers agree that the problem setting is well motivated, the proposed IDRL algorithm is novel, and the paper is well-written and clearly presented. On the other hand, multiple reviewers also point out concerns involving the increased computational costs of the method, compared with baseline methods. Regardless, a majority of the reviewers\\u2019 concerns were sufficiently addressed during the rebuttal phase, and I therefore recommend this paper for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors answered all of the reviewers\\u2019 questions thoroughly, and also provided an additional runtime comparison to shed light on the computational cost of their method. Reviewers, on the whole, appreciated these updates and most felt their concerns were addressed, which led to an increase in scores.\"}", "{\"comment\": \"Thanks for raising the score!\"}", "{\"comment\": \"We thank the reviewer for their time and effort in reviewing our paper and for the constructive comments. Below, we address the concerns in detail:\\n\\n>Further, this method seems to be computationally more expensive compared to other methods. It would be nice if this was discussed.\\n\\nWe acknowledge the increased computational requirements of IDRL, as noted in the limitations section of our paper. To provide additional clarity, we present a runtime comparison of IDRL with several baseline offline RL algorithms below. On D4RL datasets, IDRL introduces only one additional iteration. 
During each iteration, we train $Q$, $V$ for half the training steps and $U$, $W$ for the other half. As a result, the total runtime is approximately twice that of IQL or SQL.\\n\\n| Algorithm | IQL/SQL | CQL | Diffusion-QL | IDRL\\n| --------------- | ----| ----| ---------- | ---- \\n|Run Time (training+evaluation) | ~5h| ~8h| ~8h | ~11h\\n\\n>How does filtering the dataset in round k, change the approximation of previously removed s,a pairs in later rounds?\\n\\nTransitions filtered at iteration $k$ are no longer used in subsequent iterations. These transitions do not belong to the optimal distribution and are therefore excluded from further approximations.\\n\\n\\n>How does this policy generalize to states that were filtered out?\\n\\nThank you for this insightful question. This issue is less significant if the initial state distribution during deployment remains similar to the training data, which is typically the case in standard scenarios. States filtered during training will not be visited by the policy because they lie outside the distribution of the optimal policy. However, as we noted in the limitations section, we acknowledge that generalization issues may arise when the initial state distribution during deployment deviates significantly from the offline data. Note that, in this case, offline RL algorithms that do not filter the dataset may also encounter issues such as suboptimality of the dataset actions if the behavior regularization weight is strong, or overestimation caused by the value function if the behavior regularization weight is weak.\\n\\nPlease let us know if any further questions remain. 
We hope the reviewer can reassess our work with these clarifications.\"}", "{\"comment\": [\"We thank the reviewer for the reply. Regarding your questions on the theory, we provide the responses below:\", \"Proposition 1 proves that current Dual-RL methods learn the action distribution ratio $w^*(a|s)$ rather than the correct state-action distribution ratio $w(s,a)$, and Theorem 1 provides a way to recover $w^*(s)$ from $w^*(a|s)$ and we leverage that to get the correct state-action distribution ratio by following $w^*(s,a) = w^*(s) w^*(a|s)$.\", \"Theorem 2 doesn't manifest an optimization problem for $w^*(a|s)$. Instead, it addresses the biased estimator issue in Obj.(9) and offers an unbiased estimator as a solution.\", \"We acknowledge that this theorem builds upon \\u201cTheorem 3 in Li et al. (2024).\\u201d However, our contribution lies in how we leverage it to derive a theoretical analysis (performance bound) for IDRL. This extension is non-trivial since our work focuses on a different setting (offline RL) compared to Li et al. (2024) (offline imitation learning with supplemental datasets). Moreover, we propose Theorem 4, showing how the iterative process in IDRL minimizes the performance bound, which is also novel.\", \"To recall, the contribution of this paper is introducing the first optimal discriminator-weighted imitation view of solving offline RL; we propose a new algorithm based on Dual-RL to implement this idea and give theoretical and empirical analysis to it.\"]}", "{\"title\": \"response to rebuttal\", \"comment\": \"Thank you for clearing up my misunderstanding. 
I am still wondering about this question:\\n\\n> How does this policy generalize to states that were filtered out?\\n\\nAdding small experiments to show at least some general robustness to states outside of the normal trajectory would be helpful.\"}", "{\"summary\": \"This paper presents IDRL (iterative Dual RL), an algorithm for dual reinforcement learning that aims to solve two issues in current dual RL methods -- the semi gradient update and data regularized policy extraction. IDRL is a method which iteratively refines the dataset based on a trained discriminator. The paper proves both a theoretical iterative update guarantee and empirically shows that this method has superior performance compared to primal RL and dual RL offline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"This paper does a good job with outlining the main issues with current dual RL algorithms and provides a theoretically grounded solution.\", \"While the idea is simple, it is well explained and well founded.\", \"The proposed method also has strong empirical results.\"], \"weaknesses\": [\"It is unclear whether this method will suffer from poor generalization to other states which may have been ignored during dataset filtering.\", \"Further, this method seems to be computationally more expensive compared to other methods. 
It would be nice if this was discussed.\"], \"questions\": [\"How does this policy generalize to states that were filtered out?\", \"How does filtering the dataset in round $k$, change the approximation of previously removed s,a pairs in later rounds?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Rebuttal\", \"comment\": [\"Dear authors, I thank you for your response; now I have a better understanding of IDRL's performance.\", \"I have a few questions on the theory.\", \"Theorem 1 says that we need a decoupling approach from Eq. (5) s.t. $\\\\omega^\\\\ast(s,a) = \\\\omega^\\\\ast(s) \\\\omega^\\\\ast(a|s)$ based on Proposition 1. Is this interpretation correct?\", \"Theorem 2 manifests an optimization problem for conditional visitation $\\\\omega^\\\\ast(a|s)$ from Theorem 1. Is this correct?\", \"Theorem 3 is \\\"highly built on the Theorem 3 in Li et al. (2024).\\\" What could be the technical novelty in this theorem proving?\"]}", "{\"comment\": \"Dear authors,\\n\\nThank you for your detailed response. Based on your response and the reviews from other reviewers, I would like to take some additional time to carefully reevaluate the submission and finalize my recommendation, and further discuss with the other reviewers.\"}", "{\"comment\": \"We thank the reviewer for their thoughtful review of our work. We address the reviewer's questions and concerns below:\\n\\n>There should be a comparison of compute costs (e.g., run time, memory usage), given the substantial amount of modifications introduced (e.g., additional updates and iterative dataset refinement).\\n\\nWe acknowledge the increased computational requirements of IDRL, as noted in the limitations section of our paper. To provide additional clarity, we present a runtime comparison of IDRL with several baseline offline RL algorithms below. 
On D4RL datasets, IDRL introduces only one additional iteration. During each iteration, we train $Q$, $V$ for half the training steps and $U$, $W$ for the other half. As a result, the total runtime is approximately twice that of IQL or SQL.\\n\\n| | IQL/SQL | CQL | Diffusion-QL | IDRL\\n| --------------- | ----| ----| ---------- | ---- \\n|Run Time (training+evaluation) | ~5h| ~8h| ~8h | ~11h\\n\\n\\n>How does correcting the visitation distribution address the fragmented trajectory problem?\\n\\nThe reviewer is correct in noting that states filtered with $w^*(a|s)$ are also filtered out by $w^*(s, a)$. The key distinction is that $w^*(a|s)$ does not account for the dynamics of the environment, resulting in a much higher degree of incompleteness (trajectory-level incomplete). Theoretically, filtering with $w^*(s, a)$ guarantees a valid visitation distribution (trajectory-level complete), ensuring that there are no trajectories with missing transitions. As the reviewer points out in Fig. 2e, some fragmentation is observed empirically, but the amount is much smaller compared to using $w^*(a|s)$, and it does not lead to performance collapse during training.\\n\\n\\n>Line 240 states that Deep RL algorithms are prone to overestimation errors caused by fragmented trajectories. Is this conclusion based on a previous study? To the best of my knowledge, the \\\"stitching\\\" challenge (which is a task design factor of D4RL) requires offline RL algorithms to assemble sub-trajectories in order to solve a task [1].\\n\\nWe agree with the reviewer that offline RL algorithms can stitch together trajectories to solve a task. However, such algorithms are typically trained on complete trajectories with few or no missing transitions, ensuring that every transition (except the last) is supported by at least one subsequent transition and has a valid Bellman backup target. 
In our work, fragmented trajectories refer to unsupported transitions that lack a valid backup target due to dataset filtering.\\n\\nTo support our claim that fragmented trajectories lead to divergence issues, we ran IQL on randomly selected **transitions** from the original complete dataset. Specifically, we randomly selected 10K transitions from medium and medium-replay datasets in the Hopper and Walker environments and observed the learned value functions of IQL. The results, available at [link 1](https://ibb.co/bNvpF2k), [link 2](https://ibb.co/R3kwqm6), [link 3](https://ibb.co/tDVRhWw), and [link 4](https://ibb.co/Ykt1ThM), show divergence or overestimation in these cases.\\n\\n>(Line 263, 283) Which equation are you referring to? I assume it is Equation 9?\\n\\nThanks for pointing this out, we have corrected the reference to Equation 9.\\n\\n>Without distribution correction, one might expect the algorithm to produce results similar to conventional Dual-RL methods (e.g., IQL). However, the average score in Table 2 seems to be significantly worse (56.8 vs. 77.8 of IQL on Mujoco). A more detailed ablation study may help.\\n\\nThe ablation study uses two iterations to isolate the effect of filtering with $w(s, a)$. As mentioned earlier, running the second iteration with transitions filtered by $w(a|s)$ tends to cause divergence and degrade performance. Results for running one iteration without distribution correction can be inferred from the performance of f-DVL in Dual-RL [1], which closely aligns with IQL:\\n\\n| | IQL | f-DVL | IDRL w/ w(a\\\\|s) (f-DVL with M=2)\\n| --------------- | ----| ---- | ---------- \\n|Mean Score (Mujoco) | 77.8 | 75.7 | 56.8\\n\\n[1] [Dual RL: Unification and New Methods for Reinforcement and Imitation Learning](https://arxiv.org/pdf/2302.08560)\\n\\n\\n>The proof of Theorem 1 lacks clarity for readers not familiar with Fenchel-Rockafellar duality, as the authors have omitted some details (e.g., solving for w(s)). 
A more detailed explanation would be helpful.\\n\\n\\nThank you for pointing this out. The solution for $w(s)$ can be easily derived by setting $d^D$ to $d^*$ in the first term of Obj.(9). This is valid based on Lemma 2, which holds true for $d^*$. Taking the derivative and setting it to zero provides the solution for $w(s)$.\\n\\n\\nPlease let us know if any further questions remain. We hope the reviewer can reassess our work with these clarifications.\"}", "{\"title\": \"response to rebuttal\", \"comment\": \"Thank you for addressing my questions. I will keep my score. I still believe that policy generalization to filtered out states would be a problem that could be further explored in this work. Indeed, many environments are stochastic which can lead to polices being in states that were filtered out.\"}", "{\"comment\": \"We thank the reviewer for their helpful suggestion, which has greatly contributed to improving our paper.\\n\\nWe would like to provide an explanation **from the perspective of trading off between correct generalization and broader generalization to unseen states**, which may offer a more insightful answer to your question. Our paper presents the (discriminator-weighted) imitation learning perspective on solving offline RL. Our main claim is that while using more data (i.e., without filtering states) may increase robustness to unseen states, it comes at the risk of incorrect generalization due to low-quality transitions. Conversely, IDRL uses less data, which might reduce robustness to unseen states, but ensures that the generalization is correct. This trade-off is explicitly reflected in the theoretical analysis of IDRL, where the method iteratively finds the optimal balance. The theoretical bound for IDRL is expressed as:\\n\\n$$\\nV\\\\left(\\\\pi\\\\right) = V\\\\left(D\\\\right) - \\\\mathcal{O}\\\\left(\\\\frac{|\\\\mathcal{S}| H^2}{N_{D} + ... 
}\\\\right).\\n$$\\n\\nThis bound illustrates that the performance depends on both the size and the quality of the filtered dataset.\\n\\nDue to time constraints, we conducted experiments on the toy case presented in the paper to empirically validate this claim. Specifically, we compared the policies learned by using $w(a | s)$ (dualRL) and $w(s, a)$ (IDRL) on dataset states. Results are here (https://ibb.co/4j67Vth). The results demonstrate that IDRL ensures correct generalization, which is more critical than achieving broader but incorrect generalization (dualRL). Importantly, these findings also extend to more complex settings, as evidenced by the experimental results in our paper. Since behavior cloning errors caused by function approximation often lead to encountering unseen states during testing, the superior performance of IDRL highlights its ability to achieve correct generalization.\\n\\nIt is worth noting that other approaches based on weighted behavior cloning also rely on using only part of the dataset (e.g., assigning small or zero weights to certain transitions when training the policy). For instance:\\n - SQL (https://openreview.net/forum?id=ueYYgo2pSSU) derives a sparse learning objective in principle.\\n - IQL assigns near-zero weights when the advantage is negative.\\n - Another ICLR 2025 submission (https://openreview.net/forum?id=elTJBP7Fbv, although achieving high review scores (8866) while performing worse than IDRL), also assigns some transition weights to zero via a new objective.\\n\\nWhat differentiates IDRL from these algorithms is that IDRL provides the first imitation learning perspective to justify why filtering data is both useful and necessary.\"}", "{\"comment\": \"Please let us know if you have any further questions as the discussion period is ending soon. 
We would appreciate it if the reviewer could reassess our work in light of the clarifications provided and with a deeper understanding of our contributions.\"}", "{\"comment\": \"We thank the reviewer for the effort engaged in the review phase and the constructive comments. Regarding the concerns, we provide the detailed responses separately as follows.\\n\\n>This paper misses some literature in RL trained with weighted loss, such as EDP, and QVPO [1, 2].\\n\\nWe appreciate the reviewer bringing this to our attention and will include references to these works in the revised version of the paper. However, we would like to emphasize that the primary contribution of our paper is introducing **the first**, **optimal discriminator-weighted** imitation view of solving offline reinforcement learning, which is distinct from the methodologies presented in the mentioned literature.\\n\\n>The reviewer believes the author should provide more explanation on how the additional variance introduced by the training of the U and W networks affects the overall stability of the algorithm.\\n\\nThank you for raising this important point. Based on our empirical evaluations, which span 24 datasets across 7 environments, we did not observe any divergence or instability during the training of $U$ and $W$. Theoretically, the learning of $U$ and $W$ is expected to be stable because their objectives are convex, which under mild assumptions guarantees convergence to the optimal solution. Additionally, the training processes for $U$ and $W$ are independent of the training of $Q$ and $V$; this can be viewed as a second phase of learning akin to training another set of $Q$ and $V$, which is similarly stable.\\n\\nThat said, we acknowledge that potential instabilities may arise in broader cases due to the function approximation setting, where even $Q$-learning does not have guaranteed convergence. 
\\n\\n\\n>In Algorithm 1, the reviewer wonders whether the W network is updated based on (12) rather than (10) on line 10?\\n\\nWe apologize for the typo in Algorithm 1 and thank the reviewer for pointing it out. The W network is updated based on (10), not (12).\\n\\n\\n>The reviewer is confused about the value of M used in the experiments, and considers further clarification is needed here.\\n\\nSorry for the confusion, $M$ is the iteration number used in IDRL, as mentioned in Algorithm 1, line 2. We use M=2 on D4RL datasets and M=3 on corrupted demonstrations, we have made it more clear in the revised version of the paper.\\n\\n\\nPlease let us know if any further questions remain. We hope the reviewer can reassess our work with these clarifications\"}" ] }
9JE3HogPCw
Hadamard Representations: Augmenting Hyperbolic Tangents in RL
[ "Jacob Eeuwe Kooi", "Mark Hoogendoorn", "Vincent Francois-Lavet" ]
Activation functions are one of the key components of a deep neural network. The most commonly used activation functions can be classed into the category of continuously differentiable (e.g. tanh) and piece-wise linear functions (e.g. ReLU), both having their own strengths and drawbacks with respect to downstream performance and representation capacity through learning (e.g. measured by the number of dead neurons and the effective rank). In reinforcement learning, the performance of continuously differentiable activations often falls short compared to piece-wise linear functions. We provide insights into the vanishing gradients associated with the former, and show that the dying neuron problem is not exclusive to ReLUs. To alleviate vanishing gradients and the resulting dying neuron problem occurring with continuously differentiable activations, we propose a Hadamard representation. Using deep Q-networks, proximal policy optimization and parallelized Q-networks in the Atari domain, we show faster learning, a reduction in dead neurons and increased effective rank.
[ "Representation Learning", "Reinforcement Learning", "Activation Functions" ]
Reject
https://openreview.net/pdf?id=9JE3HogPCw
https://openreview.net/forum?id=9JE3HogPCw
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zo4sfTkX0z", "x2ULiLLEZ4", "vF95Pqn9Id", "rGz9RqnO2h", "ovjD6Dd4GA", "oZJkyNnLAc", "koVeeYN8Lt", "igKJNpAVyF", "fDUf4MQAMM", "e4R83eIPCa", "bIFJLjMhoX", "aYMZlWMhc6", "ZYeZVb6YzA", "Y44z8G6rJo", "WLCN4KopXb", "PxupFygKAx", "McI3Le7PXU", "MAy09XAMt5", "LtjGe4Q1v5", "KKxOQ3LVvD", "JsuL0cIf06", "IPZU8nBnjL", "HUr4D5T7qf", "AErjWetUU6", "5oqvVmpsv5", "5CDgw1MLjE", "4mFyJC5W0Q" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732543026868, 1732198403347, 1730668412758, 1732482522707, 1733216535025, 1733227954863, 1732790794571, 1730697316724, 1732657301454, 1732195957823, 1732875306758, 1732908787862, 1730748487631, 1732543622560, 1732201742518, 1737523469901, 1732790744897, 1732843693160, 1733140072806, 1730682707919, 1732614198775, 1732195198202, 1732791547266, 1732791875053, 1732194575217, 1732791390474, 1735371033261 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_Vagz" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_Vagz" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_HMtR" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_HMtR" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_S1DP" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_S1DP" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_S1DP" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_Vagz" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_3nuW" ], [ "ICLR.cc/2025/Conference/Submission1811/Reviewer_HMtR" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Authors" ], [ "ICLR.cc/2025/Conference/Submission1811/Area_Chair_3VcY" ] ], "structured_content_str": [ "{\"title\": \"Second response to Reviewer Vagz\", \"comment\": \"We thank the reviewer for the quick response.\\n\\n*1*. As this is only a minor change to the paper, we have no problem with the recommended phrasing and have changed it accordingly in line 133.\\n\\n*4*. We are now running tests comparing against Concatenated ReLU on the 51-game atari domain for PQN. We will update the new Fig. 10 tomorrow with the Crelu added. \\n\\nFurthermore, as reviewer 3nuW had also recommended, we have run additional evaluations in PQN using Tanh Hadamard on **all** the hidden layers in the network. Interestingly, we have noticed a significant increase over the baseline PQN in the Median Human-Normalized scores over 51 Atari environments. The updated figure can be seen in Fig.10 on page 9. 
(Note that, as said before, we will add the Crelu scores tomorrow).\\n\\nFinally, we have now added a figure in Appendix C.5 showing what happens when, in DQN on 8 atari environments, the baseline ReLU is combined with normal hyperbolic tangent CNN activations.\"}", "{\"title\": \"Response to Reviewer 3nuW\", \"comment\": \"We thank reviewer 3nuW for the review of our paper.\\n\\n1. \\u201c*The lack of a diverse range of network architectures tested limits the generalizability of the findings. The paper should test HR on various networks and test HR on different layers. It would be better if HR were a universal activation function suitable for all layers, rather than only suitable for the last layer.*\\u201d\\n\\nIn the Limitations section on line 533, we have explained that preliminary experiments showed a very strong preference for ReLU activations in the convolutional section of our network. As the networks used for most Pixel-based environments only have 1 hidden linear layer, this is the reason for our choice of this final hidden layer. Furthermore, we believe that this is the most important hidden layer as the Q-values or policies are usually linearly extracted from here. However, we agree with the reviewer that it would make sense to test the Hadamard representation on more layers. We are running additional experiments using a Hadamard representation on our convolutional hidden layers as well. We hope to add this to our Appendix at the beginning of next week.\\n\\n2. \\u201c*Absence of detailed comparisons with other novel activation functions that have emerged recently.*\\u201d\\n\\nWe have conducted experiments on our original 8 Atari games comparing the Hadamard representation to the novel Rational [1] activation function. We have updated Figure 7a in the main paper comparing against this activation. Furthermore, we have updated figure 19 in Appendix D.2 to show the individual runs of the Rational activation.\\n\\n3. 
\\u201c*The paper did not conduct tests on diverse tasks, e.g. MuJoCo, DMControl. using only Atari does not demonstrate that this is a universal issue in reinforcement learning.*\\u201d\\n\\nPreliminary experiments have tested our hypothesis on a pixel maze environment, after which we experimented in the Atari domain. We respectfully feel that state-based continuous environments that do not support DQN are outside the scope of our paper. However, to make our evaluation more complete, we have done additional experiments using PQN [2] on the (nearly) full Atari suite of 51 games. The median Human-Normalized scores can be found in our updated manuscript in Figure 10.\\n\\n4. \\\"*As mentioned in Line 279, the experiments are all performed on 8 Atari games. Testing on only 8 Atari games does not sufficiently demonstrate a stable improvement in performance.*\\\"\\n\\nSee our response to 3.\\n\\n5. \\\"*In the PREVENTING DYING NEURONS section in Line 191, the analysis is not related to reinforcement learning task at all. So, claiming that HR is an activation function particularly suitable for reinforcement learning feels somewhat forced.*\\\"\\n\\nWe agree that this might not have been evident from the old manuscript. We now have added a section in our introduction which explicitly states that the dying neuron phenomenon is stronger in RL as compared to supervised learning. We leave further tests in supervised learning for future work. However, due to reviewer HMtR, we have noticed that modern LLMs also use (half-linear) multiplicative representations. We have cited the paper using these architectures [3] in the algorithm section.\\n\\n*Questions*\\n*1*. See our answer to Weakness 2.\\n\\n*2*. In our opinion, the biggest challenges that arise might be finding the perfect combination of activation functions to use in a Hadamard representation. As seen in related work in LLMs [4], there are a lot of possible combinations. 
Furthermore, as our paper shows, the ReLU does not gain the same benefits as the hyperbolic tangent. As most algorithms use ReLUs in their network architecture, changing all these ReLUs to Hadamard hyperbolic tangents fundamentally changes the representation layers and might require different hyperparameter settings (learning rate, network width).\\n\\n*3*. See our answer to Weakness 3.\\n\\nWe hope to have addressed most of the reviewer's concerns. We kindly ask the reviewer to review the updated manuscript in light of our revisions, and reconsider the assigned score. If there are any more questions, please do not hesitate to send another message.\\n\\n\\n[1] Delfosse et al. Adaptive Rational Activations to Boost Deep Reinforcement Learning. ICLR 2024\\n\\n[2] Gallici et al. Simplifying Deep Temporal Difference Learning. arXiv 2024\\n\\n[3] Dauphin et al. Language modeling with gated convolutional networks. PMLR 2017\\n\\n[4] Noam Shazeer. GLU Variants Improve Transformer. 2020\"}", "{\"summary\": \"The paper investigates the role of activation functions on performance in reinforcement learning. Continuously differentiable activations, such as Tanh, can have advantages, including bounded output and smooth gradients. Practically, however, ReLU activations are preferred. The authors investigate the reason and identify that the number of saturating tanh activations for all inputs increases with experience throughout the training. This decreases the network\\u2019s representation power (measured by the effective rank). To resolve this problem, the authors introduce Hadamard representations (HR) with Tanh. The authors showed that HR with Tanh experiences less saturation and mitigates loss in effective rank, which also reflects on performance. 
Through a series of experiments, the paper demonstrates the effectiveness of HR with Tanh.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper presents a novel approach to have more effective representations for reinforcement learning. Based on insights from previous works on dying/dormant neurons and their relationship to reduced network capacity, the authors present the Hadamard representations that experience less dormancy and thereby maintain the network representational capacity. I think this research is fundamental and important, and the proposed method is very simple to use and integrate into a wide range of methods, further enhancing its potential impact.\", \"weaknesses\": [\"Some definition inaccuracies:\", \"The authors claim that the dying neuron problem is not exclusive to ReLU but also extends to Tanh and provide a definition where the activation output $|\\\\alpha_i| \\\\approx 1$. This definition is problematic for two reasons: 1) Intuitively, the neuron is not \\u2018dead\\u2019 if its output is not zero, and 2) the definition doesn\\u2019t work for ReLU. Instead, I suggest making this definition about saturating neurons, which would only work for tanh and sigmoid. The figures measuring the fraction of dead units (e.g., Figure 6a) need to reflect the fraction of dead or saturated units instead.\", \"The authors mentioned that the definition of a dead/saturating neuron applies when we test the neuron in all data in the buffer. Is this quantity actually being measured in the experiments? I am asking this because going through all the data in the buffer at each evaluation time seems very computationally expensive, so I was wondering if some approximation is made in measuring this quantity in the experiments section (e.g., sampling a fixed small number of samples from the buffer instead).\", \"Relatively limited empirical evaluation:\", \"The empirical evaluation could be more substantial. 
The authors used 8 Atari environments with only five independent runs (with some overlapping error bars). I encourage the authors to use a larger number of independent runs (e.g., 10) to have more statistically significant results. Adding more environments would be great, although I recognize the hurdle of running extra Atari games.\", \"More relevant activation function baselines are needed. Specifically, concatenated ReLU has been shown to improve plasticity in reinforcement learning [1] in addition to adaptive rational activations [2]. I think adding these two baselines is very relevant to understanding the effectiveness of HR Tanh against other recently investigated activation functions.\", \"**Minor issues:**\", \"The discussion in lines 207-212 needs revising.\", \"In line 103, $r_t$ should be $r_{t+1}$.\", \"In line 205, $h^\\\\prime(x)$ should be $z^\\\\prime(x)$.\", \"\\u201cStronly\\u201d in line 357 is a typo\", \"&nbsp;\", \"&nbsp;\", \"Overall, I would like this paper to be accepted. There are a few fixable issues, so I\\u2019m willing to increase my score if the authors address those issues based on my feedback in a paper revision.\", \"&nbsp;\", \"&nbsp;\", \"**References:**\", \"[1]. Abbas, Z., Zhao, R., Modayil, J., White, A., & Machado, M. C. (2023). Loss of plasticity in continual deep reinforcement learning. Conference on Lifelong Learning Agents (pp. 620-636).\", \"[2]. Delfosse, Q., Schramowski, P., Mundt, M., Molina, A., & Kersting, K. (2024). Adaptive Rational Activations to Boost Deep Reinforcement Learning. International Conference on Learning Representations.\"], \"questions\": [\"The authors mentioned in line 367 that they considered an addition operation instead of the Hadamard product and showed it performs poorly. There was no reason or postulate for why this is the case. 
Can the author provide a probability analysis similar to the one given on page 5?\", \"If the score is normalized with respect to the ReLU activation baseline score, why doesn\\u2019t the final ReLU baseline reach 1 at the end of training in Figure 7a and Figure 9a but slightly less than 1?\", \"Why did the authors use HR Tanh only for the last layer? What happens if you use it in all layers? The authors did mention that the success might not translate to convolutional layers. Can the authors present an experiment showing that and suggest what the reason might be? Additionally, why wasn't HR Tanh applied for the last two fully connected layers in the architecture?\", \"In line 13, \\u201clinear-unit functions\\u201d should be \\u201cpiece-wise linear functions\\u201d\", \"I don\\u2019t see the reason for the paragraph starting at line 467. It is not relevant to the conclusion of the paper, and hence it should be removed.\", \"In some environments, such as Seaquest and SpaceInvaders, HR hurts performance compared to ReLU. Is there a reason for why?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your detailed response!\", \"comment\": \"I would like to thank the authors for their detailed response. Here is my response:\\n\\n**Weaknesses:**\\n\\n1. My issue is that the authors consider saturated and dead units the same while the former may give large output but the slope is zero (e.g., 1) and the latter gives zero output and the slope is zero. They are not the same. I suggest the authors say something like \\u201ccan be classified as saturated or dead if..\\u201d. Thus, we cannot say a saturated ReLU.\\n2. Thank you for explaining this and adding the information in the paper.\\n3. Adding results on more Atari environments is highly appreciated. This increases my confidence in your approach.\\n4. 
Thank you for comparing against rational activation functions. I think comparing against concatenated ReLU needs to be added as well since it shares similarity with your Hadamard Tanh. Concatenated ReLU has been shown to improve plasticity, which reflects on performance in reinforcement learning agents. Please refer to the rational activation paper for comparisons in the single RL tasks domain. The authors can compare against Concatenated ReLU and/or PeLU to show the effectiveness of their Hadamard Tanh. \\n\\n**Questions:**\\n\\n3. I thank the authors for running the additional experiment. Please let me know when the results are ready.\"}", "{\"title\": \"Response to authors\", \"comment\": \"1. The methodology proposed in the original paper was to add Hadamard representations to a single layer. As compared to standard architectures (i.e., ReLU), this seemed to have some performance gains and promise in a small set of handpicked Atari tasks. As detailed in my review, the limited evaluation made me question the method's actual generality across other tasks. The newly added results are, instead, doing something quite different from a methodology perspective: doubling the number of parameters in each layer (something that was never explicitly stated in the previous rebuttal revision). Now, there are so many potential confounding factors that have been added (e.g., faster learning due to more parameters) and downsides (doubling inference time and memory) that it is quite difficult for me to draw any conclusive evaluation of the proposed methodology from these new results. I believe it would have been much more logical for the authors to collect/report the results for their much cheaper original architecture, which I hope would have still shown some improvements.\\n\\n2. 
I am fully aware that there exist some other papers that also evaluate very few seeds (even though these papers, such as Munchausen RL, focus on less stochastic problem settings after much more expensive training runs of hundreds of millions of steps). I do not think that the fact this paper is not the only one with such a limitation invalidates my concerns.\\n\\n3. Table 2 in your paper shows the actual fraction of dead neurons. The relationship seems to me far from quadratic.\"}", "{\"comment\": \"Thank you for your reply.\\n\\n1. We are not doing something different from a methodology perspective. Instead of using our technique on 1/4 layers in DQN and PPO, we have now also tested it on 4/4 layers in PQN. Adding Hadamards to the CNN activation adds negligible parameters, as seen in the difference between row 1 and row 2 on the Table below (We can add this table to the paper, if you'd like). Furthermore, as we have stated before, CReLU also doubles the parameters but reduces performance in PQN compared to the baseline. 
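To make the counts in the table below easy to verify, here is a back-of-the-envelope sketch (our own illustration, assuming the standard Nature-DQN filter sizes on 4 stacked 84x84 frames, PQN's LayerNorm with scale and shift on each hidden MLP branch, and a 6-action game; these assumptions are not stated in the table itself):

```python
def conv_params(c_in, c_out, k):
    # weights + biases of one square conv layer
    return c_in * c_out * k * k + c_out

def dense_params(d_in, d_out):
    # weights + biases of one fully connected layer
    return d_in * d_out + d_out

def layernorm_params(d):
    # scale + shift
    return 2 * d

# Nature-DQN encoder: 32 8x8, 64 4x4, 64 3x3 filters on 4 input frames
conv_relu = (conv_params(4, 32, 8)
             + conv_params(32, 64, 4)
             + conv_params(64, 64, 3))
print(conv_relu)             # 77984   ("Conv ReLU" column)

flat = 64 * 7 * 7            # 3136 flattened conv features
mlp_branch = dense_params(flat, 512) + layernorm_params(512)
print(mlp_branch)            # 1607168 (baseline "MLP" column)
print(2 * mlp_branch)        # 3214336 (tanh_HR "MLP" column: two branches)

head = dense_params(512, 6)  # linear Q-head for a 6-action game
print(head)                  # 3078    ("Output Layer" column)
```

This also makes the point concrete: a Hadamard MLP doubles the incoming parameters of that single layer, just as CReLU does.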
We believe there is a too strong focus on the amount of parameters we use in our research, as we compare with equal-parameter baselines and have repeatedly stated that training speed is relatively unaffected.\\n\\n| **Architecture** | **Convolutional Parameters** | **MLP Parameters** | **Output Layer Parameters** | **Total Parameters** |\\n|------------------------------------------|-------------------------------|---------------------|------------------------------|----------------------|\\n| **Conv `tanh_HR` + MLP `tanh_HR`** | 155,968 | 3,214,336 | 3,078 | **3,373,382** |\\n| **Conv `ReLU` + MLP `tanh_HR`** | 77,984 | 3,214,336 | 3,078 | **3,295,398** |\\n| **Conv `CReLU` + MLP `CReLU`** | 77,984 | 3,211,776 | 3,078 | **3,292,838** |\\n| **Conv `ReLU` + MLP `ReLU` (Baseline)** | 77,984 | 1,607,168 | 3,078 | **1,688,230** |\\n\\nAlso, the claim of confounding factors due to more parameters is something we thought would be important to clarify for Reviewers, which is why our original manuscript contained Fig. 7b and appendix C.2, and we recently extended these ablations with different learning rates. This shows that simply scaling parameters only slightly works when you explicitly re-tune the learning rate $\\\\alpha$, but **decreases** performance when keeping the same learning rate. Note that we do not change any hyper-parameters for Hadamard Representations. We would appreciate it if the Reviewer could comment on this. \\n\\n2. You are claiming that a lot of the top conference RL algorithms provide limited evaluations on Atari. Although this is up for discussion, we do not believe we should be judged for copying the evaluation methods used in these conferences. Furthermore, you claim that training for more frames in M-DQN reduces evaluation stochasticity. Could the reviewer explain this claim more clearly?\\n\\n3. Thank you for pointing this out. This is why we use the term \\\"correlate\\\" in line 308 in our paper. 
A clear correlation, namely (1) reduced dying neurons for *tanh*, (2) increased dying neurons for *ReLU*, and (3) equal dying neurons for *Sigmoid*, is, in our opinion, a fair empirical result that corroborates our probability analysis. If you'd like, we could more prominently explain the difference between our probability analysis and the empirical results in our paper.\"}", "{\"comment\": [\"Thank you for your response.\", \"We have now changed Fig. 7b in the main text to incorporate the 1024-dimensional latent with the lower (5e-5) learning rate.\", \"We see your point. That said, resetting values for tanhs might have a more profound immediate effect, as the difference between the initialization value and the saturation value is generally larger than for a ReLU. In the short term, we are willing to add some additional experiments to the paper incorporating this. For now, however, we have added CReLU as a baseline for the new 51 Atari Games PQN experiments on **Page 8**. (We couldn't find a JAX-based implementation of Rational activations in order to add it to the PQN experiments).\", \"In Fig. 6, we define a tanh or sigmoid neuron as dead (or collapsed/saturated) when the neuron's KDE exhibits a spike larger than 20, as can be seen in Fig. 3b. We further explain this in Appendix B.1.\", \"Based on all the reviewers' comments, we have now made quite some changes since the original manuscript, and would kindly ask you to review them. To summarize, the biggest changes are:\", \"Added Rational Activations to the DQN experiments.\", \"Added PQN experiments on 51 games, testing Hadamard representations on all layers and significantly outperforming CReLU. 
Interestingly, moving from a hyperbolic tangent to a Hadamard hyperbolic tangent in PQN more than **doubles** performance.\", \"Added a large section on page 9, stating the effects of dead ReLUs and dead tanhs on the subsequent layer, as an insight into the activation's performance correlation with dying neurons.\"]}", "{\"summary\": \"This work aims to counteract the dead neuron and rank-collapse phenomena in RL. It proposes the 'Hadamard representation': simply duplicating the final encoder layer in the encoder of the 'Nature DQN' architecture and taking the elementwise product between the two outputs. Empirically, the results show an overall moderate increase in learning speed, together with fewer dead neurons and less rank-collapse as averaged over 8 Atari games.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Overall, I found the paper well-written and easy to parse through.\\n\\n2. Already in the introduction, the paper concisely introduces the problem settings and effectively conveys the tradeoff between ReLU and Tanh with results (fig. 1) and toy examples (fig. 2).\\n\\n3. Evaluation is conducted both with canonical off-policy and on-policy architectures/algorithms (Nature DQN and PPO) widely used in the broader literature.\", \"weaknesses\": \"Major:\\n\\n1. The experiments are conducted on a very small fraction of only 8 hand-picked Atari games. While the performance of the proposed method is slightly higher than ReLU on average, this does not hold consistently even across these 8 games (as reported in the Appendix). Moreover, while these environments are notably sensitive to stochasticity, evaluation is carried out with only 5 random seeds; I believe this to be very much insufficient for a paper with a simple architectural contribution that should be thoroughly validated. 
Thus, I am very much not convinced of the generality of these results, even across the full set of Atari environments, and would strongly encourage the authors to expand this section. In case Atari is too demanding, I would suggest considering other more computationally-friendly benchmarks such as the 16 environments in the highly-parallelizable Procgen benchmark.\\n\\n2. Even in the results of this paper, there does not seem to be a clear relationship between dead neurons/effective rank and performance (e.g., ReLU outperforms Tanh while having a higher number of dead neurons). Thus, even analytically, I feel the paper in its current state leaves many unresolved questions and provides very little novel information.\\n\\n3. I believe the claims about the probabilities of neurons 'dying' such as: \\\"Taking a product of hyperbolic tangent activated neurons thus reduces the probability of neuron saturation from p to p^2.\\\" (ln242-243) are not correct, and potentially misleading, as they make independence assumptions that are not even reported by the paper's own results in Appendix C.3.\", \"minor\": \"Figures 3 and 5 show 32 very small graphs, each with the value distribution of 16/512 hand-picked neurons at two hand-picked points during training. I did not find that these figures convey much meaningful information and would suggest replacing them with a graph showing the percentage of dead neurons for tanh/ReLU/Hadamard representations throughout training.\\n\\nI would suggest adding Subsection numbers.\", \"questions\": \"I would appreciate it if the authors could address, in their response, the criticism raised above. In addition:\\n\\n- In the MLP block of several modern LLMs such as Llama, the hidden representation is computed as (using this paper's notation) f(A_1x + B_1) * (A_2x + B_2). According to the paper's own analysis in Section 3, this simpler version should entirely prevent the dead neuron phenomenon with respect to the input activations. 
I wonder if the authors have considered and/or have this version.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. I appreciate the work you've put in the rebuttal.\\n\\nAs you find a different learning rate performs better, please use that in Figure 7 and anywhere else you're showing results for \\\"Latent Dim 1024\\\". I've increased my score based on this new result.\\n\\nThe argument that \\\"Redo does not really improve DQN performance in the normal setting\\\" is not convincing. The reason Redo doesn't help could be because there are not too many dormant units in the normal setting. But in your case, you see an improvement in performance when the number of dead tanh units is reduced. I suspect a resetting baseline like ReDO will improve performance in this case.\\n\\nIn Figure 6, when do you call a tanh or sigmoid unit to be dead?\"}", "{\"title\": \"Response to Reviewer HMtR\", \"comment\": \"We thank reviewer HMtR for the rigorous review of our paper.\\n\\n1. \\u201c*The experiments are conducted on a very small fraction of only 8 hand-picked Atari games. While the performance of the proposed method is slightly higher than ReLU on average, this does not hold consistently even across these 8 games (as reported in the Appendix). Moreover, while these environments are notably sensitive to stochasticity, evaluation is carried out with only 5 random seeds I believe this to be very much insufficient for a paper with a simple architectural contribution that should be thoroughly validated. Thus, I am very much not convinced of the generality of these results, even across the full set of Atari environments, and would strongly encourage to expand this section. 
In case Atari is too demanding, I would suggest considering other more computationally-friendly benchmarks such as the 16 environments in the highly-parallelizable Procgen benchmark.*\\u201d\\n\\nWe understand the reviewer\\u2019s point about the limited evaluation. Although we do believe that our paper provides insights into the dying neuron phenomenon and an activation function\\u2019s correlation with it, we have additionally run a set of experiments on the full Atari suite. We have evaluated the Hadamard representation on a very recent vectorized version of DQN called Parallelizable Q-Network (PQN) [1]. Using PQN, we were able to run the (nearly) full Atari suite of 51 games, and compared against tanh and ReLU.\\n\\n2. \\u201c*Even in the results of this paper, there does not seem to be a clear relationship between dead neurons/effective rank and performance, (e.g., ReLU outperforms Tanh while having a higher number of dead neurons). Thus, even analytically, I feel the paper in its current state leaves many unresolved questions and provides very little novel information.*\\u201d\\n\\nWe agree with the reviewer\\u2019s point that there remains an analytical gap as to the strong ReLU performance as compared to Tanh. We have added a section in the main paper that goes into more depth on this subject. This section is highlighted in red and can be found on page 9. Specifically, we argue that, in contrast to the ReLU, dying hyperbolic tangents generate a bias on the next layer.\\n\\n\\n3. \\u201c*I believe the claims about the probabilities of neurons 'dying' such as: \\\"Taking a product of hyperbolic tangent activated neurons thus reduces the probability of neuron saturation from p to p^2.\\\" (ln242-243) are not correct, and potentially misleading, as they make independence assumptions that are not even reported by the paper's own results in Appendix C.3.*\\u201d\\n\\nYou are correct. We will state these independence assumptions. 
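To illustrate what the independence assumption buys (a toy sketch of our own, with an arbitrary per-branch saturation probability p = 0.3): if each tanh branch saturates independently with probability p, the product unit is only saturated when both branches are, so the rate drops to roughly p^2:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 200_000

# Independent saturation indicators for the two tanh branches of one unit
sat1 = rng.random(n) < p
sat2 = rng.random(n) < p

# The Hadamard unit is saturated only when both of its branches saturate
both = float(np.mean(sat1 & sat2))
print(both)  # close to p**2 = 0.09
```

When the branches are correlated, the realized rate lies anywhere between p^2 (independent) and p (fully correlated), which is exactly the caveat the stated assumption makes explicit.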
We have slightly rewritten the corresponding explanation in the main paper as well as in Appendix C.3.\\n\\n4. \\u201c*In the MLP block of several modern LLMs such as Llama, the hidden representation is computed as (using this paper's notation) f(A_1x + B_1) * (A_1x + B_2). According to the paper's own analysis in Section 3, this simpler version should entirely prevent the dead neuron phenomenon with respect to the input activations. I wonder if the authors have considered and/or have this version.*\\u201d\\n\\nWe were not aware of this version, as we had only seen these multiplicative interactions before in the Highway Network paper [3]. We have now added a reference to Gated Linear Unit [4] in our algorithm section, as we found that the function you proposed is derived from [4] (And our function is similar to it). Although we think this could also be a good activation, we do not believe that this version would completely prevent the dead neuron phenomenon. The f in f(A_1x + B_1) * (A_2x + B_2) represents the Swish function. The Swish also saturates into 0 for decreasing inputs, meaning that the final product will remain 0 if the Swish-activated representation saturates into 0.\\n\\n5 (minor): To allow readers to visualize the problems with activations in hidden layers, we respectfully chose to keep figures 3 and 5 intact. In the experiment section, we already have quantitative dying neurons plots.\\n\\n\\nWe hope to have taken away most of the reviewer's issues with the old manuscript. We kindly request that you review the updated manuscript in light of these clarifications and revisions, and update the score accordingly. If there are any more clarifications needed please do not hesitate to reply.\\n\\n\\n[1] Gallici et al. Simplifying Deep Temporal Difference Learning. Arxiv 2024\\n\\n[2] Delfosse et al. Adaptive Rational Activations to Boost Deep Reinforcement Learning. ICLR 2024\\n\\n[3] Srivastava et al. Highway Networks. 2015\\n\\n[4] Dauphin et al. 
Language modeling with gated convolutional networks. PMLR 2017\"}", "{\"comment\": [\"Dear Reviewer S1DP,\", \"As the review period is coming to an end, we want to summarize the changes we have made:\", \"Added the [Rational](https://openreview.net/forum?id=g90ysX1sVs) activation to our DQN experiments on 8 Atari Games.\", \"Extended our evaluation by adding [PQN](https://arxiv.org/pdf/2407.04811v2), and testing on 51 Atari Games for 5 seeds (Median Human-Normalized scores are available in Fig. 10, Individual Game scores can be seen in Appendix D.2)\", \"Compared against the [CReLU](https://proceedings.mlr.press/v232/abbas23a/abbas23a.pdf) activation on the 51 Atari Games.\", \"Added a theorem on page 9 to show the difference of effects on the next layer between dying hyperbolic tangents and dying ReLUs.\", \"Minor changes throughout the paper based on all the reviewers' suggestions. Changes can be seen in Red.\", \"We would kindly ask you to review the final manuscript based on the changes made.\"]}", "{\"comment\": \"Thank you for your response. I appreciate the changes made in the manuscript, they certainly make the paper stronger.\\n\\nHowever, a missing comparison with a selective reinitialization baseline is a critical flaw. This paper shows that the dying neuron problem exists in RL even with tanh activation and proposes a solution. However, there is prior work that has shown the dying neuron problem in RL (Sokar et al.), and they've proposed a solution based on selective reinitialization. For the paper to be accepted, it must include a comparison with the previously proposed solution.\"}", "{\"summary\": \"This paper studies the dying neuron problem with tanh-type activations. The paper's first result is that bounded activation functions like tanh perform worse than ReLU-type activations. It is argued that the poor performance is probably due to the vanishing-gradient/dying-neuron problem faced by the tanh-type activations. 
The paper then proposes to solve the dying-neuron problem using Hadamard representations. Finally, it is shown that Hadamard representations outperform the standard DQN architecture.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is generally well-written. The proposed solution is simple to use and novel. The proposed method is shown to be more effective than standard DQN.\", \"weaknesses\": \"The biggest issue with the paper is the weak empirical evaluation\\u2014particularly the following two points.\\n* The number of hidden units in Hadamard Representations is twice that of the base system. However, the comparison with the base system with 1024 hidden units is not done correctly. The base system with 1024 hidden units is not tuned. At least the learning rate of the base system should be tuned. I suspect that something like 5e-5 could be a good value for the learning rate for the base system with twice the number of hidden units. Using default hyper-parameters is insufficient when the architecture is changed by making it twice as wide. The paper has to convincingly show that Hadamard representations provide a benefit over a well-tuned base system with twice as many hidden units. \\n* Missing baseline. The paper does not compare Hadamard representations to a resetting baseline like ReDo [1] or continual backprop [2]. Why not just reset the units that have saturated in the tanh network? \\n\\n[1] Sokar et al. The dormant neuron phenomenon in deep reinforcement learning. ICML 2023\\n[2] Dohare et al. Loss of plasticity in deep continual learning. 
Nature 2024\", \"questions\": \"See Weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Addition to the general message to Reviewers:\", \"comment\": \"As per comment of Reviewers 3nuW and Vagz, we have run additional experiments on the 51 Atari suite for PQN, using both full Hadamard representations (CNN + MLP) as well as only MLP Hadamard representations. We have updated Fig.10 on page 9 with the Median Human-Normalized scores. We show a significant improvement over the novel PQN baseline using the full Hadamard representation over 51 games.\\n\\nIn addition, also as per reviewer requests, we are running the Crelu baseline on the 51 Atari environments, and plan on adding it to the PQN results in Fig.10 tomorrow before the deadline!\"}", "{\"title\": \"Response to Reviewer Vagz\", \"comment\": \"We thank reviewer Vagz for the review, and the positive note at the end. We will reply to all the weaknesses and questions the reviewer has noted.\\n\\n1. \\\"*The authors claim that the dying neuron problem is not exclusive to ReLU but also extends to Tanh and provide a definition where the activation output . - - (Omitted for response length)*\\\"\\n\\nYou are right about the definition not working for ReLU. We have changed the definition and the explanation for it in line 137. However, what do you mean that intuitively, the neuron is not 'dead' if its output is not zero? We think of dying neurons as having no variation according to the input observation, meaning they are basically collapsed. We feel like a dead ReLU is a saturated ReLU, just like a dead Tanh is a saturated Tanh (Only into 1 end. so always only 1 or only -1). Please let us know if you have any other thoughts on this.\\n\\n2. \\\"*The authors mentioned that the definition of a dead/saturating neuron applies when we test the neuron in all data in the buffer. 
Is this quantity actually being measured in the experiments? - \\u201c\\u201d \\u201c\\u201d.*\\\"\\n\\nThanks for the observation, we indeed forgot to mention our approximation to this. We sample a batch from the buffer to calculate the dead neurons. We have added this explanation in Line 138 in the revised Manuscript.\\n\\n3. \\\"*The empirical evaluation could be more substantial. - \\\"\\\" \\\"\\\"*\\\"\\n\\nAs this was an overlapping theme across reviewers, we have added experiments using a parallelizable version of DQN called PQN [1] on the (nearly) full Atari suite of 51 games. We report the median Human-normalized scores on page 9 of the new manuscript.\\n\\n4. \\\"*More relevant activation function baselines are needed. - \\\"\\\" \\\"\\\"*\\\"\\n\\nWe have run additional experiments on our 8-environment DQN baseline, where we compared against the recent novel Rational [2] activation over 40M frames. We have updated figure 7a and Fig. 19 accordingly. As for the concatenated ReLU [3], this paper itself does not report any performance improvement over the non environment-resetting baseline (See Fig. 6 of [3]), which is why we have added the paper to related work but not compared against it. We hope this clarifies things.\\n\\n*Questions*\\n\\n*1*. \\\"*The authors mentioned in line 367 that they considered addition operation instead of Hadamard product and showed it performs poorly. There was no reason or postulate for why this is the case. Can the author provide a probability analysis similar to the one given on page 5?*\\\"\\n\\nWe believe that multiplicative representations also have different function approximation properties as compared to additions. As for the dying neuron analysis, this should give the same results as for a product of hyperbolic tangents, i.e: The final representation is only dead if both neurons are dead. 
However, due to the poor performance of these additive representations, we have not looked further into their exact functioning.\\n\\n*2*. \\\"*If the score is normalized with respect to the ReLU activation baseline score, why doesn\\u2019t the final ReLU baseline reach 1 at the end of training in Figure 7a and Figure 9a but slightly less than 1?*\\\"\\n\\nIf you look closely, the top of the uncertainty of the ReLU curve should hit 1. We normalize with respect to the highest and lowest score achieved by the ReLU per environment.\\n\\n*3.* \\\"*Why did the authors use HR Tanh only for the last layer?*\\\"\\n\\nPreliminary results showed a strong preference for CNNs towards the ReLU. However, for more clarity, we are now running additional experiments showing tanh and Hadamard representations on the CNNs as well. We hope to add them to the Appendix at the beginning of next week. As to the second part of your question, the architecture only has 1 hidden linear layer. Or do you mean the Q-values?\\n\\n*4.* - Thanks for noticing, we changed this throughout the manuscript.\\n\\n*5.* - Agreed. We have removed this paragraph and slightly rewritten this section.\\n\\n*6.* \\\"*In some environments, such as Seaquest and SpaceInvaders, HR hurts performance compared to ReLU. Is there a reason why?*\\\"\\n\\nPossibly due to a low-frequency/simplicity bias for these environments, but we still see substantial improvement over the tanh. However, we are hesitant to draw causal conclusions for performance differences per environment.\\n\\n- Note that due to response length limitations, we have visually shortened some of your questions.\\n\\nWe hope to have alleviated the reviewer's concerns with the paper. In light of our response and the revised manuscript, we kindly ask the reviewer to reconsider the given score.\\n\\n\\n[1] Gallici et al. Simplifying Deep Temporal Difference Learning. arXiv 2024\\n\\n[2] Delfosse et al. 
Adaptive Rational Activations to Boost Deep Reinforcement Learning. ICLR 2024\\n\\n[3] Abbas et al. Loss of Plasticity in Continual Deep Reinforcement Learning. CoLLAs 2023\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"1.\\n- Like in [Gulcehre et al. 2022](https://openreview.net/pdf?id=HFfJWx60IT), we originally focused on the final hidden layer in all architectures. We argued that this was the most important representation, and by focusing on this layer we could do a more concise qualitative analysis like in [Gulcehre et al. 2022](https://openreview.net/pdf?id=HFfJWx60IT). However, as per the requests of reviewers Vagz and 3nuW, we decided to additionally run tests using a Hadamard Representation in all hidden (CNN + MLP) layers. This is what we did for 51 Atari games using the PQN algorithm. In this evaluation, we see that using a Hadamard Representation in every layer **significantly increased performance over the baseline and over CReLU.** We have now updated the manuscript to describe this better on page 6, line 276: \\\"*In line with [Gulcehre et al. 2021], for our qualitative experiments, a Hadamard Representation is only applied to the last hidden layer $z_{t} \\\\in \\\\mathbb{R}^{512}$.*\\\" And further on line **282**: \\\"*Finally, the PQN experiments are conducted with an HR on all hidden layers, showing the Median Human-Normalized scores on 51 Atari games.*\\\" Interestingly, changing from normal hyperbolic tangents to Hadamard hyperbolic tangents more than **doubles the performance** in PQN.\\n- Thank you for pointing this out. Yes, using a full Hadamard representation nearly doubles the incoming (not outgoing) parameters of the hidden layer. To make this more clear, we have now added this prominently in our Limitations section in **Line 518**: \\\"*Using a Hadamard representation will double the incoming weights connected to a hidden layer. 
However, in Fig. 10, a comparison against the CReLU is shown, which equally doubles the network's parameters. Furthermore, recent work shows that scaling baselines in Atari often leads to reduced performance [Obando-Ceron et al. 2024a](https://arxiv.org/pdf/2402.08609), [Obando-Ceron et al. 2024b](https://arxiv.org/pdf/2402.12479).*\\\"\\n\\nTo add to this, we notice relatively similar training times for Hadamard Representations as for normal representations. In contrast, the Rational activation baseline takes us around 30% longer to train due to the additional learnable component, while using fewer parameters than the Hadamard Representation. Furthermore, we have now compared PQN-Baseline with PQN-CReLU and PQN-Hadamard, where PQN-CReLU has 3,366,886 parameters, PQN-Hadamard has 3,376,070 parameters, and PQN-Baseline has 1,688,550 parameters. **PQN-Hadamard outperforms both**!\\n\\n2. The environments we are not using are: *MontezumaRevenge-v5, Pitfall-v5, PrivateEye-v5, Skiing-v5, Solaris-v5, and Pooyan-v5*. These were acquired by simply taking the Full Atari Suite (57 games) minus the hard-exploration games. As you pointed out, we had forgotten to add the plots of the 51 individual games, as we have now done in **Appendix D.2**. The per-game analysis also shows convincing outperformance of Hadamard Representations over CReLU and the baseline. As per the seed comment here and in your original review: We understand that 5 seeds might be relatively low in general Deep Learning, but we are not different from most of the top-tier RL papers working in Atari. For instance: [CReLU](https://arxiv.org/pdf/2303.07507) uses 5 seeds, [Munchausen-DQN](https://arxiv.org/pdf/2007.14430) uses 3 seeds, [Rational Activations](https://openreview.net/pdf?id=g90ysX1sVs) uses 5 seeds, [ReDo](https://proceedings.mlr.press/v202/sokar23a/sokar23a.pdf) uses 5 seeds, [InFeR](https://openreview.net/pdf?id=ZkC8wKoLbQ7) uses 3 seeds. **Does this alleviate your concerns**?\\n\\n3. 
We apologize, but we do not understand what you mean by a contradiction of the empirical results. Our proposed dying neuron probability differences using a Hadamard representation (Table 1) strongly correlate with the empirical results (Table 2). **If possible, could you please explain more clearly how this analysis is invalid, so that we can use your feedback to improve our reasoning?**\\n\\nDue to concerns about limited evaluations in the initial feedback of multiple reviewers, we have added an additional evaluation of 51 Atari games. To summarize, we now have empirical results on (simple) **Supervised Learning**, **DQN** on 8 Atari games, **PPO** on 8 Atari games and **PQN** on 51 Atari games, where we significantly outperform CReLU. **We would be happy to know whether this empirical evaluation still seems limited, or what would be needed to further convince you?**\"}", "{\"comment\": \"Thank you for your efforts to address my concerns. I raised my score accordingly.\"}", "{\"comment\": \"Thank you for appreciating our changes. Please consider that the Hadamard Representation (HR) also has function approximation benefits (Fig. 2), to which we credit a part of the performance increase. We therefore understand the concern, but believe we are already making a substantial contribution to the field without the specific ReDo baseline. Coming back to your point on using ReDo for *tanh*: We think that ReDo and the HR might even be mutually beneficial in the case of *tanh*, as (Fig. 6) shows that using HR does not eliminate all dying hyperbolic tangents. As said before, we want to run the additional ReDo experiments in the short term for the camera-ready version.\"}", "{\"summary\": \"The paper proposes a novel activation function based on Hadamard representations to address the limitations of traditional activation functions in reinforcement learning. 
It highlights issues like vanishing gradients and dying neurons, demonstrating that the proposed method improves learning efficiency and representation capacity in deep Q-networks and proximal policy optimization.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The proposed Hadamard representation effectively addresses the dead neuron issue with existing activation functions, particularly in reinforcement learning contexts.\\n2. The experimental results show clear improvements in learning speed and performance metrics, providing solid empirical evidence for the claims made.\\n3. The paper presents a well-defined problem and clearly articulates its contributions, making the paper easy to understand.\", \"weaknesses\": \"1. The lack of a diverse range of network architectures tested limits the generalizability of the findings. The paper should test HR on various networks and test HR on different layers. It would be better if HR were a universal activation function suitable for all layers, rather than only suitable for the last layer.\\n2. Absence of detailed comparisons with other novel activation functions that have emerged recently.\\n3. The paper did not conduct tests on diverse tasks, e.g. MuJoCo, DMControl. Using only Atari does not demonstrate that this is a universal issue in reinforcement learning.\\n4. As mentioned in Line 279, the experiments are all performed on 8 Atari games. Testing on only 8 Atari games does not sufficiently demonstrate a stable improvement in performance.\\n5. In the PREVENTING DYING NEURONS section in Line 191, the analysis is not related to reinforcement learning tasks at all. So, claiming that HR is an activation function particularly suitable for reinforcement learning feels somewhat forced.\", \"questions\": \"1. How do Hadamard representations compare with other recent activation functions not covered in the paper?\\n2. 
What potential challenges might arise when applying this method to different neural network architectures?\\n3. What is the result if HR is tested on various tasks, e.g. MuJoCo, DMControl, and more tasks on Atari?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thanks for your response and for running some additional experiments.\\n\\nHowever, I do not find my initial criticism to have been fully addressed:\\n\\n1. In the caption of Figure 10, it reads that the Hadamard representations (in contrast to the other results) are being applied at every layer of the network:\\n- I would really appreciate if the authors would clarify why the implementation diverges from their previous applications.\\n- Most importantly (please, correct me if I happen to be wrong in my understanding): doesn't this imply the number of parameters is doubled for each layer? This seems like a major limitation that is never discussed and that makes the PQN comparison unfair.\\n\\n2. The new PQN results also lack clear details. The authors mention they do not evaluate Atari's hard exploration games, yet I could not find what exactly these games are. Moreover, only the median results are reported over 5 seeds, without per-task results. \\n\\n3. As the authors seemed to have agreed that the independence assumptions about neurons dying contradict the paper's own empirical results, I do not think adding a single sentence \\\"Lastly, we make an independence assumption between two individual neurons.\\\" makes the paper's analysis any more valid.\\n\\nOnce again, I would like to state that I appreciate the research direction and its potential value. However, it seems to me that the paper in its current form is still not sufficient, especially from an empirical perspective. 
Furthermore, much information and limitations seemed to not be clearly stated in the text (something which I found to be worsened in the current revision). For these reasons, I will not be modifying my score at this moment.\"}", "{\"title\": \"Response to Reviewer S1DP\", \"comment\": \"We thank reviewer S1DP for the review of our paper. We will reply to both weaknesses proposed by the reviewer.\\n\\n1. \\u201c*The paper has to convincingly show that Hadamard representations provide a benefit over a well-tuned base system with twice as many hidden units.*\\u201d\\n\\nTo accommodate the reviewers\\u2019 valid concerns, we have tested the 1024 latent dimensional setup for three different learning rates, and added these to appendix C.2 in our revised manuscript. Over three different learning rates (1e-5, 5e-5, 1e-4) with a 1024 dimensional latent state, it shows that performance increases when decreasing the learning rate, but eventually becomes worse again. Our Hadamard representation still proves to be stronger.\\n\\n2. \\u201c*Missing baseline. The paper does not compare Hadamard representations to resetting baseline like Redo [1] and continual backprop[2]. Why not just reset the units that have saturated in the tanh network?*\\u201d\\n\\nWe understand the reviewer's concern about missing baselines. However, a deeper dive into Redo [1, Fig. 10] shows that Redo does not really improve DQN performance in the normal setting, but only when you change the algorithm by increasing the gradient steps per frame from 0.25 to 1. Further work in [3, Fig. 12] also shows that Redo (resetting) can even harm performance. For this reason, although Redo remains interesting related work, we did not implement their algorithm as a baseline but use their work for fundamental insights. \\n\\nAs for [2], we thank the reviewer for this addition, which we will add to our related work section. 
However, as this paper was published 7 days before the ICLR 2025 submission deadline, we were unfortunately not aware of this paper and did not compare with it. To accommodate the reviewer and compare against a suitable baseline, we have run experiments comparing our Hadamard Representation to a novel ICLR 2024 learnable Rational Activation [3] and have added this comparison in our main paper. \\n\\nWe kindly request that you review the updated manuscript in light of these clarifications and revisions and reconsider your score.\\n\\n[1] Sokar et al. The Dormant Neuron Phenomenon in Deep Reinforcement Learning. ICML 2023\\n\\n[2] Dohare et al. Loss of plasticity in deep continual learning. Nature 2024\\n\\n[3] Delfosse et al. Adaptive Rational Activations to Boost Deep Reinforcement Learning. ICLR 2024\"}", "{\"title\": \"Update to experiments\", \"comment\": \"We have now updated the PQN experiments to include the [CReLU](https://proceedings.mlr.press/v232/abbas23a/abbas23a.pdf). Over 51 Atari games, our plots show significant improvement over both the baseline and the CReLU. We have also added the individual game scores in Appendix D.2. Interestingly, switching from hyperbolic tangents to Hadamard hyperbolic tangents more than doubles the score in PQN.\"}", "{\"title\": \"Final comment to reviewer 3nuW\", \"comment\": \"Dear reviewer 3nuW,\\n\\nFollowing your (and other reviewers\\u2019) comments, we want to remind you that we have added the [Rational](https://openreview.net/forum?id=g90ysX1sVs) activation to our DQN experiments, and the [CReLU](https://proceedings.mlr.press/v232/abbas23a/abbas23a.pdf) to our 51 Atari Game PQN experiments. In PQN, we show significant improvement over the CReLU, as well as over the just-released baseline. 
Interestingly, switching from normal hyperbolic tangents to Hadamard hyperbolic tangents more than doubles the performance in PQN!\\n\\nThere is also a section on page 9 explaining the difference between the effects of dying ReLUs and dying tanhs on the next layer in a network, providing insights into the poor performance of hyperbolic tangents.\\n\\nNext to this, a lot of minor changes have been made and highlighted in red. We would kindly ask you to review the paper under the current changes, and reconsider the assigned score.\"}", "{\"title\": \"Message to all Reviewers.\", \"comment\": \"We thank all reviewers for their reviews. Based on your reviews, we have run additional experiments and made significant changes (highlighted in red) to our manuscript. The most important changes are summarized here:\\n\\nTo address concerns regarding comparison with a valid baseline, we have added a comparison with the ICLR 2024 Rational [2] Activation function in our original 8-environment DQN setting. This activation function has been added to the main plotting results in the Experiments section, as well as the per-game score in Appendix D.2.\\n\\nTo address multiple reviewers\\u2019 concerns about the lack of Atari environments, we have run the (nearly) full Atari suite of 51 games for 5 seeds on a new parallelizable version of DQN called \\u201cPQN\\u201d [1]. Here, we compare with ReLU, tanh, and the tanh Hadamard. As per standard practice, we have added the median-human normalized scores to our experiment section, showing consistent results across the nearly full Atari suite.\\n\\nFinally, as reviewer HMtR pointed out, there seems to be a slight gap in understanding why the ReLU might be less affected by performance issues correlating with dead neurons, as opposed to the hyperbolic tangent. We have added an additional section in the main paper on page 9 to give insights into this problem. 
Specifically, we show that dead hyperbolic tangents transform weights into biases, while dead ReLUs resemble network pruning.\\n\\nWe will additionally reply to each reviewer individually.\\n\\n[1] Gallici et al. Simplifying Deep Temporal Difference Learning. Arxiv 2024\\n\\n[2] Delfosse et al. Adaptive Rational Activations to Boost Deep Reinforcement Learning. ICLR 2024\"}", "{\"title\": \"Final message to all reviewers (Nov 28)\", \"comment\": \"Dear reviewers,\\n\\nBased on the reviews, we have done a significant amount of additional experiments since the first manuscript. We have compared against the [Rational](https://openreview.net/forum?id=g90ysX1sVs) activation in DQN on 8 Atari games, and compare against the [CReLU](https://proceedings.mlr.press/v232/abbas23a/abbas23a.pdf) in PQN on 51 Atari games. In PQN, we significantly outperform CReLU, which, like the Hadamard, has nearly double the parameters of the baseline. Interestingly, in PQN, switching from normal hyperbolic tangents to Hadamard hyperbolic tangents more than **doubles** the performance, without any hyperparameter changes. Individual games for PQN can now also be seen in Appendix D.2.\\n\\nFurthermore, we have added a large section on Page 9 explaining the difference in the effects on the next layer for dying ReLUs and dying tanhs. Specifically, we show that dying tanhs essentially turn weights into biases.\\n\\nBased on each reviewer\\u2019s feedback, we have also made minor changes throughout the text. We have highlighted all changes in red.\\n\\nWe hope to have clarified the reviewers' concerns, specifically in the empirical evaluation of the Hadamard representation.\"}", "{\"metareview\": \"In this paper, the authors propose a novel activation function based on Hadamard representations (HR) to address limitations of traditional activation functions in reinforcement learning, such as vanishing gradients and dying neurons. 
The authors demonstrate improved learning efficiency and representation capacity using HR in Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO).\\n\\nThe major concerns raised by the reviewers for the first draft lie in \\n\\n1. Limited evaluation and comparison: the empirical study in the first draft covers only a limited set of Atari games and is missing comparisons with other activation functions. \\n\\n2. Weak justification in RL: the analysis in the \\\"Preventing Dying Neurons\\\" section is not related to RL tasks, making the claim that HR is particularly suitable for RL feel forced.\\n\\nIn the rebuttal period, the authors completed comparisons on more Atari games, partially addressing the first weakness. However, it is not clear whether this is universally applicable to other environments, without experiments on the MuJoCo and DMC suites. Meanwhile, it is not clear whether the \\\"dying neurons\\\" issue applies only to RL and supervised learning. I suggest the authors further investigate the issue and improve the draft.\", \"additional_comments_on_reviewer_discussion\": \"In the rebuttal period, the authors completed comparisons on more Atari games, partially addressing the limited evaluation and comparison issue. The draft has been largely improved.\"}
9JCNPFL1f9
Visual Haystacks: A Vision-Centric Needle-In-A-Haystack Benchmark
[ "Tsung-Han Wu", "Giscard Biamby", "Jerome Quenum", "Ritwik Gupta", "Joseph E. Gonzalez", "Trevor Darrell", "David Chan" ]
Large Multimodal Models (LMMs) have made significant strides in visual question-answering for single images. Recent advancements like long-context LMMs have allowed them to ingest larger, or even multiple, images. However, the ability to process a large number of visual tokens does not guarantee effective retrieval and reasoning for multi-image question answering (MIQA), especially in real-world applications like photo album searches or satellite imagery analysis. In this work, we first assess the limitations of current benchmarks for long-context LMMs. We address these limitations by introducing a new vision-centric, long-context benchmark, "Visual Haystacks (VHs)". We comprehensively evaluate both open-source and proprietary models on VHs, and demonstrate that these models struggle when reasoning across potentially unrelated images, perform poorly on cross-image reasoning, as well as exhibit biases based on the placement of key information within the context window. Towards a solution, we introduce MIRAGE (Multi-Image Retrieval Augmented Generation), an open-source, lightweight visual-RAG framework that processes up to 10k images on a single 40G A100 GPU—far surpassing the 1k-image limit of contemporary models. MIRAGE demonstrates up to 13% performance improvement over existing open-source LMMs on VHs, sets a new state-of-the-art on the RetVQA multi-image QA benchmark, and achieves competitive performance on single-image QA with state-of-the-art LMMs. Our dataset, model, and code are available at: https://visual-haystacks.github.io.
[ "Large Multimodal Models", "Visual Question Answering" ]
Accept (Poster)
https://openreview.net/pdf?id=9JCNPFL1f9
https://openreview.net/forum?id=9JCNPFL1f9
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xzuRcFRkGg", "xDCtnYkIAI", "wwUVO4YW3k", "tTnGZPOjBO", "tJy75a1QVI", "tCWMQlp3p5", "pUxOwZxxdw", "jpyAQBfQxd", "ctlXyfueYk", "Z4oczHY6Rc", "Y23ZeS3Cdj", "NmBC1n3KEz", "KocrV3u0k9", "GLxHbZpUwD", "F5j4uoLKT2", "3kXqKyrzEh" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "decision", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732252064590, 1733163021288, 1732748412188, 1732656779524, 1732653872832, 1732780295026, 1732641214029, 1731910668315, 1731908720092, 1730692932301, 1734914388406, 1730508989402, 1737523602327, 1732641075539, 1730498195933, 1731906002713 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Reviewer_io8N" ], [ "ICLR.cc/2025/Conference/Submission3847/Reviewer_eUWk" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ], [ "ICLR.cc/2025/Conference/Submission3847/Reviewer_kpNk" ], [ "ICLR.cc/2025/Conference/Submission3847/Area_Chair_JzTu" ], [ "ICLR.cc/2025/Conference/Submission3847/Reviewer_eUWk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3847/Reviewer_kpNk" ], [ "ICLR.cc/2025/Conference/Submission3847/Reviewer_io8N" ], [ "ICLR.cc/2025/Conference/Submission3847/Authors" ] ], "structured_content_str": [ "{\"title\": \"Follow-Up on Rebuttal for \\\"Visual Haystacks\\\"\", \"comment\": \"Dear Reviewers,\\n\\nWe're following up on the rebuttal for our paper, \\\"Visual Haystacks: A Vision-Centric Needle-In-A-Haystack 
Benchmark.\\\" We appreciate the time and effort you've invested in reviewing our work.\\n\\nIn our rebuttal and the updated paper, we've thoroughly addressed the concerns and suggestions raised in the initial reviews. If you find that all your questions have been resolved, we kindly ask you to consider reflecting this in the scores. Should you have any additional questions or require further clarification, we are eager to engage in more discussion during the official ICLR discussion period.\\n\\nThank you once again for your thoughtful feedback and consideration.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thank you for your thoughtful review and valuable feedback on our work, and your openness to further discussion. Regarding recognition vs. reasoning, VHs intentionally focuses on foundational skills as a diagnostic unit test to evaluate core model capabilities before tackling more complex tasks (though we agree, an iteration of the dataset could be expanded to far more complex visual reasoning tasks, perhaps in many different languages and cultural contexts). While VHs may not fully simulate real-world scenarios, we believe its diagnostic nature offers important insights for advancing LMM capabilities. The retriever module in MIRAGE addresses efficiency and distractor challenges, a necessity for real-world large-scale tasks. While the retriever may be less useful in single-image datasets, we believe that multimodal retrieval-augmented generation will be necessary in the future, and MIRAGE's retriever enables it to be the first, and only, open-source LMM capable of handling more than 10,000 input images. 
Thank you once more for your time and thoughtful feedback; we deeply value your insights and look forward to refining our work further based on your suggestions.\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer,\\n\\nAs we are now midway through the discussion period, we wanted to send a friendly reminder about our response. We believe that our response, along with the revised version of the paper\\u2014particularly the clarification on VHs benchmark dataset design and MIRAGE's performance\\u2014addresses the key points raised in your review.\\n\\nIf you have any further questions or concerns, we would be more than happy to address them during the discussion period. Thank you for your time and feedback.\\n\\nBest,\\n\\nThe Authors\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thanks for the response. We will clarify all issues mentioned in the review/rebuttal in the final revision.\"}", "{\"comment\": \"Thank you for your response. The authors have addressed my major concerns. I'll maintain my score.\"}", "{\"title\": \"Reviewer response\", \"comment\": \"I thank the efforts made by the author. The rebuttal address lots of my concerns.\\n\\nHowever, I remain my concerns around (1) the proposed benchmark requires a strong recognition among all the input images, rather than true visual reasoning; (2) the designed retriever module is ad-hoc and unnecessary to some tasks, where the author avoid to responding \\\"many of the tested single image dataset used in this paper, do not need this retriever module at all.\\\". \\n\\nI'd like to draw the attention to AC on this manner and open to discuss further with all other reviewers. \\n\\nRegarding the score, I prefer to increase my score and reduce my confidence score. Also, for a all-6-score paper, I recommend AC to pay attention to potential weak points. I will not fight for its acceptance.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thanks for the response. 
We will clarify all issues mentioned in the review/rebuttal in the final revision.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We appreciate the reviewer for providing thoughtful feedback and for highlighting our paper\\u2019s contributions, including the creation of a novel and insightful benchmark dataset and the development of the MIRAGE framework. Below, we respond to each of the concerns:\\n \\n**[W1, W2] Limited Question Types and Templates**\\n \\nOur VHs benchmark is intentionally designed with binary questions and simple templates. The rationale is to isolate and evaluate basic skills, visual retrieval and basic cross-image reasoning, without introducing confounding factors, like reasoning the logic in questions. By keeping the tasks straightforward, we provide a diagnostic unit test allowing researchers to pinpoint whether deficiencies lie in visual retrieval or reasoning. While more complex and diverse question templates are valuable for simulating real-world scenarios, they can obscure the diagnostic clarity of model performance. While expanding VHs to include richer templates is a logical next step, our results show that current LMMs struggle with even this simple test, so we strongly believe that such an expansion is out of scope for this initial effort. This point was briefly mentioned in L150-155 and Appendix D of our submission and further clarified in Appendix D of the updated paper.\\n \\n**[W3] Performance Drop in General VQA Tasks**\\n\\nWe would like to emphasize that the primary focus of this paper is on large-scale multi-image QA tasks, where MIRAGE achieves SOTA results on the VHs benchmark (a unit test for these capabilities) and a real-world benchmark dataset, RetVQA, among all open-source solutions. While MIRAGE shows slightly reduced performance on some single-image QA tasks as detailed in Table 1, its main advantages are:\\n1. 
**Scalability**: MIRAGE is the only framework capable of processing thousands of images in a single query, greatly surpassing the input capacity of both proprietary and open-source models.\\n2. **Performance**: MIRAGE excels in multi-image tasks and, despite a slight drop in some single-image tasks like TextVQA, it outperforms LLaVA-v1.5-7B in some others like MMB(-CN), and rivals proprietary models in some cases.\\n3. **Efficiency**: MIRAGE achieves an 18x token compression and faster runtime, as shown in Table 6 (B). While this may slightly impact single-image QA performance, it is a strategic trade-off that enables MIRAGE to excel in large-scale multi-image tasks.\\n\\nThis clarification was added to Section 5 of the updated paper.\\n\\n**[W4] MIRAGE\\u2019s Visual-RAG Approach**\\n\\nWhile MIRAGE functions as a visual-RAG framework with a retriever and an LMM, it is an end-to-end trained \\\"single-model architecture,\\\" as detailed in Section 4.1 and Figure 5(A) of the submission paper. Thus, the \\\"retriever\\\" in this context should be understood as a component that primarily de-selects irrelevant details to enhance long-context reasoning. Based on this, we believe that our comparison with other \\u201csingle-model LMMs\\u201d on VHs is both fair and valid. We have addressed this further in L310-318 of the updated paper.\\n \\nTo assess the efficacy of the de-selection component, we compare the retrieval accuracy between MIRAGE and CLIP in Figure 6(A) and the QA accuracy between MIRAGE and CLIP+LLaVA in Table C.2 of the original submission (updated to Table C.3). 
These results highlight that such naive combinations fall short of optimal performance, emphasizing the significance of MIRAGE\\u2019s architecture.\\n \\n**[W5] Real-World Relevance of the VHs Benchmark**\\n\\nWhile the VHs benchmark serves as a diagnostic unit test (as mentioned in [W1]) and may not fully replicate real-world scenarios, we argue that the ability to find images containing specific objects is a foundational skill essential for various large-scale multi-image reasoning applications. These include searching for objects of interest in personal photo albums, identifying patterns in medical image databases, monitoring deforestation using satellite imagery, mapping urban changes with autonomous navigation data, and understanding consumer behavior through retail surveillance footage. \\n \\nThis argument is addressed in the updated Lines 33\\u201339 of the submission paper. We are happy to have further discussion if the reviewer could suggest specific tasks or datasets that represent real-world long-context challenges.\\n \\n**[Q1] Clarification on MIRAGE and Q-Former (Table 1 and 2)**\\n\\nTo clarify, the implementation of Q-former is consistent across both tables and MIRAGE does utilize Q-former for image token compression. Table 1 reflects the full MIRAGE framework's performance, incorporating the Q-former compressor, a retriever module, and the Llama-v3.1 LLM. In contrast, Table 2 in the original submission (Table C.2 in the updated paper) presents an ablation study focused on Q-former's effectiveness in a simpler setup using LLaVA-v1.5 (lmsys/vicuna-v1.5-7b) without the retriever. Adjustments have been made to the paper to clarify these distinctions.\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We appreciate the reviewer for providing valuable comments and for highlighting our strengths in developing a meaningful visual NIAH benchmark, uncovering interesting findings when evaluating LMMs, and providing rigorous experiments. 
Below, we address each of the concerns:\\n\\n**[W1] Focus on Recognition Tasks**\\n\\nWe want to clarify that object recognition\\u2014retrieving key visual content from a large collection of data\\u2014is a key skill for real-world long-context visual understanding tasks like searching through photo albums or analyzing medical and satellite imagery. Just as the NIAH benchmark (easily solvable with regex) serves as a basic unit test for LMMs, VHs\\u2014despite potentially being solvable with an object detector as shown in Figure 2 and 3 of the submission\\u2014can also act as a diagnostic tool for LMMs.\\n\\nAdditionally, VHs assess cross-image reasoning, where LMMs often show significant performance declines when integrating information across multiple images, as detailed in Figure 3(A) and Appendix C.4. These points were briefly discussed in Appendix D of the submission and have been elaborated upon in the updated paper.\\n\\n**[W2] Multi-Needle Challenges and Dataset Design**\\n\\nWe acknowledge that performance on the multi-needle track sometimes surpasses the single-needle one for image sets larger than 20. We'd like to clarify that the performance on the multi-needle track is not necessarily lower than that of the single-needle track. The multi-needle track poses questions in the format: \\\"Q: For all images with <anchor object>, do ALL/ANY of them contain <target object>? A: Yes/No.\\\" We can then categorize each data point into two cases:\\n1. The ANY-Yes/ALL-No QA pairs: These cases are easier in the multi-needle track since retrieving at least one correct image can suffice. Conversely, the single-needle task demands precise retrieval, posing a greater challenge for models prone to false negatives.\\n2. 
The ANY-No/ALL-Yes QA pairs: These are more difficult in the multi-needle track than the single-needle one, as the model must retrieve all relevant images and integrate their information, demanding stronger cross-image reasoning.\\n\\nThe results in Figure C.3 of the updated paper support the above explanation, showing that the benchmark effectively reveals nuanced model behaviors rather than indicating a design flaw. We have included this discussion in Appendix C.4.\\n \\n**[W3] Relevance of the Retriever Module**\\n\\nWe believe that the retriever-LMM collaboration in our MIRAGE framework is well-motivated and is not ad-hoc in design. \\n1. Retriever: As there can be a lot of irrelevant visual content in the input, a retriever, filtering images based on relevance before reasoning occurs in the LMM, can make the whole system more efficient in terms of runtime (See Figure 6 (B) of the submission paper) and prevent the LMMs suffering from visual distractors \\u2013 a limitation for current LMMs nowadays as shown in Figure 1 of the submission paper.\\n2. LMM: Given a small number of images, LMMs can be a fundamental and critical processor for the visual reasoning and next token prediction task for the actual answer generation.\\n\\nThis framework makes MIRAGE robust for both the VHs unit tests and realistic large-scale multi-image QA tasks. These points were briefly addressed in Section 4.1 of the submission and further emphasized in L310-318 in the updated paper.\\n \\n**[W4] Performance on Other Benchmarks**\\n\\nThe primary focus of the paper is on basic capabilities\\u2014visual retrieval and basic cross-image reasoning\\u2014for large-scale multi-image QA. MIRAGE achieves SOTA results on the VHs benchmark (a unit test for these capabilities) and on the existing RetVQA dataset among all open-source solutions, indicating its superiority in multi-image QA.\\n \\nIn addition to the main focus, we've included seven common single-image QA benchmarks as a bonus. 
On them, MIRAGE performs on par with existing solutions as shown in Table 1. Updates for MME and SEED-Bench are also reflected in Table 1. For CHAIR [1], noted as a metric for hallucination detection, we have included an advanced hallucination benchmark POPE in Table 1 of the submission.\\n\\n[1] Rohrbach, Anna, et al. \\\"Object hallucination in image captioning.\\\" EMNLP 2018.\\n \\n**[Q2] Analyses on Failure Cases**\\n\\nWe have added an analysis of failure cases for Gemini v1.5 and GPT-4o in Figure C.2 of the updated paper. In summary, these models tend to struggle with small and non-salient objects due to the lack of a filtering module. \\n \\n**[Q3] Code Licensing Issues**\\n\\nBoth the MIT and Apache 2.0 licenses are permissive and compatible with one another (i.e. neither is a copy-left license or restricts wide release). We plan to release our code under the MIT license and retain Apache 2.0 for any code originally licensed by LLaVA. The LLaVA LICENSE file was included in our anonymous repository to comply with Apache 2.0. We appreciate the reviewer's concern and are open to discussing any specific issues further.\"}", "{\"summary\": \"The authors present Visual Haystacks (VHs), a new vision-centric benchmark designed to assess the performance of Large Multimodal Models (LMMs) in the multi-image question answering (QA) task. 
In addition, the authors propose a new visual-RAG framework, MIRAGE, to enhance the task performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Novel Multi-Image QA Benchmark: The authors introduce an interesting multi-image QA benchmark, Visual Haystacks, designed around a vision-centric \\\"needle-in-a-haystack\\\" scenario, providing a fresh and challenging setting for LMM evaluation.\", \"Comprehensive Model Evaluation: The paper conducts a thorough evaluation of LMMs on the VHs benchmark, uncovering important insights into current models, such as vulnerability to visual distractors, challenges with multi-image understanding, and tendencies toward positional visual bias.\", \"Novel Visual RAG Framework: The authors introduce a novel visual RAG framework that combines a compressor and a retriever. The compressor efficiently processes up to 10,000 images on a single 40GB A100 GPU, while the retriever identifies the top-k most relevant images for a given question, enhancing the framework\\u2019s scalability and efficiency.\"], \"weaknesses\": \"* Limited Object Diversity: The authors constructed the VHs benchmark using objects from the COCO dataset, which contains only 80 object categories. This limited selection may restrict the diversity and comprehensiveness of the benchmark, potentially affecting its ability to evaluate models across a broader range of visual scenarios.\\n\\n* Restricted Question Diversity: The authors appear to rely on a few simple templates to generate questions, which may restrict the variety of question types in the benchmark.\\n\\n* More like Object Detection than QA Reasoning: Many questions in the benchmark (e.g., \\\"For the image with a truck, is there a dog?\\\") seem to primarily assess the model\\u2019s object detection abilities rather than its visual QA reasoning skills. 
It is questionable whether the benchmark requires advanced visual QA reasoning skills from the models.\\n\\n* Missing Related Work: The paper does not reference several recent multi-image QA benchmarks, for example: \\n 1. CompBench: A Comparative Reasoning Benchmark for Multimodal LLMs\\n 2. MANTIS: Interleaved Multi-Image Instruction Tuning\\n 3. MUIRBENCH: A Comprehensive Benchmark for Robust Multi-Image Understanding. \\n\\nAdditionally, a similar multi-image retrieval approach was introduced in \\\"ColPali: Efficient Document Retrieval with Vision Language Models\\\", but this work was also not cited.\", \"questions\": [\"Please see the weaknesses. In addition,\", \"How many templates were used to generate questions?\", \"What advantages does the VHs benchmark offer compared to recent multi-image QA benchmarks?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The submission introduces a new benchmark \\\"Visual Haystacks\\\" for measuring the quality of large multimodal models (LMMs) at the task of multi-image reasoning. Along with the benchmark dataset, it also introduces a RAG framework (MIRAGE) that can process a magnitude larger number of images compared to prior work. This uses a retrieval-based relevance filter as well as compression of image features to fit into the context length of the LMMs.\\n\\nAfter the initial round of reviews, this submission received scores of 6, 6, 6. The reviewers did not find sufficient reason to reject this submission during the discussion, and the consensus remained positive. 
The AC recommends acceptance, and requests the authors to use the constructive feedback from reviewers to update the submission.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal, reviewers raised concerns about the simplicity of the proposed benchmark - mostly consisting of binary answers and templated questions, as well as lack of diversity in concepts covered. The reviewers were satisfied with the explanation provided by the authors - that using simple questions helped remove confounding factors introduced by the need to understand complex questions, and focused purely on visual reasoning. Further, this benchmark focuses on common objects instead of exploring the long tail of visual concepts.\\nDuring the discussion, the reviewers did not feel that the shortcomings of the submission merited rejection.\"}", "{\"summary\": \"This paper addresses the limitations of Large Multimodal Models (LMMs) in multi-image question answering, where handling large visual contexts does not ensure effective retrieval and reasoning across images. Current benchmarks reveal biases and challenges in MIQA, such as poor cross-image reasoning and sensitivity to information placement. To overcome these, the authors propose \\\"Visual Haystacks (VHs),\\\" a vision-centric benchmark that tests retrieval and reasoning over multiple images, highlighting models' struggles with visual distractors and multi-image reasoning. They also introduce MIRAGE, an open-source Multi-Image Retrieval Augmented Generation framework capable of handling up to 10,000 images on a single GPU, achieving significant improvements over existing models and setting new standards in MIQA benchmarks like RetVQA. 
Key contributions include VHs, systematic LMM evaluation, and MIRAGE's scalable MIQA capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"I generally feel the direction of designing a meaningful Visual Haystack benchmark for evaluating VLMs is important to our community.\", \"Some interesting points are discovered when evaluating models on the proposed benchmark. Since a random guess could achieve 50% accuracy in the proposed benchmark, the performance of some open-sourced VLMs drops significantly even when the Haystack size is very small. However, those models maintain high scores on some public evaluation datasets.\", \"Some detailed experiments are conducted, such as needle position and running time.\", \"The proposed benchmark is made publicly available under the MIT license, which is good for the community.\"], \"weaknesses\": [\"Benchmark construction is still mainly centered around recognition tasks, based on the benchmark design principles listed in Lines 129~138. Basically, it requires strong recognition across all the input images, rather than true visual reasoning.\", \"Based on Figures 2 and 3, certain models, such as Gemini, GPT and the proposed MIRAGE, consistently perform better on the proposed multi-needle challenges compared to single-needle tasks. However, the multi-needle challenges are intentionally designed to be more difficult, as they demand additional reasoning across multiple images. Does this indicate a failure in designing the benchmark?\", \"Since the benchmark is constructed to examine recognition, the proposed method contains ad-hoc modules, such as \\\"a retriever module then calculates relevance scores, ensuring that only the most relevant images are passed to the LLM for final reasoning.\\\" Does this design hold for general visual reasoning tasks? 
For example, many of the tested single-image datasets used in this paper do not need this retriever module at all.\", \"questions\": [\"Could you please address the points raised in the above weakness?\", \"Could you please add some randomly sampled failure cases made by GPT or Gemini? Sometimes failure cases can tell more than good cases.\", \"Could you please address the ethics concerns around the code license?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Is it possible for the author to release their code under the MIT license, considering it is derived from the Apache 2.0-licensed LLaVA codebase? Could the author elaborate on this point?\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your response. The authors addressed my concerns, and I will maintain my score.\"}", "{\"summary\": \"This paper introduces a long-context, visual needle-in-a-haystack benchmark which is composed of 1k yes/no questions challenging the model to reason about and find the target object in the images. It is evaluated on both open-source and closed-source LMMs and reveals several critical findings such as susceptibility to visual distractors, difficulty in multi-image reasoning, and a bias in image positioning. It introduces a new baseline called MIRAGE (Multi-Image Retrieval Augmented Generation) for better handling of VH tasks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduced a new visual needle-in-a-haystack benchmark which is composed of 1k yes/no questions.\\n2. Evaluated on both open-source and closed-source models and gained three insightful findings. \\n3. 
Introduced a new baseline called MIRAGE for better handling of visual haystack tasks.\", \"weaknesses\": \"1. The questions are limited to yes/no questions only.\\n2. The question templates are very limited; there seem to be only three. \\n3. MIRAGE has a significant performance drop in 4 out of 7 general VQA tasks. \\n4. The approach of MIRAGE, deselecting unrelated (distracting) images, somehow circumvents the VH challenge, as this challenge lies in how the model can reason over a long context. \\n5. The task of finding a target object still does not seem to simulate a real-world scenario of a long-context visual reasoning task.\", \"questions\": \"1. I'm confused about the difference between the MIRAGE model in Table 1 and the Q-former model in Table 2. Doesn't MIRAGE utilize Q-former?\\n2. See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors\", \"comment\": \"We appreciate the reviewer for highlighting our paper\\u2019s contributions, including the construction of a novel benchmark dataset, the rigorous evaluation of various LMMs, and the development of an innovative visual-RAG framework. Below, we address each of the concerns:\\n\\n**[W1, W2, Q1] Limited Object and Question Diversity**\\n\\nWhile we understand that some of the questions are simple to human users, the primary objective of the Visual Haystacks (VHs) benchmark is to serve as a fundamental unit test for assessing LMMs' multi-image processing capabilities. 
As noted in L150-155 of both the submission paper and the updated paper, we deliberately constructed the dataset using COCO objects and straightforward (one for the single-needle track and two for the multi-needle track) templates to isolate and evaluate core retrieval and reasoning skills without additional confounding variables (such as out of domain data, as it is likely that most models are highly familiar with COCO objects).\\n\\nWhile we believe that expanding the object and question diversity is important (such as adding an open object set, adding questions that reason over actions and attributes, exploring multiple languages, or adding multi-step inference questions), our results presented in this paper demonstrate that models struggle with even this simple test, so we strongly believe that such an expansion is out of scope for this initial effort. We hope to consider such expansions for the following versions of the benchmark. These limitations and future direction are included in Appendix D of the paper, for both the updated version and the submission version.\\n\\n**[W3] More like Object Detection than QA Reasoning**\\n\\nWe would like to clarify that detection\\u2014retrieving key information from a large collection of data\\u2014is a fundamental skill for enabling real-world long-context visual understanding tasks, like searching through photo albums or analyzing medical and satellite imagery in large databases. Just as the NIAH benchmark (easily solvable with regex) serves as a basic unit test for LLMs in long-context NLP tasks, we believe VHs offers a similarly essential unit test for evaluating LMMs in long-context visual understanding. While VHs could theoretically be addressed using an object detector (which we have already included as a baseline in Figure 2 and 3 of the submission), they represent a useful diagnostic tool/unit test for assessing these models as mentioned above. 
This point was briefly mentioned in L30-L50 and Appendix D of our submission and further clarified in Appendix D of the updated paper.\\n\\nAlso, it's important to emphasize that our dataset is not solely focused on detection. It also includes a basic assessment of cross-image reasoning. In Figure 3 (A) of the submission paper and the updated Appendix C.4, we observe that LMMs experience significant performance degradation in the multi-needle track compared to the single-needle track, where models must integrate information across multiple images.\\n\\nAs previously mentioned, VHs is designed to diagnose LMMs on two basic capabilities\\u2014visual retrieval and reasoning. We believe that expanding the scope of the benchmark, while valuable future work, is beyond the scope of this current contribution.\\n\\n**[W4] Missing Related Work**\\n\\nWe thank the reviewer for pointing out these relevant contemporary works. We've adjusted the related work section to add these.\\n\\n**[Q2] The advantages of VHs over existing multi-image benchmarks**\", \"vhs_offers_two_main_advantages_over_existing_multi_image_benchmark_datasets\": \"1. **Simplicity**: VHs is specifically designed to evaluate LMMs\\u2019 capabilities in visual retrieval and basic cross-image reasoning without introducing additional confounding factors, such as out-of-distribution images or complex language reasoning. By maintaining this simplicity, VHs serves as a diagnostic tool for LMMs, where the single-needle track focuses on visual retrieval performance, and the multi-needle track evaluates basic cross-image reasoning. In contrast, existing multi-image benchmarks often aim to address domain-specific or real-world applications, making the questions inherently more complex. While this complexity may reflect harder real-world challenges, it can make diagnosing specific LMM abilities difficult, as answering a single question often requires multiple intermixed capabilities.\\n2. 
**Scale**: Existing multi-image benchmarks like RETVQA include fewer than 30 images per question. The three datasets mentioned by the reviewer contain even fewer images, with fewer than 10 images per question. In comparison, VHs scales up to 10K images per question, far exceeding the size of existing benchmarks. This large-scale setting better mirrors real-world large-scale multi-image QA tasks, like photo album searching or analyzing medical and satellite imagery in large databases.\\n\\nWe have clarified these points in the related work section of the updated paper. Thank you again for raising this issue.\"}" ] }
9Ieq8jQNAl
Reward Learning from Multiple Feedback Types
[ "Yannick Metz", "Andras Geiszl", "Raphaël Baur", "Mennatallah El-Assady" ]
Learning rewards from preference feedback has become an important tool in the alignment of agentic models. Preference-based feedback, often implemented as a binary comparison between multiple completions, is an established method to acquire large-scale human feedback. However, human feedback in other contexts is often much more diverse. Such diverse feedback can better support the goals of a human annotator, and the simultaneous use of multiple sources might be mutually informative for the learning process or carry type-dependent biases for the reward learning process. Despite these potential benefits, learning from different feedback types has yet to be explored extensively. In this paper, we bridge this gap by enabling experimentation and evaluating multi-type feedback in a wide set of environments. We present a process to generate high-quality simulated feedback of six different types. Then, we implement reward models and downstream RL training for all six feedback types. Based on the simulated feedback, we investigate the use of types of feedback across ten RL environments and compare them to pure preference-based baselines. We show empirically that diverse types of feedback can be utilized and lead to strong reward modeling performance. This work is the first strong indicator of the potential of multi-type feedback for RLHF.
[ "Reinforcement Learning", "RLHF", "Machine Learning", "Multi-Type Feedback" ]
Accept (Poster)
https://openreview.net/pdf?id=9Ieq8jQNAl
https://openreview.net/forum?id=9Ieq8jQNAl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zPCSthf1nU", "vo34p4RxLh", "uFgSHVx0lh", "sHEp6oP0h3", "l84ilNeO6h", "invy0XFf3i", "iBkfzGM8IG", "fOByylBeCm", "f3t6ipxjmC", "cddzA9g6Vk", "WSWQhLQO7C", "Sf4r4YcErR", "LhKrvtpQXm", "IAwC63Q6wN", "FpXWEO3xsn", "AwOMlTVZqj", "AlAnJ9PuJV", "9GkFvSd28m", "7bIQjMgQIj", "0W8LrGjAUg" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732445077943, 1730711454648, 1732673839244, 1732444965921, 1732445088700, 1734581469433, 1730610963459, 1732988633302, 1732444935814, 1732776310835, 1730215984424, 1732562368993, 1737524213892, 1732522749605, 1730387556446, 1732607196271, 1733199083784, 1732654337205, 1732444997443, 1732551761515 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_PQUp" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_PQUp" ], [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Area_Chair_Tq4S" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_rXcY" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_vPP3" ], [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_vPP3" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_vPP3" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_rXcY" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_oTK8" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_PQUp" ], [ 
"ICLR.cc/2025/Conference/Submission12768/Reviewer_oTK8" ], [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Authors" ], [ "ICLR.cc/2025/Conference/Submission12768/Reviewer_oTK8" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their comments and helpful suggestions. We have summarized the steps for a revised version in a summary comment. We still want to answer the points raised in this review briefly:\\n\\n\\\"If these reward models are large or computationally expensive.\\\"\\n-> We see our ensemble approach as a proof of concept at this stage; in the future, using multiple heads on top of a shared model (e.g., compared to bootstrapped DQN) might be a practicable solution, and we want to encourage future exploration strongly, we raise the issue of expanding our approach for LLMs\\n\\n\\\"The resulting reward signal cannot incorporate information that is only deducible from considering multiple feedback types at once. For example, some patterns or generalizations may only be apparent when considering all feedback.\\\"\\n-> Can the reviewer kindly clarify this point? 
As multiple reward functions are queried during inference (which might have been trained on the same instances with different feedback), we see the possibility of incorporating different information\\n\\n\\\"If there are only small amounts of one type of feedback, its corresponding reward model might be very inaccurate, and if the uncertainty for the uncertainty-weighting is not well calibrated.\\\"\\n-> This is a very important issue that we want to address with our uncertainty-weighted approach, and we would acknowledge that the models at very early training stages might be very inaccurate due to random initialization; possible strategies might be the use of pre-training or initialization techniques\\n\\nThe paper only considers synthetically generated feedback and does not evaluate/verify with real human feedback.\\n-> We acknowledge this as a weakness of the paper. An extension of our approach by fitting the characteristics/noise levels with human data is an intriguing future expansion of our work, and we have noted this in future work. However, we would argue that our study still provides novel insights beyond the toolkit, as the utility of multi-type feedback has not been shown before on a conceptual or empirical level.\\n\\n\\\"The paper focuses on continuous control tasks and does not test on discrete or contextual bandit environments.\\\"\\n-> We have added Highway-Env as a discrete RL environment and found results to be consistent with previous results\\n\\n\\\"For example, in the language modeling setting, interpreting your demonstrated responses better than random tokens would probably not improve your pre-trained model compared to supervised fine-tuning.\\\"\\n-> The reviewer is absolutely correct that this pattern does not need to hold across more complex environments, although we find it very reliable in our experiments. 
This kind of adaptation is possible with our framework, and we would like to encourage future work on this.\"}", "{\"summary\": \"This paper thoroughly investigates learning from different types of human feedback, including defining various types of human feedback, detailing how to collect synthetic feedback for research purposes, and training reward models based on synthetic feedback. It also analyzes the performance of joint training with multiple feedback types and examines the effectiveness of different feedback types.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Studying different types of human feedback is extremely important and can promote research development in the RLHF community.\\n2. Provides methods for generating various types of synthetic feedback for future RLHF research. \\n3. For the first time, it proposes training with multiple feedback sources while considering human noise. \\n4. The various analyses of reward models in the main paper and the appendix are comprehensive.\", \"weaknesses\": \"1. The workload of this paper is substantial, covering many key points, which results in relatively preliminary research on each type of feedback. The characteristics of different feedback types are not well demonstrated. Can you describe several key feedback types or explain which feedback types are more suitable for specific scenarios?\\n2. The first half of the paper is well-written, but the experimental organization in the latter half is chaotic, making it difficult to draw clear conclusions.\\n\\nOverall, I believe this paper is highly valuable, but the current version appears somewhat hasty.\", \"questions\": \"1. Figure 3 obscures the text.\\n2. Some figures are not analyzed, and there are more important results in the appendix, requiring a restructuring of the paper. \\n3. Mean and standard deviation are not provided. \\n4. Why is the correlation of some reward functions so low? 
\\n5. Is the method for training reward models online? (i.e., continuously updated online with new samples)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response, good job!\"}", "{\"comment\": \"We thank the reviewer for their comments and helpful suggestions. We have summarized the steps for a revised version in a summary comment. We still want to answer the points raised in this review briefly:\\n\\n\\u201cLimited Analysis of Feedback Types and Noise Effects\\u201d\\n-> We have tried to improve the investigation of noise effects on the reward functions (Section 4.5) and now draw more robust conclusions: We find that reward functions show different drop-off behavior when increasing the noise level\\n\\n\\u201cReward Ensemble Approach is Underdeveloped\\u201d\\n-> We agree that this is not a fully developed method, and have made it more evident in our phrasing that the ensemble approach should be treated as a proof of concept; however, we have both extended the details on the methods and provided additional results\\n\\n\\u201cInsufficient Details as a Software Library, What are the hardware requirements and other support needed, along with the expected training time and latency based on these specifications?\\u201d\\n-> We have added some details on hardware requirements in the reproducibility statement and plan to add some additional code details for the final revision\\n\\n\\u201cTypos and Formatting Issues\\u201d\\n-> We have tried to address these issues in the revision\\n\\n\\u201cHow diverse is the distribution of trajectories in line 197? 
Can you provide quantification of the exploration?\\u201d\\n-> We find that this method provides good coverage of the state space and will provide a figure in the appendix comparing it to random exploration\\n\\n\\u201cHow can you ensure it is fair to compare different feedback types given that noise is generated separately for each type?\\u201d\\n-> We tried to harmonize the noise generation scheme for each feedback type to ensure reproducibility (i.e., not using naive preference switching, which is not compatible with scalar reward functions), however, we acknowledge that the noise generation scheme for demonstrative feedback is indeed separate from the other feedback types\\n\\n\\u201cAre there any results available for environments beyond Mujoco?\\u201d\\n-> We have provided additional results for Highway-Env and MetaWorld, which are broadly in line with existing experiments\"}", "{\"comment\": \"\\\"The feedback methods, except one, all use the same fixed segment length. This may be sub-optimal if different feedback methods work better with different-sized segments.\\\"\\n-> We will add this to the limitations and suggestions for future work and choose this segment length mainly based on the justification given in previous work that longer segments are easier to label for humans\\n\\n\\\"The reward model correlation analysis is only done on one environment and is averaged over the whole trajectory.\\\"\\n-> We thank the reviewer for this suggestion; in response, we have added more details and a temporal analysis of reward functions\\n\\n\\\"If there are not multiple random seeds, then very little of the results presented can be assumed to be statistically significant.\\\"\\n-> We acknowledge that we should have provided these in the initial version of the main paper (as all experiments were run over multiple seeds). We have just provided them in the appendix. 
We have added error bars/confidence intervals back to the reworked figures in the main paper.\\n\\n\\\"In Figure 3, not showing the score for learning from the ground truth reward as a function of environment time steps limits comparisons between learning from the feedback modes vs. the ground truth itself.\\\"\\n\\n-> Again, this was an oversight on our part. We indicate the expert performance based on the ground-truth reward function, but it was difficult to see within the figures. We have updated the figures to improve visibility.\\n\\n\\\"The reward models are pre-trained before being used to train a policy. This is very atypical for RLHF and limits the conclusions that can be drawn from the results.\\\"\\n-> While not typical, this approach has been used in previous work and simplified the analysis space. However, we acknowledge this as a weakness and plan to expand this in future work.\\n\\n\\\"It is not clear how amounts of each type of feedback are controlled to be comparable. It seems the amount of each type available is very dependent and sensitive to parameters of the synthetic generation process.\\\"\\n-> Can the reviewer kindly clarify this point? We try to exactly match the number of queries/feedback instances for each type of feedback (i.e., 10,000 preference pairs, demonstration segments, ratings, etc. for our experiments) and average our results over five datasets to control for the composition of feedback datasets\\n\\n\\\"For uncertainty-weighted ensembles, the uncertainty of an ensemble trained on feedback that only constrains reward differences (e.g., comparative) is not well-calibrated.\\\"\\n-> We will note this as a limitation\\n\\n\\\"I do not believe some of the claims in the discussion and conclusion are well-supported.\\\"\\n-> We tried to sharpen our claims based on the additional results in Section 4\\n\\nErrata\\n-> We thank the reviewer for pointing out these errors. 
We have carefully addressed them in the revision\"}
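To make the calibration concern from this exchange concrete, here is a minimal sketch of an uncertainty-weighted (inverse-variance) combination of per-feedback-type reward ensembles. All names and shapes are illustrative assumptions, not the library's actual API:

```python
import numpy as np


def combine_rewards(type_preds):
    """Uncertainty-weighted combination of per-feedback-type reward ensembles.

    type_preds maps a feedback type to an array of shape (n_members,)
    holding each ensemble member's reward prediction for one state-action
    pair. Each type's ensemble mean is weighted by the inverse of its
    ensemble variance, so high-disagreement types contribute less.
    Caveat (the reviewer's point): for feedback that only constrains
    reward *differences* (e.g., comparative), members can agree up to an
    arbitrary offset/scale, so this variance is not a calibrated
    uncertainty estimate.
    """
    means = np.array([preds.mean() for preds in type_preds.values()])
    inv_vars = np.array([1.0 / (preds.var() + 1e-8) for preds in type_preds.values()])
    weights = inv_vars / inv_vars.sum()  # normalize to a convex combination
    return float(weights @ means)


# A low-disagreement ensemble dominates a high-disagreement one:
preds = {
    "comparative": np.array([0.9, 1.1, 1.0]),    # members roughly agree
    "demonstrative": np.array([-1.0, 2.0, 0.5]),  # members disagree strongly
}
combined = combine_rewards(preds)  # close to 1.0, the comparative mean
```

This is one plausible reading of "uncertainty-weighted"; the paper's actual scheme may differ in how member predictions are normalized across feedback types.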
However, I agree with other reviewers that this represents an important direction for future work rather than a critical flaw, given the significant technical challenges and resource costs involved in large-scale human feedback collection.\\nTo maximize impact, I encourage the authors to **fulfill their commitment to open-source the codebase** with comprehensive documentation.\\nThe current simulation-based framework makes meaningful contributions by establishing foundational tools and insights that can guide future research incorporating human feedback.\"}", "{\"summary\": \"The paper develops a lightweight software library for generating six feedback types\\u2014rating, comparative, demonstration, corrective, descriptive, and descriptive preference\\u2014in the field of Reinforcement Learning from Human Feedback (RLHF). The library is compatible with established RL frameworks, including Gymnasium, Stable Baselines, and Imitation.\\n\\nThe paper introduces noise into the synthetic feedback. Experimental results are presented on basic Gym Mujoco locomotion tasks. In these experiments, the authors compare the learned reward functions and the agent\\u2019s performance across different feedback types, both with and without noise. Additionally, the authors present a joint reward feedback method, i.e., an ensemble of rewards, and compare it to single feedback type baselines. This approach performs well in the HalfCheetah-v5 environment but is less effective in Walker2d-v5.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"With the development of RLHF, there has been an exponential growth of research, especially in interdisciplinary applications, such as large language models (LLMs). I appreciate that it is important to have a standard library of feedback types commonly found in established RL frameworks. This is undoubtedly helpful for both new and experienced researchers in this rapidly evolving field. 
The choice of RL methods (e.g., PPO) and environments used in the paper are standard and widely accepted in the literature.\\n\\nFurthermore, by encompassing multiple feedback types, this library has the potential to standardize RLHF research, making studies more comparable and reproducible across the field. Unlike prior work that often focuses on single feedback types or limited noise modeling, this toolkit provides a broader, more robust framework, which could foster progress in handling realistic feedback conditions in reinforcement learning.\", \"weaknesses\": \"1) Limited Analysis of Feedback Types and Noise Effects: The paper provides only a shallow analysis of the results across different feedback types and the impact of adding noise. The authors introduce Gaussian noise as a way to simulate realistic inconsistencies in human feedback, which is a valid approach. However, they assume that the added noise will uniformly challenge the agent\\u2019s learning process, yet they provide limited empirical support to demonstrate the nuanced effects of this noise. In reinforcement learning, robustness to noise is complex and context-dependent; simply adding noise doesn\\u2019t necessarily simulate real-world variability comprehensively. This is particularly relevant in cases where the agent encounters unfamiliar scenarios, as it may lack the generalization needed to adapt successfully, which is not addressed here. It would be beneficial if the authors quantified the noise across feedback types and analyzed how this type-specific noise impacts learning stability and performance. Without such quantification, comparing the robustness of different feedback types remains somewhat speculative and may not provide fair insights. As highlighted by Casper et al. 
(2023), inconsistencies in human feedback are a fundamental limitation in reinforcement learning from human feedback, underscoring the need for a systematic approach to evaluating robustness in such noisy environments.\", \"reference\": \"Casper, Stephen, et al. \\\"Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback.\\\" Transactions on Machine Learning Research, 2023.\\n\\n2) Reward Ensemble Approach is Underdeveloped: The paper attempts to improve reward learning through a reward ensemble (combining different feedback types), which is an intriguing direction. However, the ensemble approach lacks depth and does not yield substantial improvements. The authors themselves acknowledge that rewards from different feedback types cannot be averaged simply, and yet the methods presented for combining them are fairly straightforward. Given the challenging nature of ensemble reward learning, the results are not strong enough to warrant this as a core contribution. As such, this component appears more exploratory than foundational, and it could benefit from either more sophisticated ensemble techniques or a more extensive evaluation to validate its potential utility.\\n\\n3) Insufficient Details as a Software Library: As a contribution to reinforcement learning tools, the paper lacks critical details expected from a software library description. While the library supports diverse RL environments beyond the basic Gym Mujoco tasks, only these tasks are presented in the main text. Further, there is little mention of essential details such as hardware requirements, training time, or memory usage, which are crucial for reproducibility and practical use by researchers. 
Although some hyper-parameter details are provided in the appendix, this is insufficient for a full understanding of the library\\u2019s operational requirements and expected performance.\\n\\n4) Typos and Formatting Issues\\n\\n- Figure 4b title should read \\u201cRL Episode Returns.\\u201d\\n\\n- Incomplete sentences in the paragraph near line 380.\\n\\n- Hidden text in line 356.\", \"questions\": \"1) How diverse is the distribution of trajectories in line 197? Can you provide quantification of the exploration?\\n\\n2) Is there any analysis for Figure 2, specifically why demonstrative and corrective feedback appear significantly different from other methods?\\n\\n3) How can you ensure it is fair to compare different feedback types given that noise is generated separately for each type?\\n\\n4) What are the hardware requirements and other support needed, along with the expected training time and latency based on these specifications?\\n\\n5) Are there any results available for environments beyond Mujoco?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No concerns.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the additional figures and appendices, they are quite interesting.\\n\\nBased on this, and additional changes made, I have raised my score.\"}", "{\"comment\": \"We thank the reviewer for their comments and helpful suggestions. We have summarized the steps for a revised version in a summary comment. 
We still want to answer the points raised in this review briefly:\\n\\n\\u201cCan you describe several key feedback types or explain which are more suitable for specific scenarios?\\u201d\\n-> We have significantly expanded on the analysis of different feedback types in section 4, including a more detailed analysis of the relationship between reward model performance and environment and effectiveness for different scenarios.\\n\\n\\u201cThe first half of the paper is well-written, but the experimental organization in the latter half is chaotic, making it difficult to draw clear conclusions.\\u201d\\n-> We have reworked the second half to improve the readability and significance of the results\\n\\n\\u201cFigure 3 obscures the text.\\u201d\\n-> We have tried to resolve all layout and text issues\\n\\n\\u201cSome figures are not analyzed, and more important results are in the appendix, requiring a paper restructuring.\\u201d\\n-> We have reworked the figures, in particular summarizing more key results without relying on the appendix for critical insights\\n\\n\\u201cMean and standard deviation are not provided.\\u201d\\n-> We have now also added these to the main paper figures, although we want to note that they were consistently reported in the appendix and previously omitted from the main figures for visual clarity\\n\\n\\u201cWhy is the correlation of some reward functions so low?\\u201d\\n-> We have tried further investigating this in the paper (Figure 4 and Section 4.4). 
We conclude that low correlation is especially prevalent for corrective and demonstrative feedback, which rely on expert policies, i.e., are more independent from the ground-truth reward function; however, intriguingly, even very low-correlated reward functions can be very effective for training RL agents.\\n\\n\\u201cIs the method for training reward models online?\\u201d\\n-> Our library has both capabilities; however, we have not reported any online training results in the paper, as it would add another layer of complexity. We have therefore removed references to online training to avoid confusion and now only refer to pre-training of reward functions\"}", "{\"comment\": \"We appreciate the reviewer for sharing their honest concerns, as this has been very helpful in formulating the limitations and future work section of our manuscript.\", \"we_would_like_to_give_some_final_points_of_consideration\": [\"We try to openly acknowledge this limitation in our work, and the approach is well suited to integrate data from human annotations in the future, as well as enable experimentation in this area.\", \"We have stated that the feedback dataset generation process is an important step which has an influence on downstream results. As a remedy, we have put considerable effort in the documentation of the feedback dataset (See Appendix B). We think that a transparent view of the dataset composition contributes to reproducible research. Existing work has sometimes struggled to provide these insights (e.g., distribution of prompts/queries/feedback values, etc.). We would like to contribute to setting a standard by clearly communicating the process, underlying assumptions, and resulting datasets, and provide tools for other researchers. 
We have tried to strengthen this contribution in the final revision by (1) Addressing this point more directly in the manuscript, and highlighting the tools for dataset analysis as core elements of the library, and (2) Extending the existing Appendix B, e.g. to document the effect of introduced noise.\"]}", "{\"summary\": \"This paper explores reward learning utilising multiple different types of feedback, covering evaluative, instructional, and descriptive forms.\\n\\nThey propose a method on how to generate these forms of feedback synthetically, how to learn from these feedback methods in isolation, and also how to combine these reward signals to learn from many types of feedback simultaneously.\\n\\nAdditionally, they investigate the different properties of these feedback methods such as their correlation, and ability to provide a reward signal for learning an RL policy.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Research is well motivated.\\n\\nExisting work is nicely leveraged and inter-twinned to form basis of paper without re-inventing the wheel.\\n\\nPaper is reasonably well written, with ideas are clearly communicated.\\n\\nThe proposed method to generate synthetic feedback seems good.\\n\\nThe proposed method to learn from multiple feedback types, namely training separate reward models and combining their scores, is clean, simple, and easy to understand.\\n\\nThe noise model used seems reasonable.\\n\\nPaper considers alternative formulations of human feedback, namely regret-based.\\n\\nExploring the reward model correlations is good analysis.\\n\\nFor the most part, the paper tests on many environments with many noise levels, giving strong evidence for some of their claims.\", \"weaknesses\": \"Learning a reward model for each feedback type presents some limitations not considered by the authors.\\n* If these reward models are large or computationally expensive (e.g. 
in a foundation model finetuning setting where each one may be foundation model sized), then having one for every feedback type might not be very scalable or practical.\\n* The resulting reward signal cannot incorporate information only deducible from considering multiple feedback types at once.\\nFor example, there may be some patterns or generalisations only apparent when considering all feedback.\\n* If there are only small amounts of one type of feedback, its corresponding reward model might be very inaccurate, and if the uncertainty for the uncertainty-weighting is not well calibrated, it could lead to that reward model making the overall reward signal worse.\\n\\nThe paper only considers synthetically generated feedback and does not evaluate / verify with real human feedback.\\n\\nThe paper focuses on continuous control tasks, and does not test on any discrete or contextual bandit environments.\\nThis is important to test as this more closely corresponds to the LLM fine-tuning setting, a key application of RLHF and related methods.\\n\\nInterpreting the demonstrations as being preferred only to random trajectories seriously limits the information extracted from them compared to standard demonstration learning approaches like MaxEnt.\\nDespite the authors' claims suggesting this was \\\"stronger than sampling against other sub-optimal rollouts\\\", the amount of interesting state space explored by random policies decreases exponentially as the environment gets larger and more complicated.\\nFor example, in the language modelling setting, interpreting your demonstrated responses as better than random tokens would probably not improve your pre-trained model compared to supervised fine-tuning.\\n\\nThe feedback methods, except one, all use the same fixed segment length.\\nThis may be sub-optimal if different feedback methods work better with different-sized segments.\\n\\nThe reward model correlation analysis is only done on one environment and is averaged over 
the whole trajectory.\\nIt's not clear how consistent these correlations are across different environments, and it may be interesting to see how these correlations change over the course of a typical trajectory (E.g. maybe some are well correlated to begin with, and then become de-correlated).\\n\\nGraphs have no error bars, and it's not clear how many random seeds have been averaged over in the RL experiments.\\nIf there are not multiple random seeds, then very little of the results presented can be assumed to be statistically significant.\\n\\nThe reward models are pre-trained before being used to train a policy.\\nThis is very atypical for RLHF and limits the conclusions that can be drawn from the results.\\n\\nIn figure 3, not showing the score for learning from the ground truth reward as a function of environment time steps limits comparisons between learning from the feedback modes vs the ground truth itself.\\n\\nIt's not clear how amounts of each type of feedback are controlled to be comparable.\\nIt seems the amount of each type available is very dependent and sensitive to parameters of the synthetic generation process.\\n\\nFor uncertainty-weighted ensembles, the uncertainty of an ensemble that has been trained on feedback that only constrains reward differences (e.g. 
comparative) is not well-calibrated.\\n\\nI do not believe some of the claims in the discussion and conclusion are well-supported.\\n* On point (2), line 495/496, combining rewards is only tested on two environments and only performs well in one of them.\\n* Line 523/524, the comparison of these characteristics has been very limited.\\n* Line 526 to 528, learning from multiple feedback types being \\\"very effective\\\" is not supported by the evidence presented in figure 6.\\n\\n## Errata:\\n* Figure 2 caption and subcaptions disagree on what environment the results are from (Swimmer-v5 vs HalfCheetah(-v5)).\\n* Figure 3 partially covers some text\\n* The paragraph at the start of section 4.4 conflicts with itself: \\\"instead of continuously adapting ... is continuously updated\\\"\\n* Line 380, sentence abruptly ends and is unfinished\\n* The axes and legends for figures 2,3,4, and 5 (especially 3), are too small to clearly read, especially on a printed copy of the paper.\\n* The lighter shaded lines in figures 4, 5, and 6 are hard to see.\\n* There is no reference to figure 4 in the main text of the paper, nor analysis of what it shows.\\n* Line 454, \\\"As stated before, each individual reward model ... is an ensemble in itself\\\". This does not appear to have been stated before.\\n* Line 467, the text refers to the \\\"Hopper-v5\\\" environment, but the figure under discussion, 6, only contains the HalfCheetahv5 and Walker2d-v5 environments.\", \"questions\": \"Does the method of reward modelling proposed work to learn both an RL policy and a reward model *without pre-training the reward model*?\\n\\nWhen utilising demonstrative and preference feedback, a common method is training first on the demonstrations, e.g. 
using MaxEnt or SFT / Behavioural cloning, and then fine-tuning on the preferences with RLHF.\\nIt would be interesting to see a comparison to this baseline method.\\n\\nHow do reward function correlations vary across environments and across trajectories?\\n\\nIt would be good to run multiple seeds and plot mean and standard error/deviation in the figures.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the clarification and the updated paper. The changes made largely address my concerns and improve the strength of the paper, thus I have altered my rating.\", \"to_clarify_some_things_from_my_review\": \"\\\"\\\"The resulting reward signal cannot incorporate information that is only deducible from considering multiple feedback types at once. For example, some patterns or generalizations may only be apparent when considering all feedback.\\\" -> Can the reviewer kindly clarify this point?\\\"\\nWhat was meant here is that there may be some aspect of reward that might only be deducible by a single reward model learning from multiple feedback types simultaneously. As a slightly contrived toy example, consider a 2D real-valued state space with two reward functions, r1 and r2, both trained on a different preference. r1 sees ([1, 0] > [-1, 0]) and r2 sees ([0, 1] > [0, -1]). r1 implements r1([x, y]) = 1 if x > 0 else 0, and r2 implements r2([x, y]) = 1 if y > 0 else 0. Now consider r3 which sees both preferences and learns r3([x,y]) = 1 if x+y > 0 else 0. (r1+r2)/2 != r3. Whilst the learnt rewards share similarities, they are somewhat different due to generalising differently. This is only a minor point, but one I thought worth pointing out. 
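The toy example just described is easy to verify numerically. A quick sketch, reproducing r1, r2, and r3 exactly as defined in the comment above:

```python
def r1(state):  # trained on ([1, 0] > [-1, 0]); learns "x > 0 is good"
    x, _y = state
    return 1.0 if x > 0 else 0.0


def r2(state):  # trained on ([0, 1] > [0, -1]); learns "y > 0 is good"
    _x, y = state
    return 1.0 if y > 0 else 0.0


def r3(state):  # trained on both preferences jointly; learns "x + y > 0"
    x, y = state
    return 1.0 if x + y > 0 else 0.0


# On [1, -0.5], the average of the two separately trained models
# disagrees with the jointly trained model:
s = (1.0, -0.5)
averaged = (r1(s) + r2(s)) / 2  # 0.5
joint = r3(s)                   # 1.0
assert averaged != joint
```

This confirms the point that per-type reward models combined post hoc can generalise differently from a single model trained on all feedback at once.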
Note the different preferences are stand-ins for different types of reward; I'm aware that in practice for this example a single preference model would see both preferences.\\n\\n\\\"\\\"It is not clear how amounts of each type of feedback are controlled to be comparable. It seems the amount of each type available is very dependent and sensitive to parameters of the synthetic generation process.\\\" -> Can the reviewer kindly clarify this point?\\\"\\nYour explanation clarifies what I was trying to get at, thank you.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for the detailed and thoughtful revisions. The additional experiments and clarifications comprehensively address my concerns, particularly regarding the analysis of noise effects, broader environments, and reproducibility details. These improvements significantly enhance the manuscript\\u2019s clarity, impact, and applicability, and I appreciate the authors\\u2019 efforts in strengthening the work. I have raised my score accordingly.\"}", "{\"summary\": \"The paper presents a benchmark/toolkit for simulating multiple types of feedback in the context of learning preferences from human feedback, and analyzes one approach to combining multiple sources of feedback (ensemble of rewards). The different sources of feedback include ratings (scalar scores per trajectory), binary preference labels, demonstrations, corrections, descriptions of high value state-action pairs, and descriptions of high value features. The paper describes in detail how each source of feedback is simulated using an environment's ground truth reward as the human proxy. The bulk of experiments focus on comparing the reward models learned from the different sources of feedback by looking at correlation between learned reward functions and downstream policy performance according to the ground truth reward. 
Policy performance and reward model accuracy are evaluated in the face of feedback noise, with how noise is applied varying between the different sources of feedback. For evaluating how to combine the multiple sources of feedback, the authors present two approaches to combining an ensemble of reward models, where different mini reward model ensembles are trained from different sources of feedback. The primary takeaway is that there is no one best source of feedback across tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper addresses an important problem, which is providing a toolkit/benchmark for people to use to research learning from different types of feedback and how to combine them. The approach is similar to what has already been used to learn from binary preference labels alone, which makes it an easy toolkit for people to pick up and understand how it works.\", \"The experiments and results demonstrate that accounting for multiple sources of feedback is neither straightforward nor trivial, and work is needed to understand the strengths and benefits of each.\", \"The proposed toolkit does not rely on actual humans in the loop, making it something many people can use easily for initial proofs of concept.\", \"The toolkit provides mechanisms to have different sources of noise, which is crucial as we know human feedback is noisy. The code is not yet provided, but from the description in the paper, it seems like running experiments with different amounts of noise should be fairly straightforward.\"], \"weaknesses\": [\"**High-level overview:**\", \"There are two main weaknesses of this paper discussed at a high level here, but more details below. 
The work is valuable and necessary, but the needed level of rigor isn't there yet.\", \"(1) the toolkit is not validated against any studies involving humans; therefore, it is impossible to know if the conclusions drawn in the paper reflect characteristics of the feedback or of the implementation\", \"(2) the text in the paper, especially in the second half describing experiments and results, contradicts itself and the presented results, making it difficult to draw conclusions and have takeaways or learnings.\", \"**The details:**\", \"The way the paper is written suggests that the multi-feedback toolkit and the method for combining the multiple sources of feedback are both equal contributions. I would push back that the main contribution is the toolkit and the proposed method is rather trivial with no true baselines to show its benefit or contribution. It would have been better to limit the feedback combination methods as a tool for proof of concept showing how the toolkit can be used.\", \"The authors draw conclusions about the usefulness of different types of feedback (e.g. Figure 2 reward model correlation and demonstrative and corrective having the worst correlation with the ground truth reward). However, the absence of a study to validate the toolkit against human feedback makes it impossible to validate if this is a function of the feedback, or the choices they have made in how the feedback is implemented.\", \"Only one environment is evaluated; therefore, it is not possible to assess how general the proposed toolkit and feedback simulation method are. At least one other should be included. 
MetaWorld is popular for preference learning and would be a natural fit as it is included in [BPref] (https://github.com/rll-research/BPref).\", \"There are multiple places in the paper where the authors seem to contradict either themselves or the presented results:\", \"at the start of Section 4.4, the first sentence sounds like there was no online reward model adaptation, but then the last two sentences of that first paragraph make it sound like there was: \\\"...we utilize simple pre-trained reward models instead of continuously adapting the models...\\\" versus \\\"The reward model is continuously updated...\\\"\", \"the text in the main body does not mention that the reward model correlations for other tasks beyond half cheetah are weaker, with different patterns among the feedback types. The full nuance of the results is not represented, and instead strong language and conclusions are drawn about a single task.\", \"the statement on lines 325 - 326 \\\"...all feedback types individually can learn reward functions that closely match the ground-truth reward...\\\" is not true for all feedback types according to Figure 2 and those in the appendix. In Figure 2, demonstrative and corrective have a correlation of 0.61 and 0.5, which is a medium match. In Figure 25, these feedback types have correlations as low as 0.18.\", \"on lines 410 - 411, it is stated that \\\"...no feedback type is obviously more or less robust to noise.\\\" however, then looking at Figure 4(b) and Figure 5(b), the performance gap between learning curves across different amounts of noise varies across feedback types, which suggests that some are more sensitive than others. 
To back up the claim, \\\"obviously\\\" needs to be quantified to make clear the threshold for what qualifies as \\\"more or less robust\\\".\", \"Section 5 opens by stating that different feedback types struggle in different scenarios, but all of the discussion in Section 4 talks about how there is no clear difference between the different sources of feedback.\", \"The first half of the paper motivating and describing the multiple feedback toolkit is very well written, methodical, and easy to follow. However, there is a switch part way through the paper when it transitions to talking about experiments with the toolkit. Here the presentation and writing change drastically, with things like:\", \"Figure 3 overlapping the text of the main body\", \"an incomplete sentence (page 8 line 380)\", \"what seems to be misconnected analysis and results figures (e.g. figures 4 and 5 - the text seems to talk about figure 4, but references figure 5 and figure 4 is not mentioned in the paper as far as I can tell)\", \"mislabelled and mistitled figures where the captions disagree with the figure axis/title (e.g. Figure 2 - HalfCheetah-v5 versus Swimmer-v5; Figure 5 - swimmer v5 and walker2d-v5 versus half cheetah-v5; and figure 4 - \\\"reward model validation curves\\\", but it looks like policy returns)\", \"Figure 4 left plot, the y-axes are on different scales, making it tricky to compare across feedback sources.\", \"using both a : and - on page 8 line 429\", \"it is stated that results are over 5 random seeds, but there are no standard deviation results\", \"there are numerous places with typos (e.g. \\\"as the be considered\\\" line 256) that need to be addressed.\", \"not all lines in Figure 3 are labelled or described.\"], \"questions\": [\"In Figure 5, the results are described as averaged over \\\"3 feedback datasets\\\", what are the sources of the different feedback datasets? Are these different random seeds? 
Earlier it was stated that 5 random seeds were used.\", \"How well do the feedback ensemble rewards correlate with the ground truth reward function?\", \"In Section 4.2 you talk about sampling against random behavior being stronger than against sub-optimal rollouts for demonstrative feedback. This is an interesting conclusion and I would have expected the opposite. Why do you think this is the case?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' outstanding efforts during the rebuttal. It is gratifying to see that the quality of the revised manuscript has greatly improved, but there are still some unclear areas. I hope the authors can continue to check and optimize the manuscript in subsequent versions. Additionally, I have reviewed the opinions of other reviewers and believe that most of their concerns have also been addressed. I appreciate the authors' efforts on this work, and I have raised the score from 6 to 8 with confidence 5.\\n\\nNonetheless, I still have a few minor questions: 1. Is the legend missing in Figure 2? I cannot understand this figure. 2. Considering this is a systematic paper, will all the code be open-sourced in the future? This will greatly enhance the impact and contribution of the work. 3. Have the authors attempted real human annotation? Annotating so much feedback could be costly. I suggest adding a more in-depth discussion on AI feedback and fine-grained feedback in future work, such as [1][2][3][4]. \\n\\n[1] Liu J, Yuan Y, Hao J, et al. Enhancing Robotic Manipulation with AI Feedback from Multimodal Large Language Models[J]. arXiv preprint arXiv:2402.14245, 2024.\\n[2] Wang Y, Sun Z, Zhang J, et al. Rl-vlm-f: Reinforcement learning from vision language foundation model feedback[J]. arXiv preprint arXiv:2402.03681, 2024.\\n[3] Dong Z, Yuan Y, Hao J, et al. 
Aligndiff: Aligning diverse human preferences via behavior-customisable diffusion model[J]. arXiv preprint arXiv:2310.02054, 2023.\\n[4] Lee H, Phatale S, Mansoor H, et al. Rlaif: Scaling reinforcement learning from human feedback with ai feedback[J]. 2023.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for your response and the additional details included in the paper about the dataset generation process. However, the extra details do not address my concerns about the size of the sim-to-real gap between the conclusions people would draw using this toolkit and those that hold up in the \\u201creal world\\u201d. For example, conclusions about the value and benefits of different sources of feedback (the main focus of Section 4). I read the section you added about real human data. My concern is not that real human data has to be included in the toolkit, but that the design decisions you have made lead to a small sim-to-real gap. There is great value in simulators, but the validation is important.\"}", "{\"comment\": \"We thank the reviewer for their comments, and really appreciate the encouraging and productive discussion, which was crucial to improve our manuscript.\", \"we_would_like_to_answer_your_questions\": \"1. Thank you for pointing that out, we will add a legend/schematic, and will improve the spacing between subfigures , to improve readability of the plot. This plot shows the final episode reward/success rates for all evaluated environments across the investigated feedback types and baseline methods (with the colored areas indicating min./max. performance of different seeds)\\n2. Yes! We plan to contribute the code as a lightweight library for future research. As we outline in the future work section, we see multiple research opportunities and want to enable future research in this area by open-sourcing the code.\\n3. Thank you for the suggestion! 
Enabling the integration of feedback from diverse human annotation sources is indeed one of our prime motivations. We see this work on reward models as one key component of AI systems for multi-type feedback (another being user interfaces). Multiple feedback types open up the possibility to learn from the most efficient, suitable, and informative feedback. We therefore plan to ensure interoperability of our library with new and existing systems for the collection and processing of human feedback annotation data. We will gladly add a discussion of these points in our final revision.\"}", "{\"comment\": \"We thank the reviewer for their comments and helpful suggestions. We have summarized the steps for a revised version in a summary comment. We still want to answer the points raised in this review briefly:\\n\\n\\u201cIt would have been better to limit the feedback combination methods as a tool for proof of concept showing how the toolkit can be used.\\u201d\\n-> We acknowledge this as a very fair comment and agree with this statement; we have thus adapted our stated contribution to mark it as a proof of concept and have, in turn, extended the analysis of reward functions in section 4\\n\\n\\u201cthe absence of a study to validate the toolkit against human feedback makes it impossible to validate if this is a function of the feedback or the choices they have made in how the feedback is implemented.\\u201d\\n-> An extension of our approach by fitting the characteristics/noise levels with human data is an intriguing future expansion of our work, and we have noted this in future work. 
However, we would argue that our study still provides novel insights beyond the toolkit, as the utility of multi-type feedback has not been shown before on a conceptual or empirical level.\\n\\n\\u201cOnly one environment is evaluated; therefore, it is impossible to assess how general the proposed toolkit and feedback simulation method is.\\u201d\\n-> We have added additional results for Highway-Env and Metaworld\\n\\n\\u201cThe first sentence sounds like there was no online reward model adaptation.\\u201d\\n-> We have fixed this inconsistency in the text and now correctly refer only to offline training\\n\\n\\u201cthe text in the main body does not mention that the reward model correlations for other tasks beyond half cheetah are weaker with different patterns among the feedback types.\\u201d\\n-> We have significantly expanded on the discussion of correlation and additional investigations of the reward function behavior\\n\\n\\u201call feedback types individually can learn reward functions that closely match the ground-truth reward.\\u201d\\n-> We have adapted this claim and now give more detailed conclusions and results\\n\\n\\u201c.no feedback type is more or less robust to noise.\\u201d\\n-> Again, we have extended this discussion with updated results and figures. Importantly, we want to note that all feedback types behave relatively well with low noise levels and find that reward modeling performance (w.r.t. the reward function) does not necessarily translate to downstream RL performance.\\n\\n\\u201cSection 5 opens by stating that different feedback types struggle in different scenarios.\\u201d\\n-> We sharpened this wording and acknowledge that it was vague in the previous version\\n\\nFormatting Issues\\n-> We carefully went through the raised issues and tried to fix them for the revision\"}", "{\"title\": \"Response to Authors Rebuttal\", \"comment\": \"Thank you for your responses and the work you have done to update the paper. 
The extent to which the benchmark transfers to feedback from real humans is a big concern for me. In the absence of results validating that learnings can transfer, my worry is that people will use this tool to reach conclusions and develop algorithms that have a massive sim-to-real gap. Therefore, I am not able to change my score.\"}" ] }
9IMQJ8HmIq
Dual-cycle Consistency Learning for Weakly Supervised Phrase Grounding
[ "Pengyue Lin", "Ruifan Li" ]
Weakly supervised phrase grounding (WSPG) aims to localize objects referred to by phrases without region-level annotations. The state-of-the-art methods use vision-language pre-trained (VLP) models to build pseudo labels. However, their low quality could result in the ineffectiveness of the subsequent learning. In this paper, we propose a novel WSPG framework, Dual-cycle Consistency Learning (DCL). Firstly, we propose a vision-modal cycle consistency to localize the referred objects and reconstruct the pseudo labels. To provide conditional guidance, we propose visual prompt engineering to generate marks for input images. To further avoid localizing randomly, we design a confidence-based regularization to filter out redundant information at the image and pixel levels. Secondly, we propose a language-modal cycle consistency to correctly recognize the referred objects. To correct their positions, we provide phrase-related boxes as supervision for further learning. Extensive experiments on benchmark datasets show the effectiveness of DCL, as well as its excellent compatibility with various VLP models. The source code will be available on GitHub after the double-blind phase.
[ "Weakly supervised phrase grounding", "visual grounding", "visual consistency learning", "textual consistency learning" ]
https://openreview.net/pdf?id=9IMQJ8HmIq
https://openreview.net/forum?id=9IMQJ8HmIq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wFXbhqZJ3f", "sqaY1MErb1", "s5JaRw8jo5", "cvgeIpmBS5", "bhZUk4DY6j", "Tl4OqbbnPl" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730644534790, 1730273016393, 1730629173458, 1731452820617, 1730733998414, 1730606966627 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1594/Reviewer_VMCv" ], [ "ICLR.cc/2025/Conference/Submission1594/Reviewer_UgBx" ], [ "ICLR.cc/2025/Conference/Submission1594/Reviewer_NUmi" ], [ "ICLR.cc/2025/Conference/Submission1594/Authors" ], [ "ICLR.cc/2025/Conference/Submission1594/Reviewer_G5SS" ], [ "ICLR.cc/2025/Conference/Submission1594/Reviewer_MRKd" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the challenge of Weakly Supervised Phrase Grounding (WSPG) by introducing a novel framework called Dual-cycle Consistency Learning (DCL). DCL enhances pseudo label reconstruction through vision-modal cycle consistency and employs visual prompt engineering for improved guidance. Additionally, it integrates language-modal cycle consistency to accurately identify referred objects and strengthens localization with phrase-related supervision. Experimental results demonstrate DCL's effectiveness and its compatibility with various vision-language pre-trained models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed Dual-cycle Consistency Learning strategy is interesting.\\n\\n2. Extensive experiments validate its effectiveness and compatibility. \\n\\n3. Ablation studies highlight the contributions of various modules, while qualitative results further illustrate their functionality.\", \"weaknesses\": \"1. The writing is difficult to follow and requires further refinement.\\n\\n2. Figure 2 is also confusing, particularly in illustrating the overall pipeline and the relationships among different modules.\\n\\n3. 
The paper states that pseudo labels serve as conditional guidance, providing not only supervision but also category-level details. What specifically do these category-level details refer to? Could the authors offer additional explanation or analysis?\\n\\n4. How does the vision-modal cycle consistency address potential incompleteness?\\n\\n5. In Lines 232\\u2013233, how is the bounding box of the pseudo label extracted? How does the highlighted area of the pseudo label indicate the confidence in the label's quality, especially considering the presence of small objects to be grounded?\\n\\n6. Can the method handle phrases that include spatial descriptions, such as \\\"the girl on the right\\\"? The language-modal cycle consistency mechanism seems to overlook this aspect.\\n\\n7. What are the trainable parameters and model size of the proposed method? How does this compare to previous approaches?\", \"questions\": \"See Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a Weakly Supervised Phrase Grounding (WSPG) framework, named Dual-cycle Consistency Learning (DCL). DCL utilizes vision-modal cycle consistency to localize objects and language-modal cycle consistency to correctly recognize objects. DCL utilizes various prompt engineering techniques to generate visual prompts based on pseudo labels. The effectiveness of the proposed method is demonstrated through comparison experiments and ablation studies.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper is well-organized, and the expression is clear, making it easy to follow.\\n2.\\tThe figures in the paper are clear, making it easy to understand the framework.\\n3.\\tThe related work section of this paper is comprehensive.\", \"weaknesses\": \"1.\\tAbout the training and inference cost. 
Table 11 in the supplementary material shows that DCL requires nearly twice the time of VPT to achieve comparable performance. For example, in the setting of \\u201cg + VGG + BLIP\\u201d, the IPS (image per GPU second) of DCL is 3.99 while the IPS of VPT is 8.77. Would it be possible to reduce the training and inference cost of DCL further?\\n2.\\tAbout the image-level confidence. In Equation 4, the authors use the proportion of the pseudo label\\u2019s area as the image-level confidence. It would be better if the authors could provide a detailed explanation to clarify the reason for this approach.\\n3.\\tAbout the vision-modal cycle consistency. In the vision-modal cycle, the authors utilize pseudo labels A as the constraints of HR. However, since pseudo label A contains noise (e.g., redundant information), how to ensure the quality of the generated HR?\\n4.\\tAbout the experiment settings. In Table 11 of the supplementary material, what does the \\\"Backbone\\\" column represent? For example, what is the meaning of the \\\"Stable diffusion + VGG + BLIP\\\" setting? Additionally, this paper lacks a detailed analysis of performance under different Backbone settings.\\n5.\\tAbout the recovery module. In Equations (1) and (2), the authors use Dgnd to denote both the grounding network and the recovery module. Are these two networks sharing weights, i.e. are they the same network? Additionally, why is it necessary to enhance the similarity between Hr and the pseudo labels A instead of directly using H?\", \"questions\": \"My questions are as described in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper points out the low quality of pseudo labels in weakly supervised phrase grounding methods that use VLP pre-trained models. To solve the problem, the paper proposes a dual-cycle consistency learning framework. 
Three types of low-quality pseudo labels are categorized: incompleteness, redundancy, and misrecognition. The vision-modal cycle consistency is proposed to prevent incompleteness and redundancy by localizing the referred object and reconstructing the pseudo labels.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The method seems correct\", \"The writing and organization seem clear\", \"SOTA results are achieved and good ablation studies\"], \"weaknesses\": [\"Despite the good analysis of three error types, the major issue is the lack of theoretical insights or validated experimental proofs.\", \"We don\\u2019t understand the statistics of these error types and how exactly the proposed modules tackle them. The effectiveness of the evaluations needs more statistical validation\", \"The novelty consists of integration of several engineering techniques. The idea of consistency learning is commonly used to reconstruct the pseudo labels or validate the pseudo labels. The exact reason why the proposed method works needs more investigation.\"], \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"In this paper, the authors propose a novel framework, Dual-cycle Consistency Learning (DCL) for WSPG. They propose a vision-modal cycle consistency to learn to ground the referred objects in the process of reconstructing the pseudo labels. The consistency prevents incompleteness and redundancy problems.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The structure is complete.\"], \"weaknesses\": \"Q1. The definition of \\\"pseudo labels\\\" in this paper is somewhat strange. 
According to the existing literature, pseudo labels should refer to text labels.\\n\\nQ2. There are many VLP-based grounding works that are not cited and discussed, such as CLIP-VG [1], CLIPREC [2], RefCLIP [3], etc. Similarly, there are many weakly supervised grounding efforts that are not cited and discussed, such as QueryMatch [4], PPT [5], etc.\\n\\nQ3. The \\\"double-loop consistent learning\\\" proposed in this paper is a bit far-fetched, and to some extent it does not implement cycle consistency, which is quite different from the traditional concept of consistency learning such as CyCo.\\n\\nQ4. Prompting has been widely used, and the prompt engineering proposed in this paper is not innovative.\\n\\n--\\n\\n[1] CLIP-VG: Self-paced Curriculum Adapting of CLIP for Visual Grounding. TMM 2023.\\n\\n[2] CLIPREC: Graph-Based Domain Adaptive Network for Zero-Shot Referring Expression Comprehension. TMM 2023.\\n\\n[3] Refclip: A universal teacher for weakly supervised referring expression comprehension. CVPR 2023.\\n\\n[4] QueryMatch: A Query-based Contrastive Learning Framework for Weakly Supervised Visual Grounding. MM 2024.\\n\\n[5] Part-Aware Prompt Tuning for Weakly Supervised Referring Expression Grounding. MMM 2024.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No Ethics Concerns.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the problem of weakly supervised phrase grounding, which aims to localize objects referred to by phrases without region-level annotations. As the auto-generated pseudo labels are usually of low quality, the authors propose a dual-cycle consistency learning (DCL) approach. In the proposed approach, the vision-modal cycle consistency is designed to localize the referred objects, whereas the language-modal cycle consistency is designed to recognize the referred objects. 
Experiments on multiple benchmark datasets show the effectiveness of the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The problem of low quality pseudo labels (incompleteness, redundancy, and misrecognition) in weakly supervised phrase grounding is well motivated.\\n2. The overall idea of two cycle consistencies seems reasonable.\", \"weaknesses\": \"The major weaknesses of this paper are its poor (terrible) writing quality and less convincing experimental results.\\n\\n1. The proposed algorithms look ad-hoc and lack solid technical contributions. While the high-level idea of the two cycle consistencies seems reasonable, it is unclear why and how the proposed algorithm can help address the low quality issues of auto-generated pseudo labels. More specifically, although three quality issues (incompleteness, redundancy, and misrecognition) of pseudo labels are listed in the introduction section, it's hard to see how they are addressed by the proposed algorithm. After reading the introduction and the method sections twice, I still find it difficult to understand the algorithm intuition. \\n\\n2. The method section needs significant writing improvement. It includes many details without high level intuition explained. The mathematical notations are terrible, often used without explanation, making it hard to follow. To list a few: \\na) L158: What are the dimensionalities of $\\mathcal{E}\\_{\\text{img}}(I)$ and $\\mathcal{E}\\_{\\text{txt}}(T)$? They have almost the same notations. But $\\mathcal{E}\\_{\\text{img}}(I)$ seems to keep the image shape $W \\times H$, whereas $\\mathcal{E}\\_{\\text{txt}}(T)$ is just a vector. \\nb) L194: How is the prompt function $\\mathcal{P}\\_{\\text{img}}(I,A)$ defined and implemented? \\nc) L203: In Eq. (3), what does $n$ (and $N$) mean? Is it the index for pixels or for training images? \\nd) L232: exact --> extract. 
How to extract a bounding box $B(A)$ from the pseudo label $A$? \\ne) L236: Eq. (4) is totally messed up. What does it mean to compute the max of a single value as in $\\text{max}(A(\\alpha, \\beta))$? And what exactly are the meanings of $\\alpha$ and $\\beta$? Are they pixel coordinates, or pixel values at a given position? As this equation is messed up and looks very ad-hoc, it's hard to understand this confidence-based regularization and vision-modal cycle consistency. \\nf) L240: I understand that $IC$ is a scalar. Is $PC$ also a scalar? Where have $\\alpha$ and $\\beta$ from Eq. (3) gone? \\ng) In both Eq. (5) and Eq. (6), $n$ and $N$ are used without definition. \\nh) L262 and L266: How do you find negative samples $T_N$? It seems that you only treat \\\"*image of colorful patches*\\\" as a negative sample? If this is true, it is too ad-hoc and of little help to contrastive learning. \\ni) L271: Primary object identified as \\\"the second recognized noun\\\" is too ad-hoc and may be wrong for many cases. \\nj) L292: \\\"we select the cluster whose semantic similarity is closest to ...\\\" From this description, it seems that you only select one cluster, but you ended up getting $K$ clusters. \\nk) L307: Eq. (9) is hard to follow, as $S_{text}$ and $S_{score}$ are not explained. Without being able to understand this equation, it's hard to understand language-modal cycle consistency.\\nl) L366: As image blur is used in the experiments, to make the paper self-contained, please briefly describe how it is computed. \\n\\n3. While the experimental results seem better than those of previous works, this reviewer found that for the ALBEF-based methods the paper missed a result from [1], also cited in the paper as APR (Zeng et al., 2024). 
APR [1] reported better results based on ALBEF than this paper: \\nALBEF (point accuracy): 75.04 (VG), 84.49 (Flickr), 69.26 (ReferIt) \\n[1] Investigating compositional challenges in vision-language models for visual grounding, CVPR 2024\", \"questions\": \"1. The writing of this paper needs significant improvement. There are many grammatical errors, unclear sentences, and unclear mathematical notations in the paper. The current writing quality is far below the acceptance bar of ICLR.\\n\\n2. The proposed algorithm looks ad-hoc, lacking clear intuition on how to address the low quality issues of pseudo labels. As a result, three low quality issues are listed in the introduction section, but without being explicitly addressed. \\n\\n3. The experimental results miss important baselines, making the results questionable. \\n\\n4. As a general comment, the purpose of weakly supervised phrase grounding is to be able to leverage a larger scale of training data. The paper should have a comparison with SOTA supervised phrase grounding algorithms and discuss whether it is possible, and under what kind of conditions, e.g. using more training data, that weakly supervised phrase grounding can outperform supervised algorithms.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
9ILaEDrwWY
Shapley Is Not All You Need: Sobol's Total Indices for Feature Selection and Performance Loss Estimation
[ "Zonghan Zhang", "Zhiqian Chen" ]
The selection of pertinent features constitutes a pivotal step in developing interpretable machine learning models, particularly when handling high-dimensional data, where the combinatorial interactions among features must be considered. The Shapley value, a concept originating from cooperative game theory, has gained recognition as a method for quantifying feature importance. However, the Shapley value often fails to precisely reflect the variance reduction that occurs when a feature is removed from the model. As the number of features increases, these challenges are further exacerbated by the high computational complexity of computing the exact Shapley value. Additionally, the common approximation techniques used to calculate the Shapley value are not model-agnostic. To address these gaps, we propose utilizing Sobol's total indices, a variance-based sensitivity analysis technique, as a more efficient and robust alternative to Shapley values. In this paper, we present both theoretical and empirical studies comparing these two methods. Sobol's total indices provide several key advantages. They capture both main effects and interactions, offering a more accurate importance measure than Shapley values. Their computation scales linearly with the number of features, making them suitable for high-dimensional problems. Additionally, they are derived from the data itself, ensuring complete model-agnosticism. Experiments on synthetic and real-world datasets demonstrate that feature selection using Sobol's total indices achieves better predictive performance than Shapley-based selection while requiring significantly less computational time. Our findings suggest that Sobol's total indices are a promising alternative to Shapley values, offering greater computational efficiency, comprehensiveness in accounting for interactions, and robustness in estimating variance. This represents a favorable substitute, particularly for high-dimensional feature selection.
[ "Feature selection", "Shapley values", "Sobol indices", "global sensitivity analysis", "machine learning" ]
https://openreview.net/pdf?id=9ILaEDrwWY
https://openreview.net/forum?id=9ILaEDrwWY
ICLR.cc/2025/Conference
2025
{ "note_id": [ "q6GDgtC5jm", "kpfDsvV4KF", "cNcmhRovYT", "DnmHjQrLp8", "8pVfwVPhCG", "2uQ6zC8PPN" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1732419874454, 1730451693640, 1730164130143, 1730884025830, 1730842463730, 1730530483165 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8533/Authors" ], [ "ICLR.cc/2025/Conference/Submission8533/Reviewer_cM5K" ], [ "ICLR.cc/2025/Conference/Submission8533/Reviewer_zHjh" ], [ "ICLR.cc/2025/Conference/Submission8533/Reviewer_JcmM" ], [ "ICLR.cc/2025/Conference/Submission8533/Reviewer_vLGY" ], [ "ICLR.cc/2025/Conference/Submission8533/Reviewer_saRr" ] ], "structured_content_str": [ "{\"summary\": \"In this paper, the authors point out the drawbacks of the Shapley value and claim the superiority of Sobol's total index. Specifically, the authors pointed out that the Shapley value suffers from high computational complexity and lacks an appropriate way to account for the impact of feature removal. Additionally, they noted that some Shapley value implementations are not model-agnostic. The authors argued that Sobol's total index is a preferable alternative that does not have these limitations.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A strength of this study is that it demonstrates the potential effectiveness of Sobol's total index as a feature attribution method. Sobol's total index is a simple approach, and, as the authors claim, it is highly advantageous in terms of computational complexity. 
In situations where Sobol's total index is effective, this advantage is likely to be significant.\", \"weaknesses\": \"The weakness of this study is that it only contrasts the Shapley value with Sobol's total index even though many feature attribution methods have already been proposed. Furthermore, as various studies indicate, feature attribution methods do not estimate a single, unified concept of \\\"feature importance\\\"; rather, each method defines and measures \\\"feature importance\\\" differently. Because these definitions vary, the usefulness of each method depends heavily on its targeted application. A method may be unsuitable for one purpose yet useful for another.\\n\\nThis study, however, does not account for the existing diverse literature of feature attribution methods. Instead, it narrowly compares the Shapley value and Sobol's total index from the authors' specific perspective of utility. While the Shapley value is indeed a popular method and identifying its limitations is valuable, this does not necessarily mean that Sobol's total index is inherently useful. To support the utility of Sobol's total index, it is essential to compare it with other major feature attribution methods from multiple perspectives. The evaluation in the current study remains highly limited.\", \"questions\": \"What are the advantages of Sobol's total index over other major feature attribution methods besides the Shapley value?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper compares two classical measures, Shapley values and Sobol\\u2019s total indices, in the context of feature selection. 
It shows the superiority of Sobol's total indices over Shapley values for feature selection.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The experimental comparisons showed that Sobol\\u2019s total indices might be better than Shapley values for feature selection.\", \"weaknesses\": \"The motivation of this paper is unclear: the feature selection has important applications when handling high-dimensional data, but we do not use the Shapley value for such high-dimensional data due to its time complexity. I do not understand why this paper chooses the Shapley values for feature selection despite the fact that there are so many faster alternative methods as discussed in Section 5. The motivation for handling low-dimensional data is not described. If the data is low-dimensional, we can directly compute Equation (1) instead of estimating it with the Sobol's total indices.\\n\\nIt should also be noted that this paper is not the first paper to compare the Shapley values and the Sobol's total indices, and this paper should discuss what is already known and what new insights are found in this paper. Examples of existing work include:\\n+ B. Iooss and C. Prieur, Shapley effects for sensitivity analysis with correlated inputs: comparisons with Sobol' indices, numerical estimation and applications, International Journal for Uncertainty Quantification, 2019.\\n+ B. Vuillod et al., A comparison between Sobol's indices and Shapley's effect for global sensitivity analysis of systems with independent input variables, Reliability Engineering & System Safety, 2023.\\n\\nThe proof for Inequality (9) is missing. Moreover, this inequality does not hold, because we have $\\Delta(x_i)=0$, $\\phi_i=0.25$, and $S_{T_i}=0$ according to the numbers in Table 3 for the Synthetic Correlated Dataset.\\n\\nThe discussion on the Shapley values on the synthetic datasets (lines 351-387) is inappropriate. 
The Shapley values are not designed to estimate the performance loss, and therefore the obtained values of 0.25 are perfectly fine.\\n\\nThe experimental results on the real datasets are not convincing enough to support the claim that the Sobol's indices are better. I do not see a significant difference between the prediction performances in Fig. 3, taking into account the confidence intervals. In addition, Tables 5-7 do not contain any information on the confidence intervals.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes using Sobol's total indices instead of Shapley values for feature selection in machine learning models for three reasons:\\n\\nSobol's total indices capture both main effects and interactions, offering a more accurate importance measure than Shapley values. \\n\\nSobol's total indices are more computationally efficient as the computation scales linearly with the number of features.\\n\\nSobol's total indices are fully derived from the data itself.\\n\\nThis paper compares Shapley values and Sobol\\u2019s total indices for feature selection in both synthetic and real-world datasets to showcase Sobol\\u2019s total indices' advantages in feature selection.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The diagram (fig1), together with the argument in sections 1 and 2, gives readers a clear sense/insight into how these two metrics work and how they differ.\\n\\n2. The code is lightly annotated and easy to understand.\\n\\n3. The comparison between the two metrics is thorough, ranging across all aspects, like definition, computation, interpretation, practical usage, etc.\", \"weaknesses\": \"1. One minor suggestion on notation: for variables with bar or hat on the top, maybe it is better to use \\\\widehat or \\\\widebar accordingly based on the width of the variable. 
For example: accuracy in eq 7 and \\delta(x_i) in lines 157-158.\\n\\n2. Eq 8 is a bit misleading. I believe you are suggesting the difference in terms of R^2 and the difference in terms of accuracy, right? Maybe writing \\delta = R^2 or \\delta = accuracy is not the best way.\\n\\n3. Based on your code (function sobol_total_indices in method.py), it seems that the way you calculate (3) is by grouping y based on each unique value of x for each feature of interest. Should you treat continuous and categorical variables the same? If feature x is continuous, group_y will likely only have length 1 or a very small size for each unique x value. Wouldn't this be an issue? \\n\\n4. It would be much clearer if the authors could provide a detailed procedure (maybe an algorithm) on how \\\\\n (1) experiments are conducted.\\\\\n (2) synthetic datasets are generated.\\n\\n5. I don't think this is a fair comparison between Shapley and Sobol. Shapley considers the average effect of one feature under all possible model selections while Sobol only measures the difference between the full model and the leave-one-feature-out model. They are not essentially taking care of the same thing. I am not sure about this part. Please correct me if I am wrong. \\n\\n6. My last question is about the experiment design in general. From my perspective, there is a bit of disconnection between your claim and the experiments you do. \\n\\n(1) For Table 6. I believe the concrete dataset is well-known for nonlinear relationships. Derived features like water-to-cement ratio and water-to-aggregates ratio are more important than those raw features. Also, since the paper emphasizes performance on high-dimensional datasets, wouldn't it be more reasonable to generate more interaction features first and then do the same analysis?\\n\\n(2) Similar to my previous point, I don't think any of the realistic datasets you include are high-dimensional. 
I believe it would be far more convincing if there were high-dimensional examples. \\n\\n(3) Quote from Section 6: \\n\\n\\\"While Sobol\\u2019s total indices work better for feature selection tasks, they might not interpret a machine-learning model as well as the Shapley values do. Therefore, based on our findings, we conclude that Sobol\\u2019s total indices are better suited for feature selection in machine learning applications.\\\" \\n\\nI agree with the interpretation part, but the second half might need more justification. Since you are already doing feature selection, and more specifically backward feature selection on linear regression and decision trees, would it be better to include AIC, BIC (for LR), or Mean Decrease in Impurity (for trees) in the comparison and see if Sobol can still outperform these classic metrics?\", \"questions\": \"Please see the Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies Sobol' total indices as an alternative to Shapley values for feature selection in high-dimensional data. Through arguments and empirical studies, it highlights that Sobol\\u2019 indices provide a more accurate measure of feature importance, particularly with highly correlated features or large interaction effects, via the inclusion of their variance contributions. It is mentioned that interaction variances cannot be fully captured by Shapley values, which tend to average over all features, providing a more balanced view of how variance is shared amongst features. 
The authors conduct empirical experiments using both simulated and real datasets to determine that Sobol\\u2019 indices offer better computational efficiency and improved performance in feature selection compared to Shapley values, which are widely used today.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"First, the paper is overall well written and straightforward to follow, with clear explanations and a bolded logical structure that enables easier reading. Second, the authors provide a number of different experiments comparing Sobol\\u2019 indices to Shapley values across a range of datasets, from simulations of known correlation structures (aligning with their argument that capturing interactive variance losses via Sobol\\u2019 indices is stronger than with Shapley) to studies on runtimes. These studies thus allow the reader to observe the performance of each method in different contexts, which to the best of my knowledge has not been empirically studied in much detail before. Last, the visualizations, including those of runtime comparisons and accuracy trends, are clear and effective in illustrating these comparisons.\", \"weaknesses\": \"There are a number of flaws in the presented methodology, some of which may stem from my own confusion regarding certain definitions and paragraphs.\\nMy main critique is that I am struggling to see the novelty of the results in this paper beyond the empirical studies, which largely reflect what has been theoretically proved in the literature over the past decade or so. Specifically, it is well known that Shapley values are derived from Sobol\\u2019 indices such that Sobol\\u2019 indices bound Shapley values, which can indeed be seen by definition and variance inclusion. Both metrics measure different but related aspects of features, but this has been well studied in the literature, and the idea of using Sobol\\u2019 indices is not new by construction. 
\\nSecond, there appears to be some confusion in newly defining accuracy and other ML terminologies as they relate to feature importance. For example, from the sentence beginning \\u201cThe actual performance loss for regression models and classification models are scaled differently\\u201d: Regression models typically output continuous values, whereas classification models classify discrete classes with assigned probabilities; thus it is to be expected that performance losses are scaled differently with respect to the underlying probability measure (which may include the empirical one, to align with model generality) encoded into the loss. Even between two different regression models, the performance losses can be different. I am thus a little confused as to why this paragraph is included in the paper and what it is trying to convey to the reader, but perhaps I am missing something here. The claim that these models require different scaling adjustments is, to me, not sufficiently justified, and thus this section would benefit from a clearer rationale for why scaling adjustments are necessary in this study. \\nThird, the authors argue against the need for the efficiency axiom in Sobol\\u2019 indices, where Shapley values conform to efficiency (and other nice theoretical properties, including symmetry and additivity) not just to capture enough of the variance but rather to provide a fair attribution of feature contributions across all subsets of features. The authors argue that efficiency is not needed when computing feature importance due to the focus on evaluating how much a feature contributes to overall performance, as opposed to dividing predictive power equitably across features. However, I believe this stems from the goal of the practitioner and the state of the training data, as opposed to the philosophical definition of feature importance. An example here is whether to use \\ell_1 or \\ell_2 penalizations on a loss function to predict feature weights. 
For highly correlated features, one may prefer the \\ell_2 penalty to balance all features with similar weights reflecting their correlations, but when there are more features than data points, one may prefer the \\ell_1 penalty to zero out some correlated features to ensure computational robustness, effectively providing importance via sparsity. I therefore disagree with the authors\\u2019 view on not needing efficiency, as I believe it is highly problem-dependent and the choice of the practitioner (who may not have this choice, for example, in the case where there are more features than data points). \\nFourth, the simulated correlated and XOR datasets, while powerful, only really test basic interactions. Such experiments would be stronger with additional datasets that capture more complex or higher-order interactions that align with the authors\\u2019 claims about Sobol indices\\u2019 interaction-handling capabilities.\\nLast, the authors claim that the efficiency of computing the Sobol\\u2019 indices is much higher than that of Shapley values and provide nice empirical comparisons. However, this section does not include comparisons with the authors\\u2019 aforementioned Shapley approximations (such as Kernel SHAP), which are normally used to handle high dimensionality in real datasets. Further, the claim that the authors make on line 210, that such methods including Kernel SHAP and Tree SHAP are often model-dependent, is incorrect and misleading. 
Kernel SHAP is nonparametric and model-agnostic.\", \"questions\": \"1) Could you please describe the paragraph beginning with \\u201cThe actual performance loss for regression models and classification models are scaled differently\\u201d in more detail, and explain its utility in the broader scope of the paper?\\n2) Could you please give a more balanced overview of Shapley vs. Sobol in terms of the theoretical and/or empirical properties held by one and not the other, or by both at the same time?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper tries to address feature selection in interpretable machine learning, with a focus on high-dimensional data. It advocates for Sobol\\u2019s total indices, arguing that they more accurately capture the variance reduction associated with feature removal while being model-agnostic and computationally efficient, scaling linearly with the number of dimensions.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper raises a good point regarding variable selection, noting that Sobol\\u2019s indices offer a more accurate assessment of variable reduction compared to Shapley values.\\n2. The authors offer clear and intuitive explanations of the phenomena discussed.\\n3. The running time is an advantage compared to Shapley values, especially in high dimensions.\", \"weaknesses\": \"1. The table oversimplifies the limitations of Shapley-value-based methods. Model-dependent approaches like TreeSHAP do not suffer from factorial growth in time complexity, and methods such as SAGE [1] are model-agnostic. The table should present a more balanced view of the strengths and weaknesses of different Shapley-based methods rather than exclusively highlighting their disadvantages.\\n2. 
It would be better to explicitly mention how the Shapley values are calculated, as there are many methods available. Current popular methods like SAGE are significantly more efficient than the method used in the empirical experiments, which required 9111.088 seconds. \\n3. It would be better to highlight Tables 5-7 or include plots like Figs. 3 and 4 for a more straightforward comparison. Notably, in Table 7, Shapley values begin to outperform Sobol\\u2019s indices overall when the feature number reaches 9, suggesting Shapley values may have better selection ability on this data. \\n4. The related work section in Section 5 is somewhat confusing, as it deviates from the central discussion. A more cohesive organization could improve the article\\u2019s flow.\", \"editing_issues\": \"1. Broken line at line 378, page 8\\n2. Broken line at line 432, page 9 \\n3. Fig. 4 touches the left border\\n\\n[1]. Covert, Ian, Scott M. Lundberg, and Su-In Lee. \\\"Understanding global feature contributions with additive importance measures.\\\" Advances in Neural Information Processing Systems 33 (2020): 17212-17223.\", \"questions\": \"1. Eq (8) doesn\\u2019t seem to be well defined given the definitions in the preceding paragraph, and I couldn\\u2019t find a proof for Eq (9); could the authors clarify?\\n2. Eq (10) doesn\\u2019t seem to hold based on the definition of Eq (3); are the authors trying to use $S_{rT_3}$ defined in line 264?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}